AI Bias Amplification
Concerns & Risks · Provisional
Concern that AI systems trained on non-representative data reproduce and amplify existing social inequities, particularly affecting minority communities through biased feedback, culturally insensitive outputs, and failures of recognition.
Evidence
“But even, you know, so, and then there's another problem with it on a broader sense where I don't trust, because I was reading research that the UN is really pushing AI in the global south for teaching and learning to create learning management systems and to give students feedback. I don't trust that because the data set that they're using is so westernized. It seems like another version of colonization and cultural, how do I say that?”
“Like making culture homogeneous, I guess. So those are some of the things I think about when I think about trust.”
“I would just say it's about our own bias. Like, we live in a world that is not fair, it's not just, it's not equitable. And how is AI amplifying that in the world? That would be my only concern. Without checks, is it just, like, I've read a few studies where students were given feedback based on their writing and minority students were given less rigorous feedback from AI than Caucasian students. And then AI didn't recognize different dialects of English except for proper English. And then facial recognition didn't recognize Black students as human when they went in to be recognized for a test that was proctored, it didn't recognize their faces.”
“So that's the kind of thing that concerns me about AI. But I think as long as everyone has a seat at the design table and as long as we have this type of research and feedback from minority groups is used, I think we can mitigate the risk of bias.”