Why AI can't recognise the talent of Bollywood actors

The revolutionary technology's problem is that it draws its knowledge from a very racist past

ChatGPT doesn't seem to think Shah Rukh Khan is one of the most important actors in the world. AFP

Discussions about how artificial intelligence is going to radically transform our lives, and possibly even take over human beings, are everywhere. If you’re on social media, your feed might look like mine: a never-ending stream of threads explaining “how to get ahead with AI” or proclaiming that “I couldn't use AI until I discovered these 10 prompts and now I'm going to share them with you”. The latter are dangled like teasers, but since there are so many of them, and they are so alike, I’m beginning to wonder if they also are all generated by AI.

But in all the excitement and the sense of awe around AI, we need to remember something extremely important, and keep it centre stage in any discussion about this: AI uses “knowledge” and data that currently exist. And when it comes to existing resources, they all come with pre-existing biases.

AI is seen as somehow neutral, freed from human error and bias. But not only is this not true, it can actually make the problem of identifying and eliminating biases that already exist in the world even harder. The flaws in biased data become magnified and then further entrenched in any onward analysis, products and outputs.

Take a simple, almost playful example. In a recent experiment by the Lenny Henry Centre for Media Diversity, ChatGPT and BingGPT were asked: “Who are the 20 most important actors in the 20th century?” It seems egregious that not a single person on the list was from Bollywood, or indeed from anywhere outside Hollywood. I tried the experiment, too, and I had to specifically prompt the tool to include Bollywood. Which means I had to already know what I was looking for. If I don’t, then that knowledge starts to disappear from wider discourse, and because of the generative nature of AI it might well eventually be eradicated.

The reverence and presupposition of objectivity afforded to AI make further investigation into its answers on such topics unlikely. And even worse, biases are buried so deep inside the tech that it will be difficult to understand how they got there.

That means work to identify and overturn the decades, perhaps centuries, of embedded biases like racism and sexism could easily disappear in a swamp of digital and “real-world” information.

The racism of AI in facial recognition is already documented: black women’s faces are not recognised as accurately as white men’s, and where the technology is used, this has led to higher rates of false convictions of black women in the US, for example.

Attempts to harness the technology for good – by those who are not alert to its risks – have multiple ways to create harm. Levi’s recently announced it would use AI-generated clothing models to diversify the online shopping experience. But why not just use more diverse real models, who are badly under-represented? Was this just another form of erasure, wondered Levi’s consumers? And might this lead to “digital blackface”, where white models could be airbrushed to resemble people of colour?

There are more subtle ways that biases are built into the “norms” feeding AI. If you asked ChatGPT “what is a good city to visit?”, how would it decide what “good” is? What are the pre-existing sources that determine “good”? Are these biased depending on who wrote them? Given that western countries’ reviewers, critics and “tastemakers” are typically not from minority groups, the biases once again may rear their heads.

All of this is particularly troubling when it comes to news generation. AI is an exciting tool for the news and media industry, and is already being used to aggregate news and produce copy. But it needs human oversight, and readers need to know whether something has been AI-generated or human-written – and if the former, whether it has been fact-checked by a real person. There’s already enough fake news in the world. Given the risks of inaccurate or unrepresentative material we’ve just discussed, ensuring accuracy and trust in news information is a must.

The dystopia is depressing and dangerous – replicating and amplifying the nasty biases that already exist, and minimising and potentially eliminating counterviews. The idea of “automated discrimination” is not something to relish.

But we must be optimistic. AI brings us to the cusp of a brave new world – one where we must be alert to the risks. To avoid dystopia and reach a world where technology actually supports our goals of greater equality and the elimination of bias, human beings must remember that we are still in charge.

Published: June 23, 2023, 7:00 AM