The abilities of future super-smart robots may still be hotly debated, but one thing seems certain: they won’t look like ethnic minorities. A new study has revealed that artificial intelligence is almost always portrayed with the characteristics of white Caucasians in popular culture. And according to the authors of the study, this increases the risk of AI research becoming ever more racially biased, with algorithms reflecting a whites-only world.

Evidence of racially biased AI has been growing for some time. Most concern surrounds facial recognition systems, which use AI methods to train computers to identify individuals. Yet despite being increasingly used for law enforcement, research has shown that commercial AI systems are startlingly prone to misidentify people from ethnic minority backgrounds. In <a href="https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212" target="_blank">a 2018 study</a> by scientists at the Massachusetts Institute of Technology, AI systems failed to identify even the gender of 1 in 3 dark-skinned women, compared with just 1 in 100 light-skinned men. Cases of BAME individuals being wrongly accused of crimes on the evidence of facial recognition algorithms are <a href="https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html" target="_blank">also starting to emerge</a>.

Now researchers at the University of Cambridge, England, are warning of further dangers if the association of AI with whiteness goes unchallenged. “Given that society has, for centuries, promoted the association of intelligence with White Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a White machine,” said Dr Kanta Dihal, co-author of the study in the journal <em>Philosophy & Technology</em>.

Portraying AI as not only white but smarter than humans “could have dangerous consequences for humans that are not,” she said. Dr Dihal pointed out that celebrated examples of AI in movies from <em>Terminator</em> to <em>Ex Machina</em> are all played by white actors or portrayed as white on-screen. Even AI characters in slave-like roles, such as the rebellious replicants in <em>Blade Runner</em>, are portrayed as white.

“AI is often depicted as outsmarting and surpassing humanity,” said Dr Dihal. “White culture can’t imagine being taken over by superior beings resembling races it has historically framed as inferior.”

Together with Dr Stephen Cave of the Leverhulme Centre for the Future of Intelligence (CFI), Dr Dihal found that the whiteness of AI is not only perpetuated through imagery in popular culture. “One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard white middle-class English,” said Dr Dihal. “Ideas of adding Black dialects have been dismissed as too controversial or outside the target market.”

According to Dr Dihal, the exclusively white image of AI could also affect recruitment into the field. With AI increasingly used in applications such as recruitment and criminal justice, this could be “highly consequential”, she said. “If the developer demographic does not diversify, AI stands to exacerbate racial inequality.”

Such concerns are backed by evidence of bias in algorithms used to assess criminal defendants in the United States.
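Disparities of this kind are typically measured by comparing error rates between groups: how often a tool wrongly flags people who do not go on to re-offend, and how often it wrongly clears those who do. As a minimal sketch only (the figures below are invented for illustration, not ProPublica’s data, and the helper functions are my own), such a check might look like this in Python:

```python
# Sketch of a per-group error-rate check, similar in spirit to
# ProPublica's COMPAS analysis. The records below are invented.

# Each record: (group, actually_reoffended, predicted_high_risk)
records = [
    ("black", False, True), ("black", False, True), ("black", True, True),
    ("black", False, False), ("white", False, False), ("white", False, True),
    ("white", True, False), ("white", True, False),
]

def false_positive_rate(rows):
    """Share of non-reoffenders wrongly flagged as high risk."""
    negatives = [r for r in rows if not r[1]]
    if not negatives:
        return float("nan")
    return sum(1 for r in negatives if r[2]) / len(negatives)

def false_negative_rate(rows):
    """Share of reoffenders wrongly classed as low risk."""
    positives = [r for r in rows if r[1]]
    if not positives:
        return float("nan")
    return sum(1 for r in positives if not r[2]) / len(positives)

for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(group,
          f"false positive rate: {false_positive_rate(rows):.0%}",
          f"false negative rate: {false_negative_rate(rows):.0%}")
```

The real analysis ran over thousands of defendants’ records, but the comparison at its heart is this simple one: the same error rate, computed separately for each group.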
One study <a href="https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm" target="_blank">by investigators at ProPublica</a> found that black defendants were twice as likely as their white counterparts to be wrongly classified as at high risk of re-offending. White defendants, in contrast, were twice as likely to be wrongly classified as posing a low risk.

The issue of biased AI is increasingly being recognised by technology companies. <a href="https://www.reuters.com/article/us-microsoft-ai-idUSKCN1RS2FV" target="_blank">Microsoft has admitted refusing to supply facial recognition systems to some clients</a> because of fears the technology would be biased against minorities.

Increasing efforts are also being made to fix the problem. In the case of facial recognition systems, this means finding better AI training methods. These typically involve getting computers to classify thousands of publicly available images, each tagged with the ethnicity, gender and other defining characteristics of the person shown.

However, according to <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9130131" target="_blank">a new study by UAE researchers</a>, women and ethnic minorities are typically under-represented in such collections of images, leading to biased outcomes. They concluded that the best hope for eliminating bias lies in using image databases for specific ethnicities, together with better algorithms.

<em>Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK</em>