Artificial intelligence systems are on track to become powerful enough to “kill many humans” within just two years, an adviser to UK Prime Minister <a href="https://www.thenationalnews.com/world/uk-news/2023/06/05/rishi-sunak-announces-two-more-barges-to-house-1000-migrants/" target="_blank">Rishi Sunak</a> has warned.

Matt Clifford said policymakers have only a narrow window in which to bring AI systems under control. Without urgent action, the threats posed by <a href="https://www.thenationalnews.com/world/uk-news/2023/06/05/british-airways-and-boots-hit-by-cyberattack/" target="_blank">cyberattacks</a> and the creation of bioweapons could grow exponentially in the coming years, he said.

Mr Clifford’s chilling comments came hours before the Prime Minister jetted to Washington, where he is expected to press the case for co-operation on addressing AI concerns.

Mr Clifford, who is helping Mr Sunak establish an <a href="https://www.thenationalnews.com/climate/2023/05/18/how-ai-can-help-produce-a-new-crop-of-emirati-farmers/" target="_blank">AI</a> taskforce, said that, as at the start of the Covid-19 pandemic, it is easy for people to dismiss warnings about things they are unfamiliar with. He pointed to a letter signed by 350 AI experts last week forecasting the long-term potential for the technology to lead to the extinction of humans.

Worries are growing because there has been a “pretty striking” rate of progress over the past few years, he said.

“These systems are getting more and more capable at an ever-increasing rate and if we don’t start to think about now how to regulate and how to think about safety then in two years’ time we will be finding that we have systems that are very powerful indeed,” Mr Clifford told TalkTV.

Mr Clifford said that if AI is created to be more intelligent than humans and cannot be controlled, there would be “all sorts of risks” to human safety. He said the near-term risks alone are “pretty scary”, pointing to technology that can instigate large-scale cyberattacks.

“You can have, really very dangerous threats to humans that could kill many humans, not all humans, simply from where we would expect models to be in two years’ time.”

He said it was crucial for policymakers to work out how to control such models “because right now we don’t”. Regulation is needed on a global scale because rules that apply only nationally will not cut it, he insisted.

Mr Clifford said that AI, if harnessed in the right way, could be a force for good, and that this message should be conveyed to countries such as Russia and <a href="https://www.thenationalnews.com/world/us-news/2023/02/04/chinese-balloon-over-us-might-be-guided-by-ai-says-expert/" target="_blank">China</a> in the hope of establishing unity.

“You can imagine AI curing diseases, making the economy more productive, helping us get to a carbon-neutral economy,” he said.

The UK government’s Foundation Model Taskforce is investigating AI language models such as <a href="https://www.thenationalnews.com/business/technology/2023/01/25/chatgpt-what-why-controversial/" target="_blank">ChatGPT</a> and <a href="https://www.thenationalnews.com/business/2023/03/22/bard-ai-google/" target="_blank">Google Bard</a>, the conversational AI chat service.

The letter signed by the 350 AI experts said the risks posed by AI should be treated with the same seriousness as pandemics or nuclear war.
Senior bosses at companies such as Google DeepMind and Anthropic signed the letter, along with the so-called “godfather of AI”, Geoffrey Hinton.

Mr Hinton stepped down from his role at Google earlier this month, saying that in the wrong hands <a href="https://www.thenationalnews.com/business/technology/2023/05/02/geoffrey-hinton-the-godfather-of-ai-quits-google-and-sounds-warning/" target="_blank">AI could be used to harm people and spell the end of humanity</a>.

Mr Sunak last month held a meeting with tech leaders to discuss potentially “existential threats” posed by AI. During his US visit this week, he is expected to lobby US President Joe Biden for the UK to take on a leading role in AI development and to suggest the idea of a global regulatory body, possibly modelled on the <a href="https://www.thenationalnews.com/world/uk-news/2023/06/05/iran-nuclear-monitoring/" target="_blank">International Atomic Energy Agency</a>.

Asked about Mr Clifford’s warning on Tuesday, the Prime Minister’s official spokesman told reporters: “We are not complacent about the risks of AI. Equally it does present significant opportunities for the people of the UK.”

Asked whether the Prime Minister intended to broach the subject of an international regulatory group in his conversations with Mr Biden, the spokesman said: “The Prime Minister has said he wants to talk about AI with the President.”