Nuclear Threat Initiative says urgent steps needed to manage AI dangers

Report calls on governments to take measures to prevent a 'global biological catastrophe'

Experts have called for stricter controls on artificial intelligence developments. AP

Scientists are urging world leaders to take urgent action to address the threats being generated by artificial intelligence.

In its latest report, the Nuclear Threat Initiative (NTI) recommends six steps governments should take to prevent AI-enabled biological disasters.

It comes as the UK hosts its first AI Safety Summit this week at Bletchley Park in Milton Keynes, and only days after US President Joe Biden issued an AI executive order “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence”.

The report, The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe, says measures at national and international level are required to reduce the risks associated with new AI-bio technology without hindering scientific advances.

It calls for an international forum to be set up at which AI model guardrails that reduce biological risks could be shared. The report authors also suggest strengthening biosecurity controls.

Co-author Dr Nicole Wheeler, of the University of Birmingham, said it was important that such safeguards be put in place.

“Advances in AI-bio technologies can offer amazing benefits for modern bioscience and bioengineering, such as the rapid development of vaccines and new medicine, finding ways to develop new materials and fight the climate emergency," she said.

"But there is also the possibility that artificial intelligence can be used, either accidentally or deliberately, to cause harm to others on a massive scale. As this technology continues to evolve it is imperative that governments and the scientific community get a firm grasp on it to prevent this from happening.”

To develop the report’s recommendations, the authors interviewed more than 30 people with expertise in AI, biosecurity, bioscience research, biotechnology and the monitoring of emerging technology to evaluate associated risks.

“This is uncharted territory,” said report co-author Sarah Carter.

“AI-bio capabilities are developing rapidly and the rate of change will only increase. To keep up, policymakers will need to consider fundamental new approaches to governance that are more agile and adaptable.”

Dr Wheeler said AI capabilities are set to advance rapidly.

“There is a range of evolving AI tools that could be abused," she said.

"Information about how to manipulate biological systems is now easily accessible to a wide population via large language models, through applications like ChatGPT, while biological design tools could be misused to create new toxins, components of viruses, or other harmful biological materials.

“AI is also automating elements of scientific work and this technology is poised to advance dramatically and scale rapidly in the coming years.”

Jaime Yassif, vice president of NTI Global Biological Policy and Programs, is hoping the AI summit will encourage governments to take action on technological safeguards.

“Accelerating advances in AI-enabled capabilities to engineer living systems are driving dramatic changes in the global biosecurity and pandemic preparedness landscape,” she said.

“I am encouraged by the agenda set forth for the UK AI Safety Summit and the executive order from the Biden administration, both of which include important biosecurity elements.

"Once the dust settles from this week’s discussions, we will need to roll up our sleeves and get to work – so we can put into place effective safeguards to protect this technology from misuse, while harnessing its beneficial applications.”

Updated: November 01, 2023, 1:06 PM