When the Bletchley Declaration was agreed to by 28 nations in the UK this week, the world had, for the first time, a definition of a potentially dangerous “frontier” that artificial intelligence is approaching at breakneck pace.

The two-day summit’s definition of that frontier centres on the overarching risk of intentional misuse of, or outright loss of control over, powerful autonomous systems. The governments involved in the conference, including the US, EU and China, want to put the onus on developers to ensure this never happens.

“Frontier AI developers also have a unique responsibility to support and enable efforts to understand AI capability and risk, including co-operation in AI safety research, and sharing data on how their systems are used,” the declaration said.

During the creation of the declaration, roundtables of officials, industry experts and visionaries exchanged views on the challenges ahead and how to tackle the dangers. Below is a selection of the summaries of the panels that discussed the issues at stake.

“Current models do not present an existential risk and it is unclear whether we could ever develop systems that would substantially evade human oversight and control,” said Ms Teo.

“There is currently insufficient evidence to rule out that future frontier AI, if misaligned, misused or inadequately controlled, could pose an existential threat.

“This question is an active discussion among AI researchers. It may be suitable to take more substantive action in the near term to mitigate this risk.

“This may include greater restrictions upon, or potentially even a pause in, some aspects of frontier AI development, in order to enjoy the existing benefits of AI whilst work continues to understand safety.”

“Current models are not the answer. We need better ones,” Dame Angela said. “We need lots of research on new architectures, which are engineered to be safe by design. We have a lot to learn from safety engineering.

“We need to add non-removable off switches. We need to discuss open and closed release but not too heatedly, and model size matters in that discussion.

“Epistemic modesty is crucial, we have lots of uncertainty.”

“While open access models have some benefits like transparency and enabling research, it is impossible to withdraw an open access model with dangerous capabilities once released,” Mr Yi said.

“This merits particular concern around the potential of open access models to enable AI misuse, though an open discussion is needed to balance the risks and benefits.”

“Frontier AI companies have started to put some safeguards around their models, but this needs to be complemented by government action,” Mr Champagne said.

“There is a need to work together across governments, industry and experts, especially on testing.

“The risks these AI systems pose to the public are significant. It is urgent that we both research and discover ways to ensure current models and future models do not enable bad actors to cause harm.”

“We should invest in basic research, including in governments’ own systems,” Ms Schaake said. “Public procurement is an opportunity to put into practice how we will evaluate and use technology.

“We must not miss out on the opportunity to use AI to solve global problems, including strengthening democracy, overcoming the climate crisis, and addressing societal bias.”