Executive orders, international agreements and resolutions seeking to <a href="https://www.thenationalnews.com/future/technology/2024/11/27/open-source-ai-development-uae-abu-dhabi-tii-artificial-intelligence/" target="_blank">regulate artificial intelligence</a> have made significant strides this year, but regulatory gaps remain and “AI safe havens” could undermine global progress, a Washington research conference has warned.

“AI governance is not a challenge any nation can tackle alone,” Shigeo Yamada, Japan's ambassador to the US, said during his speech at the Centre for Strategic and International Studies' International AI Policy Outlook conference. “Regulatory gaps in one country could allow unregulated AI development in another, creating what we call AI safe havens.

“We must acknowledge that so far we have not been able to fully engage countries with different positions, including authoritarian states, in these multilateral efforts.”

Mr Yamada did not specify any countries during his speech, but said such safe havens could allow negative AI scenarios to come to fruition, blunting the technology's economic and societal <a href="https://www.thenationalnews.com/future/technology/2024/09/17/microsoft-to-open-its-first-middle-east-ai-for-good-lab-in-abu-dhabi/" target="_blank">benefits</a>.

“Risks include national security vulnerabilities, cybersecurity threats, privacy violations, the potential misuse of intellectual property,” he said at the Wadhwani AI Centre conference.

Japan has sought to take a lead in enhancing <a href="https://www.thenationalnews.com/future/technology/2024/11/22/energy-secretary-steven-chu-ai-chris-wright-trump/" target="_blank">international AI collaboration</a> as the technology has rapidly developed.

During the <a href="https://www.thenationalnews.com/world/2023/05/18/what-is-the-g7-and-which-country-is-hosting-the-g7-summit-2023/" target="_blank">49th G7 Summit in Japan</a>, the Hiroshima AI Process initiative was announced with hopes of providing a comprehensive framework to responsibly pursue advancements while curtailing potential problems.

That initiative eventually led to the creation of the <a href="https://www.thenationalnews.com/future/technology/2024/05/07/uae-selected-for-hiroshima-ai-process-friends-group/" target="_blank">Hiroshima AI Process Friends Group</a>, which now has 54 member countries.

According to a code of conduct published by the group, organisations in the field of AI are encouraged to take various measures throughout the course of AI development.

“In designing and implementing testing measures, organisations commit to devote attention to the following risks as appropriate,” the code of conduct reads. “Chemical, biological, radiological and nuclear risks, such as the ways in which advanced AI systems can lower barriers to entry, including for non-state actors, for weapons development, design, acquisition or use … Risks from [AI] models of making copies of themselves or 'self-replicating' or training other models.”

The code of conduct, according to the group, is updated periodically through meetings and consultations with member countries and other organisations.

Jennifer Bachus, principal deputy assistant secretary for the Bureau of Cyberspace and Digital Policy at the US State Department, agreed with Mr Yamada.
“Technology diplomacy is increasingly foundational to everything we do in the world,” she said, while also acknowledging the need to avoid becoming numb to the growing chorus of warnings about AI falling into nefarious hands.

“Saying that there's no risk to AI also dumbs down the situation for developing countries.

“They absolutely think there's a risk, and they want to know how to create a situation where they can also have the economic benefits without having national security risk.”

Most speakers at the event also addressed the growing energy demand of the data centres required to power the burgeoning technology.

Mr Yamada said he hoped photon-electron fusion, which replaces electricity-based processing with energy-efficient light-based processing, would help to alleviate the AI energy crunch.

“We need to improve the energy efficiency of AI itself,” he said. “Now there's an effort to expand light-based processing to include computing chips and peripheral components.”

Sara Cohen, Canada's deputy head of mission, also spoke about the energy concerns.

“AI has a voracious appetite for energy,” she said. “From Canada's perspective it is imperative that we ensure the mainstreaming of AI in governments and workforces does not undermine our progress towards shared climate goals.”

In recent years, while acknowledging <a href="https://www.thenationalnews.com/future/technology/2024/11/22/energy-secretary-steven-chu-ai-chris-wright-trump/" target="_blank">AI's increased energy consumption</a>, some researchers and proponents have also pointed to efficiencies the technology could create that might help the climate, although it remains to be seen whether that potential becomes reality.

The return of president-elect Donald Trump could also affect how the world co-operates on AI development and regulation. Mr Trump has promised to repeal President Joe <a href="https://www.thenationalnews.com/world/us-news/2023/10/30/biden-ai-executive-order/" target="_blank">Biden's executive order on AI development</a>. The Republican Party's 2024 convention platform, largely influenced by the Trump campaign, described the executive order as “dangerous”.

Several speakers at the CSIS event said the future of global AI co-operation and regulation would become clearer when France hosts an AI action summit in February, shortly after Mr Trump takes office.

“I think it will be very interesting as to how that goes,” Ms Bachus said. “I think hopefully by that time we'll have a good sense of where the Trump administration will be going on AI and what its imprimatur is. How they're doing it, I can't predict because it's a new administration. It might work out really well or it might be really challenging.”

The private sector also took part in the conference, with Aalok Mehta, Google’s responsible AI policy director, joining a panel discussion about AI codes of conduct with Wendy Collins, NTT Data’s chief AI officer.