Less than a month after Russia's invasion of Ukraine, a video surfaced on social media that purportedly showed Ukrainian President Volodymyr Zelenskyy urging his soldiers to lay down their arms and abandon the fight against Russia. While the lip-sync appeared somewhat convincing, discrepancies in Mr Zelenskyy's accent, facial movements and voice raised suspicions about its authenticity. On closer examination, a simple screenshot revealed that <a href="https://www.thenationalnews.com/opinion/comment/deepfake-videos-will-make-it-difficult-to-believe-our-eyes-and-ears-1.888768" target="_blank">the video was indeed a fake – a deepfake</a>. It was the first known instance of a deepfake video being used in warfare.

Deepfakes are synthetic media – audio, images or videos – that have been manipulated to falsely portray individuals saying or doing things they never actually did.

On June 5, <a href="https://www.thenationalnews.com/tags/vladimir-putin/" target="_blank">Russian President Vladimir Putin</a> declared martial law and military mobilisation in the regions bordering Ukraine, announcing the measures through various Russian radio and television networks. But it was soon discovered that Mr Putin's speech was also a fabrication – a deepfake broadcast through hacked TV and radio channels. It was so convincing that Russian officials in the Belgorod region issued warnings cautioning the population against falling prey to a fabrication intended to “sow panic among peaceful Belgorod residents”.

The <a href="https://www.thenationalnews.com/opinion/comment/deepfake-technology-could-create-huge-potential-for-social-unrest-and-even-trigger-wars-1.755842" target="_blank">rise of deepfakes</a> vividly illustrates the exponential growth of <a href="https://www.thenationalnews.com/tags/artificial-intelligence/" target="_blank">artificial intelligence</a> and the challenges it poses to both national and international governance.

Deepfake technology has been fuelled by the invention in 2014 of generative adversarial networks (GANs), a machine-learning framework that creates new content by pitting two neural networks against each other: a “generator” produces candidate content while a “discriminator” judges whether it looks real, and each improves in response to the other (a minimal code sketch of this adversarial loop appears below). By 2018, GANs had advanced to the point where they could generate, for instance, highly realistic images of people who have never existed.

In the autumn of 2017, the first deepfake videos were uploaded to Reddit. These early deepfakes merged the faces of Hollywood actresses onto the bodies of performers in adult videos. In less than two years, almost 15,000 deepfake videos had been identified online; an alarming 96 per cent of them were adult content, and 100 per cent of the victims depicted were women. Disturbingly, it was reported earlier this year that paedophiles are now employing deepfakes to create explicit images of child abuse. One paedophile in Quebec, Canada, was recently convicted after police discovered 545,000 pictures and videos of children on his computer, 86,000 of them deepfakes generated from real children's images collected from social media, particularly Facebook.

Deepfake technology has also <a href="https://www.thenationalnews.com/weekend/2022/06/24/how-deepfakes-are-blurring-the-lines-in-art-and-film/" target="_blank">demonstrated its potential</a> for other nefarious purposes beyond exploiting individuals.
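For readers who want to see the adversarial contest described above in concrete form, here is a minimal sketch in Python using the PyTorch library. It is a hypothetical toy example – the network sizes, learning rates and data distribution are all illustrative assumptions – in which the generator learns to mimic numbers drawn from a bell curve while the discriminator learns to tell real samples from generated ones.

```python
# Minimal, illustrative GAN sketch (PyTorch). A toy example under assumed
# settings, not any production deepfake system: the "content" is simply
# numbers drawn from a Gaussian, which keeps the adversarial idea visible.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 1, 64

# Generator: turns random noise into candidate "content".
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: label real samples 1, generated samples 0.
    real = 4.0 + 1.25 * torch.randn(batch, data_dim)           # "real" data
    fake = generator(torch.randn(batch, latent_dim)).detach()  # freeze G
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1))
              + loss_fn(discriminator(fake), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator say "real" (1).
    fake = generator(torch.randn(batch, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Real deepfake systems apply this same two-network contest at vastly larger scale, to images, audio and video rather than single numbers.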
Deepfake techniques can be employed to alter medical scans, creating fake tumours or removing real ones, or to manipulate satellite images to fabricate entire geographical features – so-called deepfake geography. The implications are profound, posing risks not only to personal privacy but also to sectors including healthcare and national security.

On November 30, 2022, OpenAI, an American artificial intelligence laboratory, <a href="https://www.thenationalnews.com/arts-culture/pop-culture/2023/05/11/in-conversation-with-google-bard-chatgpt-and-bing-ai/" target="_blank">released ChatGPT, an AI chatbot</a>. Within five days, ChatGPT had garnered one million users; it took Netflix three and a half years to reach the same milestone. After just two months, the application boasted 100 million users, making it the fastest-growing consumer application in history – until it was overtaken by Meta’s app Threads this month.

While the first iteration of ChatGPT, built on GPT-3.5, achieved a mediocre score (10th percentile) on the US Uniform Bar Exam, the GPT-4 version released on March 14, 2023 outperformed 90 per cent of aspiring lawyers attempting to pass the bar.

In a recent experiment, MIT associate professor and GCSP polymath fellow Kevin Esvelt and his students used freely accessible large language models such as GPT-4 to devise a detailed roadmap for obtaining exceptionally dangerous viruses. In just one hour, the chatbot suggested four potential pandemic pathogens, provided instructions for generating them from synthetic DNA and even recommended DNA-synthesis companies unlikely to screen orders. Their conclusion was alarming: easy access to AI chatbots will cause “the number of individuals capable of killing tens of millions to dramatically increase”.

The <a href="https://www.thenationalnews.com/opinion/comment/2023/06/22/three-breakthroughs-you-may-have-missed-amid-the-chatgpt-mania/" target="_blank">growing accessibility of generative AI</a> presents not only opportunities but also immense risks, including targeted manipulation at the individual level. A recent study found that AI-generated responses to patient queries outperformed physicians' responses in both quality and empathy. Empathy – the intrinsically human ability to understand another person's feelings from their perspective rather than our own – is now being surpassed by chatbots. This should serve as a wake-up call for governments, as it opens the door to large-scale subversion campaigns and gives rise to a new form of warfare – cognitive warfare – in which public opinion is weaponised to influence policy and destabilise public institutions. Generative AI and tools such as ChatGPT could soon be considered weapons of mass deception.

These examples underscore <a href="https://www.thenationalnews.com/business/technology/2023/07/13/google-launches-generative-ai-tool-bard-in-arabic/" target="_blank">the exponential pace at which AI is advancing</a>. The challenge is that humans and organisations tend to think in linear fashion about future developments. Faced with exponential growth, such as the rapid spread of the Covid-19 pandemic, many governments have demonstrated slow and ill-suited responses. In an era defined by emerging exponential technologies, global and national governance must adapt to become more reactive and anticipatory.
Strategic foresight – the ability to envision and act upon potential futures – should become standard procedure for any organisation engaged in national or global governance. This necessitates diverse skills and profiles among those working within these institutions. Effectively addressing the consequences of exponential technological transformation also requires the ability to identify weak signals, which argues for promoting polymaths – individuals whose knowledge spans many subjects – to break free from silo thinking and groupthink.

On July 18, the UN Security Council will convene its first-ever meeting to discuss the potential threats posed by artificial intelligence to international peace and security. <a href="https://www.thenationalnews.com/world/europe/2023/07/06/antonio-guterres-to-set-up-ai-advisory-group/" target="_blank">The UN already addresses certain aspects of this issue</a> through, for instance, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), which examines the potential impact of autonomous weapons on international humanitarian law and possible regulations or bans. But autonomous weapons also have profound implications for strategic stability, an area hardly discussed by the GGE. AI is a dual-use technology even more transformative than electricity, and it therefore has profound international security implications.

The UN Secretary-General recently expressed support for establishing a UN agency on AI, similar to the International Atomic Energy Agency. Such an agency, focused on knowledge and endowed with regulatory powers, could enhance co-ordination among burgeoning AI initiatives worldwide and promote global governance of AI. To succeed, however, the UN must transcend its traditional intergovernmental DNA and bring the scientific community, the private sector (the primary source of AI innovation) and civil society into new governance frameworks, including public-private partnerships. As was noted at the recent UN AI for Good Summit, Geneva – well endowed with a governance ecosystem conducive to such initiatives – presents an ideal venue for realising this vision.

The deepfake and generative AI quandary is a sobering reminder of the immense power of artificial intelligence and the multifaceted security challenges it poses. In pursuing responsible AI governance, we must prioritise protection against malevolent exploitation while nurturing an environment that encourages ethical innovation and societal progress. Embracing strategic foresight, unshackling ourselves from linear thinking, and fostering diverse collaboration and security by design are crucial steps towards collectively shaping an AI-powered future that upholds ethical principles, preserves democratic values and secures the well-being of humanity. By forging this path, we can pave the way for a more equitable, secure and prosperous society in the age of AI.