The future we were <a href="https://www.thenationalnews.com/weekend/2023/10/20/fake-it-to-make-it-the-ai-threat-to-the-us-election/" target="_blank">warned about</a> during the artificial intelligence boom a year ago has arrived: political campaigns, activists and others are using the technology's latest tools to win over voters, even bringing politicians back from the dead, in a year that will see a <a href="https://www.thenationalnews.com/world/2024/01/11/elections-2024-us-presidential-modi/" target="_blank">record number of elections</a>.

Despite being in jail, Pakistan’s former Prime Minister Imran Khan used AI not only to speak to supporters but also to declare victory for his Tehreek-e-Insaf party after the February 8 general election.

Just days before Indonesia's elections last week, the Golkar political party released a video featuring an AI-generated clone of the dictator Suharto, who died in 2008, to rally voters. Viewed millions of times on social media platforms, the video raised questions about the ethics of portraying dead people in the context of current events.

In India, M Karunanidhi, the five-time chief minister of Tamil Nadu state who died in 2018, was resurrected with the help of AI to appear in a video endorsing his son, MK Stalin.

Recently in the US, a robocall impersonating President Joe Biden prompted the Federal Communications Commission to ban the use of AI-generated voices. The commission said it was “making voice cloning technology used in common robocall scams targeting consumers illegal … giving state attorneys general across the country new tools to go after bad actors behind the nefarious robocalls”.

The tidal wave of political messaging built on realistic computer-generated videos and audio, known as deepfakes, comes as the rapid development of AI technology has dramatically improved the quality of such content.
One <a href="https://www.thenationalnews.com/business/technology/2023/08/02/humans-cant-detect-deepfake-speech-even-with-training/" target="_blank">recent study</a> found that humans failed to accurately detect more than a quarter of deepfake speech samples.

At the same time, these advances have exposed the shortcomings of regulatory efforts around the world on several levels.

First, despite initiatives to establish global standards, implementation of international AI regulation remains elusive. There is also the seemingly timeless problem of regulators playing catch-up, with a deceptive AI-generated video or audio clip receiving millions of views before any action can be taken.

Further complicating regulation are the various <a href="https://www.thenationalnews.com/business/technology/2024/01/26/ai-javier-milei-speech-davos-wef/" target="_blank">implementations of AI video enhancement</a> tools that facilitate quick translation of speeches into different languages. These have prompted some to warn against one-size-fits-all bans that might block the use of AI to broaden the audience for educational videos.

Timothy Kneeland, a political science and history professor at Nazareth College in New York state, said there was no silver-bullet solution on the international stage, and that the reactive rather than proactive response from various governments is not necessarily a surprise.

“Think about radio … Commercial radio began in the US in the 1920s and doesn't get regulated until the mid-1920s,” he said, pointing out that regulators have not been as slow with AI as it might seem.

Mr Kneeland also said that, although quaint, the best potential safeguard against misleading AI content could be public awareness campaigns. “You have to train the public to be aware and conscious,” he said.
While the jury is still out on how much impact deepfake political content will have on the democratic process, Mr Kneeland said voters already take political messaging with a grain of salt.

Given the highly polarised political environment in many parts of the world, with persuadable swing voters few and far between, “I don't know that people necessarily want their minds changed when it comes to politics right now”, he said.

Some search engines and social media platforms are attempting to stay one step ahead while also embracing the possibilities offered by AI-based media tools.

In November, <a href="https://www.thenationalnews.com/business/technology/2023/11/08/facebook-parent-meta-bans-political-campaigns-from-using-generative-ai-advertising-tools/" target="_blank">Meta banned</a> political campaigns from using the company's generative AI advertising tools that are available to other private sector organisations.

More recently, the social media giant said it would attach a disclaimer, “imagined with AI”, to content created with these tools, in addition to <a href="https://www.thenationalnews.com/business/technology/2024/02/16/major-tech-companies-sign-pact-to-fight-ai-election-interference/" target="_blank">signing an agreement</a> with industry partners such as Adobe, Microsoft, TikTok and OpenAI to help detect AI-generated content, though it remains to be seen how effective the accord will be.

In September, Google's parent company, Alphabet, announced it would require the disclosure of AI-generated political advertising content. “All verified election advertisers in regions where verification is required must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events,” Google said in a post in its advertising policy section.
Despite the efforts of policymakers and private companies to blunt the impact of deceptive AI content, the sheer speed of technological development is proving to be the biggest challenge.

While the earliest artificial intelligence advances revolved around text, generative AI that can create images and video quickly followed and has grown in sophistication by leaps and bounds.

Most recently, <a href="https://www.thenationalnews.com/business/technology/2024/02/16/what-is-sora-openais-new-tool-that-creates-video-from-text/" target="_blank">OpenAI's announcement of Sora</a>, an AI model that allows users to create realistic videos from just a few lines of text, sent shock waves through the tech industry.

OpenAI said the AI model would not yet be released to the public. Instead, it would be looked at by cybersecurity experts “to assess critical areas for harms or risk”, as well as by “visual artists, designers and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals”.

“I don't think we were as cautious when social media first came out,” Mr Kneeland said of the proceed-with-caution mentality in AI. “Sometimes it's the more subtle ways these new technologies change the human condition,” he said, adding it was still unclear how much impact deepfake content might have.