Giles Crouch, a digital anthropologist, recently described <a href="https://www.thenationalnews.com/tags/technology/" target="_blank">the internet</a> as a “chaotic, unruly mess”. Indeed, those who want to spend a day – or even a few minutes – online without being bombarded by unsolicited adverts, rage-inducing social media <a href="https://www.thenationalnews.com/uae/2023/05/25/dubai-resident-who-climbed-everest-reveals-how-she-had-to-conquer-online-trolls/" target="_blank">trolls</a> or an <a href="https://www.thenationalnews.com/opinion/comment/mobile-phones-are-killing-our-attention-spans-and-ruining-public-discourse-1.872749" target="_blank">overload</a> of extraneous information will be left frustrated. But the web’s disorder, Mr Crouch writes, reflects humanity: “Humans have always been messy and haphazard in the sociocultural systems that we’ve built.” Things have become even messier with the rollout last week of the latest version of <a href="https://www.thenationalnews.com/future/technology/2024/08/15/grok-2s-new-ai-image-generator-sets-off-alarm-bells-after-deepfake-free-for-all/" target="_blank">Grok</a>, a generative AI chatbot launched by <a href="https://www.thenationalnews.com/tags/elon-musk/" target="_blank">Elon Musk</a>’s social media company X. Grok takes its name from a term coined in Robert A Heinlein’s science fiction, meaning to understand something deeply. The chatbot’s debut, however, has been less about spreading wisdom and more about generating controversy. The past week has seen unsavoury AI-crafted images, generated from users’ prompts, proliferate across X, one of the world’s most popular social media platforms.
These include “<a href="https://www.thenationalnews.com/future/2024/02/21/deepfakes-ai-and-politics-can-we-trust-what-we-see-and-hear/" target="_blank">deepfakes</a>” of politicians and celebrities, and even images of children’s cartoon characters – in some cases alongside Mr Musk himself – carrying out high-school shootings. OpenAI’s <a href="https://www.thenationalnews.com/future/technology/2024/05/14/openai-chatgpt-4o/" target="_blank">ChatGPT</a>, another large language model regarded as a key competitor to Mr Musk’s chatbot, introduced watermarks on its AI-generated images to help distinguish fantasy from reality. Grok, however, largely lacks such guardrails. It is a strange turn of affairs, given Mr Musk’s previous warnings about the purported dangers of AI. On March 22 last year, the tech mogul added his name to an open letter calling on all developers to immediately pause the training of powerful artificial intelligence, warning that “AI systems with human-competitive intelligence can pose profound risks to society and humanity”. Grok is only one product in a marketplace full of powerful AI tools that pose a growing risk of misuse. The technology is moving fast, producing ever more unpredictable results, while efforts at regulation and agreed standards lag behind. In July, researchers in the US developed a new benchmark to fact-check AI “hallucinations”, the phenomenon of large language models answering user requests with false information. Given that such hallucinations can have serious consequences – a Stanford University study in January found that general-purpose chatbots hallucinated between 58 per cent and 82 per cent of the time on legal queries – it is more important than ever to harness AI’s potential responsibly. Yet the companies behind these models face little accountability for the misinformation their hallucinations produce. A greater sense of corporate and technological responsibility must come quickly.
The widespread accessibility of increasingly sophisticated AI models is flooding the internet with misleading and potentially defamatory statements and images. Grok is the latest iteration of this phenomenally powerful technology, but it won’t be the last; OpenAI’s GPT-5 is expected to arrive later this year or in early 2025, and promises major advancements. AI holds enormous promise, something the UAE recognised as far back as 2017 when it established the world’s first ministerial portfolio for artificial intelligence. Used well, it can be a force for good; its potential to revolutionise everything from transport and education to healthcare and science is unmatched. But until we collectively learn how to develop, introduce and use such technology in constructive ways, the chaos of our online world will not only continue, it will increasingly seep into the real world, too.