Napoleon was not known for being a good loser. But in 1809, when he was defeated (several times, in fact) by a machine in a game of chess, he reacted with uncharacteristic amusement.

Contrast that with the alarm expressed by tech experts in recent weeks over the near certainty with which they estimate we have entered the age of artificial intelligence, and you may wonder how the French emperor stayed so calm. For one thing, although Napoleon couldn’t be sure of it at the time, the machine that beat him, the “Mechanical Turk”, was little more than a confidence trick – a wooden box with fake gears that concealed a human chess player pulling levers in a hidden compartment. For another, there was no widespread fear in early 19th-century Europe that more Mechanical Turks were out there, poised to render humans unemployed or even entirely servile.

We are still at a moment in history where there are plenty of things only humans can do, or at least do at a lower cost. And even the rise of AI, so far, has seemingly created more jobs than it has destroyed. So-called “human-intelligence tasks” – from moderating social media posts to identifying objects in blurry photographs – are big business for multinational tech firms looking to use them to train algorithms. Hundreds of thousands of people around the world work part-time to carry them out (many of them employed through an Amazon-owned service called, fittingly, Mechanical Turk).

But many experts fear that the scope of human-only work is fast diminishing. Companies like OpenAI (the firm behind the renowned chatbot ChatGPT) and Google’s DeepMind have brought us ever nearer to a tipping point where AI will permanently define humanity’s future. Ensuring that our development of AI does not ultimately come at our expense is a challenge that AI ethicists refer to as the “alignment problem”: how can we align AI’s goals with what’s best for us, and harness its power for human flourishing?
Broadly speaking, solving alignment will require much more global co-operation. Gary Marcus, an emeritus professor at New York University, and Anka Reuel, a doctoral student at Stanford, have called for the creation of an international agency for artificial intelligence. Given wildly disparate approaches to the technology as things stand, getting everyone on the same page makes sense.

But in a narrower sense, individual countries can establish their own public infrastructure devoted to putting AI near the top of the national agenda – not only to regulate it, but also to bring its most beneficial qualities to life. Too few are doing this. However, an excellent template can be found in the UAE, which rolled out a national AI strategy back in 2017 and today has both a ministerial portfolio and a public university devoted to understanding AI (two world firsts).

On Saturday, Minister of Industry and Advanced Technology Dr Sultan Al Jaber said during a visit to Mohamed bin Zayed University of Artificial Intelligence, where he is chairman, that the country’s artificial intelligence research and its adoption across industries will be critical not only in achieving the country’s economic diversification goals, but also in helping it combat climate change.

Partnering with the private sector is an important part of the UAE’s AI strategy. Dr Al Jaber was joined on his visit by Peng Xiao, chief executive of G42, an AI company, and was updated on a joint project between MBZUAI and computer company IBM that will use data engines to help Abu Dhabi emirate hone its climate policies. But the defining quality of the UAE’s vision as a global hub for AI research and development is that it views progress in this area fundamentally through the lens of the public good.
Building a coherent AI strategy and designing the national infrastructure necessary to achieve it is a critical first step in solving the alignment problem, and in ensuring that however astonishing the technology becomes, the levers remain pulled by humans.