This week, senior officials from the world’s two largest economies have been in talks about how to manage a future dominated by <a href="https://www.thenationalnews.com/tags/artificial-intelligence/" target="_blank">artificial intelligence</a>. It should be taken as a positive sign that the US and China are able to get together for discussions in Geneva. But while we wait for governments and corporations to set the rules of engagement, there is much we can do as individuals to prepare ourselves for the inevitable moment when AI is all-pervasive.

Ask yourself how you think AI should be used. What are your own values when it comes to privacy, fairness and the potential environmental impact of the <a href="https://www.thenationalnews.com/opinion/comment/2024/01/25/ai-healthcare-doctors-hospitals-technology/" target="_blank">myriad applications of AI</a>? As people and professionals, we have a small window in which to decide for ourselves the boundaries of AI in our lives.

We have the past decade as a frame of reference, during which the benefits and costs of keeping a digital device in our hands at all times have become clearer. It affects our well-being and mental health, even as it enables us to fulfil our dreams, much like the jinn and magic lamps of the old tales. The moral of those stories remains relevant today: such power must never be taken lightly.

At <a href="https://www.thenationalnews.com/future/technology/2024/05/15/google-io-ai-gemini/" target="_blank">Google</a>’s invitation, I spent a few fascinating days at the company’s Zeitgeist conference, held just outside London. After listening to experts from within its ranks and beyond, it became very clear to me that we are at the start of an era in which AI will become central to our lives. It will affect how we travel, receive health care, keep ourselves safe, plan the communities in which we live and communicate with one another. There is no going back now.

The ethical discussions about how and when AI should be used are decades old. Engineers and experts working in the field have grappled with the issues, but the answers will need to come from a broader cross-section of society if we are to plot a positive path forward. The emergence of generative AI technology has accelerated and expanded the scope of the debate.

According to UK government data, “a third of the public report using chatbots at least once a month in their day-to-day lives. In parallel, self-reported awareness and understanding of AI has increased across society, including among older people, people belonging to lower socio-economic grades and people with lower digital familiarity”. However, the same report suggests that “alongside this increased understanding, there are ongoing anxieties linked to AI. A growing proportion of the public think that AI will have a net negative impact on society, with words such as ‘scary’, ‘worry’ and ‘unsure’ commonly used to express feelings associated with it”.

In the US, Pew Research Center surveys have shown similar findings, with almost all Americans aware of the growing role of AI and a majority concerned about where the technology is taking us. In essence, as understanding grows, so will the need to ensure that public discourse is based on fact rather than fearmongering, whether for or against the use of AI.
For example, while AI’s potential to make misleading and fake information harder to spot has been flagged repeatedly, little has been said about the risk that disinformation about AI technology itself poses to society. An educational vacuum around the realities of AI allows bad actors to exploit the general anxiety surrounding the rise of this technology, and such malign actions risk further polarising and destabilising communities. We therefore need to discuss AI as much as possible, in an open and frank way that allows the public to become better informed about both the pros and the cons.

“Prebunking”, Google’s term for campaigns that help people identify and resist manipulative content before misinformation enters the public domain, will be increasingly necessary. But who should be responsible for it? A mix of government and corporate entities? A better question might be how people can take control of their own destiny and educate themselves about the realities of AI, rather than wait for governments and companies to choose for them.

During the Covid-19 pandemic, for example, while we of course adhered to public safety rules and regulations, we each had to navigate the day-to-day realities of staying safe, balancing physical and mental health, and making decisions such as which vaccine to take and when. It was stressful, to say the least, and we had to do it in a compressed period because of the nature of the crisis.

When it comes to AI, we have the relative luxury of time, but what is at stake is no less serious. AI can enhance the good that we do, but it could also deepen the negative aspects of society. This is not just a matter of regulation; it is an ethical and moral subject as much as a practical one.