<a href="https://www.thenationalnews.com/tags/religion" target="_blank">Faith leaders</a> and civil society organisations have united to launch a platform giving the public a greater say in how <a href="https://www.thenationalnews.com/tags/artificial-intelligence" target="_blank">Artificial Intelligence</a> regulation is shaped. <a href="https://www.thenationalnews.com/tags/abu-dhabi" target="_blank">Abu Dhabi</a> Forum for Peace, which launched the commission supported by <a href="https://www.thenationalnews.com/tags/uk/" target="_blank">UK</a> consultancy Good Faith Partnership, is carving out a worldwide remit to ensure big tech companies do not steer the conversation towards existential risk, while “racing ahead” with the <a href="https://www.thenationalnews.com/tags/technology/" target="_blank">technology</a> without constraints. Concerns around data privacy, disinformation and the potential exclusion of minority communities are just some of the issues posed by AI. But they tend to be overshadowed by conversations over the technology’s long-term existential risk. The new commission into AI, Faith and Civil Society, launched in <a href="https://www.thenationalnews.com/tags/uk" target="_blank">London, </a>will address the gaps in communication between people and policymakers by arranging meetings with faith leaders and minority groups. “Civil society isn't being fully heard within the sort of AI regulatory discussions that are taking place. And so the aim of the commission is to bring those voices and faith voices to the table,” Rabbi Harris Bor, a member of the commission, told <i>The National.</i> The commission’s findings over the course of the next year will be presented to governments in the UK and <a href="https://www.thenationalnews.com/tags/uae/" target="_blank">UAE</a> with the capacity to influence technology companies. “We are building bridges of knowledge between all fields of thoughts and policymakers, not only in the UAE, but to the two billion Muslims around the world,” said Sheikh Al Mahfoudh bin Bayyah, secretary general of the Abu Dhabi Forum for Peace. By working alongside UAE universities, the commission will also seek to raise awareness of AI among Muslim communities in the Global South, so they can use it to “serve their communities”. “The bridging of knowledge is very important, especially when [many] people, particularly in <a href="https://www.thenationalnews.com/tags/islam/" target="_blank">Islam</a>, reject scientific findings,” he told <i>The National</i>. He described the commission as a “caravan of faith” that would hold meetings with representatives of <a href="https://www.thenationalnews.com/tags/africa" target="_blank">African</a> governments in Mauritania in January, and others in the Vatican and Silicon Valley next year. Such collaborations could also help mitigate the risks of AI being co-opted by extremists, he added. The UK addressed AI's security threats at its <a href="https://www.thenationalnews.com/business/technology/2023/10/31/ai-uk-how-rishi-sunaks-safety-summit-is-powering-technology/" target="_blank">landmark Safety in AI Summit in November</a>. “We need to change the narrative,” said technologist Jeremy Peckham, who spoke at the commission’s launch. Members of the commission welcomed the UK summit’s commitment to ensuring a “human-centred” approach to the technology, and the creation of the state-backed AI Safety Institute. But they feared that voices from civil society and minority communities risked being excluded. 
In an October letter to the government, the commission highlighted that there was “no civil society representation beyond several academic institutes” at the summit.

“AI at present is incredibly technocratic. There are not voices coming in that should be heard. We know that the global south is incredibly underrepresented, and so are many aspects of society,” said Kate Devlin of Humanists UK, a British charity.

Ms Devlin pointed to the impact AI bias could have on jobs, and to “on the ground” issues such as the outsourcing of content moderation to cheap labour in developing countries and the sustainability of the technology. “The mining [is done] in countries with abysmal human rights protection,” she said.

Maria Harb, from the NGO Stop the Traffik, described how the organisation uses AI to combat online disinformation and save women from human trafficking and illegal organ donations, but pointed to the implications for data protection. “With these prevention programmes targeting people with social media adverts, there are implications that we must acknowledge, including data privacy concerns,” she said.

Acknowledging this, Saqib Bhatti, a junior minister for Tech and the Digital Economy, stressed that the government would take a “collaborative” approach to AI. “Our vision of the future must hold our people at its heart and be shaped by the values, the collective values we share,” he said. “We're not just competing in a race, but we're sharing in a collaborative space,” he added.