G42 continues to collaborate with the world's leading technology companies for the development of responsible artificial intelligence. Leslie Pableo for The National

Abu Dhabi’s G42 launches AI safety framework amid global concern over regulation risks



UAE artificial intelligence company G42 has launched its Frontier AI Safety Framework, as it seeks to ensure the responsible development and deployment of the booming technology amid perceived risks and safety concerns.

The framework aims to adapt to emerging AI trends and will help the Abu Dhabi-based company structure an AI strategy backed by "rigorous" standards for security, autonomy and ethical consideration across its operational domains, G42 said in the publication released on Thursday.

It "sets clear protocols for risk assessment, governance and external oversight to ensure the safe and responsible development of advanced AI models", G42 said.

It also makes G42 one of the first AI companies in the Middle East to introduce a comprehensive AI safety framework, reinforcing its role as the UAE's leading AI firm, with the potential to serve as a model for other entities to follow.

"The framework is tailored specifically to address the unique challenges and risks associated with high-capability AI, building on industry best practices and aligning with established responsible AI principles," G42 said in the publication.

It "emphasises proactive risk identification and mitigation, centring on capability monitoring, robust governance, and multi-layered safeguards to ensure powerful AI models are both innovative and safe".

In addition, the systematic approach to early threat detection and risk management would support G42 in "unlocking the benefits of frontier AI safely and ethically", it added.

The framework also fulfils the pledge G42 made under the Frontier AI Safety Commitments at last year's AI Seoul Summit, as well as the Bletchley Declaration, a call for international co-operation to manage AI risks forged by those attending the 2023 AI Safety Summit in the UK.

“With such power comes responsibility," Peng Xiao, group chief executive of G42, said in a separate statement. "This framework reflects our commitment to AI safety, ensuring that innovation moves forward with the right safeguards in place."

AI has long been used in several segments of society but rose to prominence with the advent of generative AI in 2023. However, the lack of a formal governing body has given rise to a number of challenges, including the technology's potential to spread misinformation and be weaponised.

The issues with AI have become so substantial that there has even been a call to "pause giant AI experiments", because "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research", according to the Future of Life Institute, a California-based non-profit group.

Geoffrey Hinton, considered the "godfather of AI" and a pioneer of deep learning, also spoke of the dangers some of his creations may pose when he left Google in 2023, ranging from the elimination of jobs to the threat of AI becoming sentient as it learns on its own by analysing huge amounts of data.

Companies and governments have therefore been moving to rein in the technology. G42's framework, in particular, is committed to developing AI systems that "align with its principles that prioritise fairness, reliability, safety, privacy, security and inclusiveness to reflect and uphold societal values".

As detailed in the framework, the company is taking several steps to achieve this goal, starting with internal risk analysis, carried out with input from external AI safety experts, to identify potentially dangerous capabilities across several domains, including biological risks, cyber security and autonomous operations in specialised fields.

This will lead to mitigation measures for its externally deployed products, aimed at protecting systems against misuse as AI models reach higher levels of capability and risk.

G42 has also appointed a dedicated frontier AI governance board, led by its chief responsible AI officer Andrew Jackson, to oversee the endeavour. Transparency reports, both within G42 and public, and external audits will also be enforced.

The framework will be implemented in phases: foundational security protocols will be established in the first six months, operational rollout will follow within 12 months, and beyond that will come "continuous improvement and expansion".

Updated: February 06, 2025, 12:39 PM