Uncontrolled AI systems could disrupt entire industries, either by displacing jobs or by making critical, unregulated decisions. Getty Images


We are at the make-or-break moment on AI regulation



October 26, 2024

At the recent Global Future Councils meeting, the UAE’s artificial intelligence minister issued a stark warning: without proper safeguards, AI could spiral out of control. “We do not have time to afford to wait for this to get out of hand,” Omar Al Olama said at the World Economic Forum event.

He called for proactive regulation to prevent the repetition of past mistakes, noting that governments are only now addressing the fallout from social media more than two decades after its rise. Mr Al Olama’s comments have thrust the risks of AI into the spotlight. We are approaching a critical juncture, where AI could act beyond human control. That could bring about significant harm to business and society.

This is why I launched the AI Safety Clock in September, to ignite a necessary conversation about the risks and opportunities posed by AI. I wish to raise awareness rather than alarm. Currently, the clock rests at 29 minutes to midnight, signalling that while catastrophe is not imminent, the risks are far from distant.

The implications for businesses touch every aspect of operations, strategy and ethics. As these technologies grow more autonomous and sophisticated, companies must not only consider the efficiency gains and competitive advantages they offer, but also the long-term risks.

Uncontrolled AI systems could disrupt entire industries, either by displacing jobs or by making critical, unregulated decisions that could have an impact on everything from supply chains to consumer trust.

Moreover, businesses that fail to introduce responsible AI governance risk a regulatory backlash, reputational damage or legal liabilities. I believe that organisations should be investing in ethical AI frameworks and collaborating with regulators to ensure that innovation does not come at the cost of social stability.

The risks are complex and wide-ranging, rooted in the possibility that AI systems could one day surpass human intelligence across multiple domains and make decisions independently. This is no longer the realm of science fiction, according to Elon Musk.

“My guess is that we’ll have AI that is smarter than any one human probably around the end of next year,” the business mogul, who runs Tesla, X and SpaceX, said recently. Others, such as OpenAI’s chief executive Sam Altman and Meta’s Yann LeCun, believe that it will take a bit longer, up to a decade.

The most visible and alarming dangers are tied to the possibility of AI gaining control over physical infrastructure. AI systems, integrated into military technology or power grids, could pose a major threat if they make unsupervised decisions about critical resources like nuclear arsenals or energy networks.

Beyond those physical dangers lies the more subtle – yet equally concerning – risk of economic manipulation and mass surveillance. As these technologies become more integrated into financial systems, there is the potential for AI to interfere with global markets or political processes.

The growing use of AI in social media and financial transactions raises the spectre of technology being used to destabilise economies or influence elections – issues that have already surfaced in recent years, such as the Cambridge Analytica scandal during the 2016 US presidential race.

Another major concern is the impact of AI on employment. While automation has been displacing jobs for years, the advent of generative AI that churns out content in seconds could accelerate this trend.

The World Economic Forum’s Future of Jobs Report 2023 predicts that technologies like AI could eliminate 83 million jobs by 2027, while creating 69 million new roles, resulting in a net loss of 14 million jobs. This poses a serious risk to social stability.

The spread of misinformation through deepfakes and AI-generated content is yet another clear and present danger. Already, we are witnessing the growing use of AI to create convincing, yet false, media that can influence public opinion.

Regulation, or the lack thereof, will be important in determining how close we are to the tipping point at which these AI systems are beyond human control. While technology drives us forward, regulation has the potential to slow down the clock.

Today, global AI regulation remains fragmented and inconsistent. The recent veto of an AI safety bill in California highlights the tension between innovation and control – without a unified regulatory framework, especially among major global players like the US, Europe and China, AI development could continue at a dangerous pace. International collaboration is needed to ensure that safety measures keep up.

Governments need to work together to create an international framework for AI governance, similar to the bodies that oversee nuclear or chemical weapons. Regulatory frameworks should be designed to manage the risks without stifling innovation. One important measure would be a kill switch that allows humans to shut down an AI system if it begins to operate in an uncontrolled or dangerous way.

Corporations, too, have a responsibility to manage these risks. Technology companies developing AI systems, like OpenAI and Google, need to prioritise safety and ethical considerations from the outset. This means integrating responsible practices into every stage of the development process. Internal governance structures should also include teams focused on assessing potential risks.

In the broader AI research community, there is no consensus on how close we are to developing uncontrolled AI, with some experts suggesting it could happen in a matter of years and others arguing it may never happen. However, the lack of certainty is itself a reason to act now. Governments, corporations and researchers should collaborate to ensure that as AI grows more powerful, it remains under human control.

The AI Safety Clock serves as a stark reminder that while we may not be on the brink of disaster, the time to act is now.

Michael Wade is the Tonomus professor of strategy and digital at IMD and director of the Tonomus Centre for Digital and AI Transformation

Updated: February 11, 2025, 6:49 AM