How YouTube Middle East is combating misinformation with generative AI

Company removed 117,065 videos globally for violating misinformation policies in the first quarter of 2024, regional director Tarek Amin tells The National

Tarek Amin says YouTube is using generative AI for content moderation through a combination of human oversight and machine learning. Photo: YouTube

Amid a surge of misinformation on social media sites, YouTube, the world’s largest video-sharing platform, said it is ramping up its efforts to tackle the issue using generative artificial intelligence.

The company is targeting misinformation that spreads through various means, including deepfake videos that use AI manipulation techniques such as face swapping and realistic image generation to create misleading content.

The problem has been exacerbated not only by the rapid advancement of technology but also by recent regional events, including Israel’s continuing assault on Gaza.

YouTube said it is also addressing other forms of misinformation, such as misleading thumbnails, deceptive titles, selective editing, false claims and repurposed content from unrelated events.

Tarek Amin, YouTube's regional director for the Middle East, Africa and Turkey, said cases of misinformation on the platform are rising. The company removed 117,065 videos globally for violating its misinformation policies in the first quarter of 2024, about 67 per cent more than in the same quarter last year.

The company defines misinformation as “deceptive content with serious risk of egregious harm”, Mr Amin explained.

“During breaking news events and crises, what happens in the world also happens on YouTube … that’s why stopping the spread of misinformation is one of our deepest commitments in the region,” he told The National.

Israel's war on Gaza, for example, has prompted many instances of breaking news-related misinformation spreading through platforms such as TikTok, Elon Musk-owned X, Instagram, WhatsApp and YouTube.

In a deepfake video that was circulated on X on October 28, supermodel Bella Hadid appeared to apologise for her remarks supporting Palestinian rights and express support for Israel.

But the original footage was from a 2016 speech that Ms Hadid gave about her battle with Lyme disease. The deepfake altered the visuals and audio to make it seem like she was criticising Palestine, according to AFP Fact Check.

In the same month, AI-altered footage from the video game Arma 3 was uploaded to various platforms and falsely labelled as real footage of the conflict. This misled viewers and aggravated tensions and unrest.

Also, last month, a video circulating on social media, and shared more than 4,000 times on X, falsely implied that actors in Rafah were preparing to stage scenes of injury in Gaza.

This misinformation campaign included AI-generated, repurposed content taken from behind-the-scenes footage of a Palestinian drama series filmed in the occupied West Bank. Although it was taken down by platforms, the manipulated content, which was widely shared and viewed millions of times, likely added to the heightened anger and tensions around the war.

Before that, during the Covid-19 pandemic, there were videos on various platforms wrongly asserting that drinking bleach could cure the virus. They were swiftly removed by YouTube to prevent misinformation and protect users.

Misinformation can cause real-world harm, Mr Amin said, whether by promoting harmful remedies or treatments, through certain types of technically manipulated content, or through content that interferes with democratic processes.

Old war, new weapons

While the spread of misinformation through social media platforms goes back decades, it has been scaled up rapidly with the use of automated systems and generative AI technologies.

Industry experts and authorities argue that platforms have failed to adequately address misinformation, leading to rumours, eroded trust and real-world harm.

In October, Thierry Breton, the European Commissioner for the EU internal market, sent a letter to Sundar Pichai, chief executive of Alphabet, urging him to prevent the spread of misinformation about Israel and Gaza on YouTube.

This came after similar letters were sent to X owner Elon Musk, TikTok chief executive Shou Zi Chew and Meta chief executive Mark Zuckerberg, giving them a 24-hour deadline to halt the spread of misinformation.

In her reply to the European Commission, X’s chief executive Linda Yaccarino said the platform had removed “tens of thousands of pieces of content” to minimise misinformation related to the war.

Platforms have also been condemned by social media users and human rights activists for suppressing regional voices.

In 2021, Instagram and Facebook faced a backlash for suppressing Palestinian content during protests in Sheikh Jarrah, East Jerusalem, when posts and accounts highlighting the situation were inexplicably removed or shadow banned, New York-based non-profit organisation Human Rights Watch revealed in a report.

During the same period, TikTok users reported that their videos and hashtags supporting Palestine were being taken down without clear reasons, raising concerns over biased content moderation, New York-based digital rights organisation Access Now found.

YouTube told The National it is impartially enforcing its guidelines to prevent the spread of misinformation related to the Israel-Gaza conflict while ensuring that legitimate opinions and viewpoints are not stifled.

The company does not remove content simply for discussing a specific topic or for sharing a viewpoint, a distinction that is particularly sensitive in cases such as the Israel-Gaza conflict, Mr Amin said.

“We take our responsibility to surface authoritative news sources seriously, especially during war and conflict … the nature of crises means that there is content that is violent or graphic, which would violate our policies. However, we allow content that has educational, documentary and scientific value … like news content,” he explained.

There are also guidelines for news content, he added, such as blurring graphic injuries and, when necessary, age-gating content that is not suitable for all viewers.

AI defence

Mr Amin said YouTube is using a slew of generative AI tools for content moderation to minimise the spread of misinformation. This is done through a combination of human judgment and machine learning, with more than 20,000 reviewers operating around the world.

In its back-end systems, AI classifiers – digital tools trained to categorise multimedia data into predefined classes or labels – help detect unacceptable content, and human reviewers confirm whether it has crossed policy lines, such as by promoting misinformation related to violence, hate speech or medically inaccurate claims.
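
A minimal sketch of that classifier-and-reviewer pattern might look like the following. The scoring heuristic, thresholds and policy labels here are invented for illustration and are not YouTube's actual system.

```python
# Illustrative sketch only: a classifier scores an upload per policy and
# borderline cases are routed to human reviewers. The heuristic scorer,
# thresholds and labels are assumptions; YouTube's systems are not public.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    transcript: str


def classify(video: Video) -> dict:
    """Stand-in for a trained classifier: returns a confidence per policy area."""
    text = video.transcript.lower()
    return {
        "medical_misinformation": 0.9 if "bleach cures" in text else 0.05,
        "violent_misinformation": 0.05,
        "hate_speech": 0.05,
    }


def route(video: Video, auto_remove: float = 0.95, needs_review: float = 0.5) -> str:
    top_score = max(classify(video).values())
    if top_score >= auto_remove:
        return "remove"          # clear-cut violation, removed automatically
    if top_score >= needs_review:
        return "human_review"    # a reviewer confirms whether policy lines were crossed
    return "allow"


print(route(Video("abc123", "This video claims bleach cures the virus")))  # human_review
```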

One of the significant areas of impact has been the identification of new forms of abuse and misinformation, Mr Amin said.

“When new threats arise, systems initially lack the context to recognise them on a large scale. However, generative AI allows YouTube to quickly broaden the data set used to train its AI classifiers, enabling faster detection of such content.”
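
One way to picture that data-broadening step is sketched below, with a hypothetical generate_variants helper standing in for a generative model; the real tooling and prompts are not public.

```python
# Illustrative sketch: expanding a few seed examples of a newly observed abuse
# pattern into a larger labelled training set. `generate_variants` is a toy
# stand-in for a generative model; it is not a real YouTube or library API.
import random


def generate_variants(seed: str, n: int) -> list:
    """Toy paraphraser: a real system would prompt a generative model instead."""
    prefixes = ["breaking:", "confirmed:", "leaked footage shows", "shocking proof that"]
    return [f"{random.choice(prefixes)} {seed}" for _ in range(n)]


def broaden_training_set(seed_examples: list, per_seed: int = 50) -> list:
    # Label 1 marks policy-violating examples for classifier retraining.
    return [(variant, 1)
            for seed in seed_examples
            for variant in generate_variants(seed, per_seed)]


seeds = ["video game footage passed off as real war footage"]
rows = broaden_training_set(seeds)
print(len(rows))  # 50 synthetic labelled rows ready to retrain the classifier
```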

In the quarter ended March 31, more than 96 per cent of the 8.2 million videos that YouTube removed were first flagged by its automated AI-driven systems, Mr Amin said.

Some of the AI tools used by companies such as YouTube are built with machine learning frameworks such as TensorFlow and PyTorch to create deep learning models capable of analysing vast amounts of video content. They also use natural language processing tools to analyse and understand the context of video transcripts and comments.
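
As a publicly reproducible stand-in for that kind of natural language analysis, a transcript snippet can be scored with an open-source zero-shot classifier from the Hugging Face transformers library; this is an assumption-laden illustration rather than a description of YouTube's stack.

```python
# Illustrative only: scoring a transcript snippet with an open-source
# zero-shot text classifier. Assumes `pip install transformers torch`;
# this is a public substitute, not YouTube's internal tooling.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

transcript = "Drinking bleach will cure the virus, and doctors are hiding it."
labels = ["medical misinformation", "news reporting", "personal vlog"]

result = classifier(transcript, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```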

“It's healthy to pivot the conversation from what we need to do about generative AI to what we can do with it. We need defensive AIs to catch malicious ones,” Sam Blatteis, chief executive of The Mena Catalysts, told The National.

Analysts recommend that YouTube ensure its distribution algorithms do not promote misinformation.

“Historically, distribution algorithms have been driven by what triggers reactions and interest, which is essentially the attention economy … posing a significant risk,” Tim Gordon, co-founder and partner at UK-based Best Practice AI, told The National.

“However, there are opportunities to improve these [algorithms] through AI. We can use AI at scale to analyse YouTube videos, understand their content, and identify those most likely to spread misinformation.”
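
A simple way to illustrate that suggestion is a re-ranking step in which an engagement-driven ordering is adjusted by a misinformation-risk score before videos are surfaced; the figures and weighting below are invented for the example.

```python
# Illustrative sketch: re-ranking candidate videos so that a high
# misinformation-risk score outweighs raw engagement. The scores and
# the penalty weight are invented for the example.
candidates = [
    {"id": "v1", "engagement": 0.92, "misinfo_risk": 0.80},
    {"id": "v2", "engagement": 0.75, "misinfo_risk": 0.05},
    {"id": "v3", "engagement": 0.60, "misinfo_risk": 0.10},
]

RISK_PENALTY = 1.5  # how heavily risk counts against engagement


def rank_score(video: dict) -> float:
    return video["engagement"] - RISK_PENALTY * video["misinfo_risk"]


ranked = sorted(candidates, key=rank_score, reverse=True)
print([v["id"] for v in ranked])  # ['v2', 'v3', 'v1'] - the risky video drops to last
```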

In September, YouTube announced a series of AI resources to support video content creators globally and, in November, followed with guidelines requiring users to disclose fabricated or altered content.

The company said that creators must disclose when they have produced “altered or synthetic” content using AI, with an on-screen notification. Failure to disclose may result in the content being flagged or removed, and habitual offenders could face account suspension.

YouTube has also set out AI principles specifically designed to protect music artists and the integrity of their work, Mr Amin said.

“It's notable that YouTube uses a combination of algorithmic and human judgment to determine if content sits on this [misinformation] spectrum,” Dev Nag, chief executive of San Francisco-based AI firm QueryPal, told The National.

“But YouTube is much more hands-off when it comes to bias, which involves more subjective framing … this will also require a hybrid approach of machines [to detect harmful content] and humans [to validate AI’s findings].”

Updated: June 23, 2024, 4:00 AM