One of the first introductions to deepfakes for audiences around the world came in the 1994 film <i>Forrest Gump</i>, when the titular character meets President John F Kennedy. The interaction, which seemingly brought actor Tom Hanks face to face with a resurrected JFK in a brief exchange, took many months of work by skilled artists to pull off. Less than 30 years later, a computer can produce the same effect in seconds, with no major investment in time or treasure required, according to researchers at Canada's McGill University writing in <i>Navigating Fake News, Alternative Facts and Misinformation in a Post-Truth World</i>.

Today, deepfakes in film and entertainment are so common that they either go completely unnoticed, mistaken for the real thing, or go viral for that very reason: the effect is so convincing that many recent examples have hit headlines and tallied millions of views on TikTok and YouTube.

Deepfakes, also referred to as synthetic media, are computer-generated voices, images and videos created with a combination of AI and machine learning techniques. Many rely on a "generator" that creates samples and a "discriminator" that attempts to tell those samples apart from real-world ones, according to VentureBeat. Top-performing machine learning models can therefore create realistic portraits of people who do not exist, images of fictional apartment buildings, or replications of public figures doing fictional things.

While creating realistic-looking deepfake videos and images still requires advanced technical knowledge, as these AI systems proliferate and production becomes less exclusive, the industry is set to explode. As consumers, our relationship with music, film and advertising may change as the scale of a performer's work is exponentially expanded through synthetic media production.
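The generator-versus-discriminator pairing described above can be sketched on toy one-dimensional data. Everything in this example is an illustrative assumption (the Gaussian "real" distribution, the linear generator, the logistic-regression discriminator, the learning rate); real deepfake systems use deep neural networks, but the adversarial loop is the same: the discriminator learns to flag fakes, and the generator learns to fool it.

```python
import numpy as np

# Toy sketch of the adversarial "generator vs discriminator" idea.
# All distributions and parameters here are illustrative assumptions.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" samples the generator must learn to imitate: a 1-D Gaussian.
def real_samples(n):
    return rng.normal(loc=4.0, scale=0.5, size=n)

# Generator: maps random noise z to a sample via g(z) = a*z + b.
gen = {"a": 1.0, "b": 0.0}

def generate(n):
    z = rng.normal(size=n)
    return gen["a"] * z + gen["b"], z

# Discriminator: logistic regression D(x) = sigmoid(w*x + c),
# i.e. the estimated probability that x is a real sample.
disc = {"w": 0.0, "c": 0.0}

def discriminate(x):
    return sigmoid(disc["w"] * x + disc["c"])

lr = 0.05
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = real_samples(64)
    xf, _ = generate(64)
    dr, df = discriminate(xr), discriminate(xf)
    # Hand-derived log-loss gradients for w and c.
    disc["w"] += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    disc["c"] += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator update: push D(fake) toward 1 (fool the discriminator).
    xf, z = generate(64)
    df = discriminate(xf)
    # Chain rule through x = a*z + b: dx/da = z, dx/db = 1.
    gen["a"] += lr * np.mean((1 - df) * disc["w"] * z)
    gen["b"] += lr * np.mean((1 - df) * disc["w"])

fake, _ = generate(1000)
print(f"generator output mean: {fake.mean():.2f} (real data mean is 4.0)")
```

As training alternates, the generator's output distribution drifts toward the real one; once the two overlap, the discriminator can do no better than guessing, which is the equilibrium the technique aims for.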
UK company Synthesia did two commercials with the rapper Snoop Dogg that proved so successful that one of the company's subsidiaries wanted the same ad. Instead of reshooting, Synthesia used deepfake technology to change Snoop Dogg’s mouth movements to match the subsidiary’s name in the new commercial.

The creator of the viral Tom Cruise deepfakes on TikTok found such mainstream success that he started his own company, Metaphysic, which allows anyone to create a synthetic avatar of themselves. Another rapper, Kendrick Lamar, debuted a music video this year for his single <i>The Heart Part 5</i> that relies on deepfake celebrity faces superimposed on the artist as he performs. Actor Mark Hamill was made to look like a younger Luke Skywalker through synthetic media production in the recent Season 2 finale of <i>The Mandalorian</i>. And English footballer David Beckham got to skip learning new languages for his Malaria No More campaign, in which deepfakes helped him deliver his message in nine different languages.

As examples of this emerging technology proliferate, audiences may begin to wonder where talent ends and computer-generated data begins.

"Cirque du Soleil is so nice because of the artistic element in it, the acrobats doing it, how we are very amazed by these flexible bodies," Ping Shung Koo, co-founder of Data Science Rex in Singapore and an artificial intelligence expert, told <i>The National</i>. "But what if I now have Cirque du Soleil being performed by robots instead? Would that still capture that attention, that imagination, that wildness, and all that it needs to capture? Will we pay for it?"

AI-generated video, imagery and audio demand the same sort of questions.
"When it comes to AI-manipulated media, there's no single tell-tale sign of how to spot a fake," according to the MIT Media Lab, <a href="https://www.media.mit.edu/projects/detect-fakes/overview/" target="_blank">which ran an experiment called Detect Fakes</a> to develop tips on what to look out for.

After training neural networks on 100,000 deepfake videos and 19,154 real videos to detect manipulated media, the Media Lab suggests viewers pay close attention to the face, which is what the majority of deepfake producers manipulate. More specifically, look to see if the skin is too smooth or too wrinkly: a mismatch in skin ageing is a telltale sign the face has been digitally altered. Glasses are another giveaway, because AI-generated media struggles to depict glare and the way lighting naturally behaves. Deepfakes also still have a tough time with facial hair, so check whether it looks real. Blinking can expose a fake too, according to the Media Lab: if a person blinks too much or too little, the video may be a deepfake.

One of the more fun ways to explore how deepfakes work is by jumping on Dall-E 2, a product of OpenAI. The AI system can create realistic images and art from a text description. But is it art?

"Do submarines swim?" That is how Chris Dixon, a general partner at venture capital firm Andreessen Horowitz in Silicon Valley, described this moment in technology in an interview with <i>The New York Times</i>, as machines take on more and more tasks once thought to be the purview of human beings. "It feels like semantics. They certainly move fast in water. I don’t know if they actually say it’s swimming."

Whether or not Dall-E 2 is producing art is perhaps also a matter of semantics. "We’re certainly headed for very advanced machines," Mr Dixon said.