As the <a href="https://www.thenationalnews.com/arts-culture/film-tv/2023/05/03/why-are-hollywood-writers-going-on-strike-and-what-does-it-mean-for-viewers/" target="_blank">Hollywood writers’ strike enters its fourth week</a>, amid rumblings of a looming actors’ strike, we shouldn’t be surprised to see the role of AI increasingly enter the debate. <a href="https://www.thenationalnews.com/world/us-news/2023/05/02/why-did-hollywoods-writers-go-on-strike/" target="_blank">Protection for writers against AI usage</a> is one of several issues discussed between the writers’ guild and the studios in recent months, but as a somewhat abstract topic, it was initially pushed into the background in favour of <a href="https://www.thenationalnews.com/arts-culture/film-tv/2023/05/02/hollywood-writers-go-on-strike-demanding-higher-pay-in-streaming/" target="_blank">more tangible concerns</a> such as declining residuals – the royalties paid for repeat screenings – in the streaming age, and job losses due to shrinking writers’ rooms industrywide.

As commentators seek new angles, AI has risen to the fore. <i>The Hollywood Reporter</i>, for example, ran a headline earlier this month stating: “AI Could Covertly Cross the Picket Line”. Conceptually this is certainly true – in a sense, it already has: AI lives in every connected device on the planet, so picket lines are moot. Practically, however, it seems some way from reality. Even the most advanced current AI lacks emotional intelligence and what most writers would consider “imagination”. Its creative abilities are restricted to what it can copy, while its access to facts is restricted to what it can glean from the internet – an invaluable tool when used properly, but also rife with misinformation.
As if to test <i>THR</i>’s theory, <a href="https://www.thenationalnews.com/arts-culture/film-tv/2023/05/05/hollywood-insiders-fear-writers-strike-will-last-over-summer/" target="_blank">as the late-night chat shows went dark</a> at the beginning of the strike, the BBC carried out an experiment in which it asked ChatGPT to write scripts for hypothetical episodes of Stephen Colbert’s and Jimmy Fallon’s shows. The results weren’t terrible, but the flaws were many: the jokes were Christmas-cracker standard; the AI pulled Joe Biden’s age from an outdated source, de-aging him to 78; and it was unable to supply a source for poll results it had quoted. The scripts were also noted to lack an indefinable “human” element – a criticism echoed by similar experiments that can be found around the web. The BBC’s study concluded that “rumours of Hollywood’s demise at the hands of artificial intelligence appear exaggerated”.

That’s not to say there isn’t cause for writers’ concern, both as the technology develops and from historical precedent. During the last writers’ strike, in 2007-2008, streaming services were an emerging technology, and the residuals writers would receive from them going forward were, like AI today, one of the more peripheral items on the table. Few expected that by 2023 streaming would be the most popular home viewing platform, that some of the biggest players – Amazon, Apple, Netflix – would be primarily technology companies rather than traditional studios, or that the residuals agreement reached would prove utterly inappropriate for the new landscape. AI may just be emerging, but it would be wise to avoid repeating the same mistakes.

Outside of scriptwriting, AI is already widely used throughout the industry. You witness it at home every time Netflix’s algorithms study your viewing habits and suggest what you might like next, to keep you glued to the screen.
Studios, too, are turning to algorithms rather than traditional focus groups for commissioning, scouring the internet for what is “popular”, particularly in non-fiction storytelling. There’s no doubt AI can determine what’s popular, but can it determine what’s “interesting”, or spot an untold story that urgently needs to be shared – by its very nature not “popular”? There’s an undeniable human thirst for the out-of-the-ordinary, and that can’t be satisfied by technology designed to detect popular patterns.

On the production side, too, AI has made major inroads. Only last month, editing and FX giant Adobe announced that it would be integrating AI into its industry-standard Premiere Pro and After Effects tools “to improve post-production workflows and the efficiency of video editors”. In animation, AI can save hours of work by automating background generation or rotoscoping – the process of manually drawing animated elements onto existing live-action sequences, such as the lightsabres in the original <i>Star Wars</i> movies. These uses raise the moral dilemma of whether we should be encouraging AI to put humans out of work, but an optimistic interpretation would see AI take on the tedious, repetitive tasks, freeing up human animators to work on the more creative aspects of their art.

Elsewhere, AI use in the creative process carries a broader moral ambiguity. When <i>The Galactic Menagerie</i>, aka <i>Star Wars by Wes Anderson</i>, went viral last month, the online world marvelled at the fun we can have with AI. When Netflix artificially recreated Andy Warhol’s voice to narrate the docuseries <i>The Andy Warhol Diaries</i>, many celebrated the authenticity it added to the show. It’s not a huge step from here to the world of deepfake conspiracy theories, convincingly transplanting outlandish statements onto world leaders or celebrities – in essence, authentically creating the inauthentic.
The rise of AI is probably inevitable in almost every industry – that genie is out of the bottle, and it’s doubtful we could put it back if we wanted to. It seems vital that it’s used as a tool to assist humanity, not one to replace it, and certainly not one to deceive it. That may require regulation at a significantly higher level than negotiations between a union and the Hollywood studios.