When the perpetrator of the Christchurch attacks made the decision to broadcast his atrocity live on Facebook, he knew it would provoke the worst kind of human curiosity. He also knew that such curiosity would be intensified by people sharing the footage, and that social media platforms geared towards making material "go viral" would play their part. YouTube was reportedly seeing one upload of the 17-minute video every second. Facebook later announced that it had removed 1.5 million copies in the 24 hours following the massacre. These platforms, built by engineers and mathematicians, were once again forced to make snap judgments relating to social responsibility. It's part of an ongoing struggle to make such decisions transparent, timely and, above all, correct.

"These are firms founded in the United States, in Silicon Valley, and their notion of free speech comes out of a bastion of cyber libertarianism," says Sarah T. Roberts, assistant professor of information studies at UCLA. In other words, they believe that information wants to be free, and that they are merely a conduit.

The stated aim of Facebook's chief executive, Mark Zuckerberg, is to give people the power "to share anything they want". But when personal expression manifests itself in shocking, violent and graphic ways, demands are inevitably placed on Facebook, Reddit and others to judge what is and is not acceptable.

"They don't want to be there," says Mark MacCarthy, senior fellow at Georgetown Law and Business School in Washington. "They don't want to be making these tricky judgments, and if I were in their shoes, I wouldn't either. But they can't go back to a posture where they claim to just be platform providers with no say over this."

As a result, deeply complex issues have had to be distilled into a series of yes-or-no judgments. A few weeks ago, <em>The New York Times</em> reported on the existence of complex moderation rulebooks, hundreds of pages long, that attempt to define what Facebook users are allowed to post. The ad hoc way those rules have been assembled has resulted in decisions that some claim are misguided, ignorant of local issues and lacking cultural nuance. While all but the most extreme libertarians will have approved of the decision by the major social media platforms to block videos of the Christchurch killings, such judgments are rarely that clear-cut.

In a recent documentary about social media moderation called <em>The Cleaners</em>, a former Google lawyer, Nicole Wong, described the formation of the policy surrounding the footage of the execution of former Iraqi president Saddam Hussein: the video of his hanging was kept online "for historical purposes", while footage of the dead body was removed. "I've no idea if we made the right decision," Wong said. "History will tell us."

Once formulated, these policies have to be implemented by contracted third-party companies, which in turn hire thousands of moderators to manually review as many as 25,000 disturbing images and videos every day.
<em>The Cleaners</em> addresses how the mental health of these workers has been blighted by their exposure to distressing material, and how their concerns go unheeded by target-driven bosses who are unwilling to provide any additional resources. ("It's your job to look at child pornography, you signed a contract," was one boss's response.)

From Arizona to Manila, moderators have experienced workplace breakdowns and post-traumatic symptoms as a result of looking at horrific material, just so the rest of us don't have to. It's a terrible job, but someone's got to do it; as one moderator says in the documentary: "Algorithms can't do what we do."

Algorithms do, however, have considerable powers. Sophisticated fingerprinting technology, of the kind that can identify a song playing on the radio or help remove copyrighted material, was used extensively in the aftermath of the Christchurch killings, and it quickly blocked hundreds of thousands of copies of the video at source. But there's a marked difference between using this kind of technology to match known material and using artificial intelligence to assess newly produced images and videos for offensive content.

In recent months Facebook has introduced tools to detect so-called revenge porn, and Google has released software to help curtail the spread of child sexual abuse material, but AI is currently incapable of making sophisticated judgments about, say, whether a video containing guns is news footage or terrorist propaganda. Zuckerberg told the US Congress that AI holds the key to successful moderation, but some say that this merely passes the buck to machines that will never possess this capability.

"There is a value to outsourcing some of the worst work to machines to avoid human eyeballs being exposed," says Roberts, "but machines don't make judgment calls as we think of them. They don't consider factors beyond what they're programmed to do. As somebody recently said to me: whatever the AI is doing, it's not watching videos!

"That was such a good encapsulation of the limits of these tools."

After Christchurch, YouTube and Facebook were forced to temporarily escalate the role of AI in their systems. "They were losing control of the situation," says MacCarthy, "and so they got rid of material automatically." This "better safe than sorry" approach, however, removed many videos completely unrelated to Christchurch.

"The other problem," MacCarthy says, "is that this may not be something they reserve for just emergency circumstances." In this scenario, AI becomes a blunt tool of censorship – one that is completely antithetical to Silicon Valley's libertarian values.

For companies to decide what we should and should not see is evidently anti-democratic, not least because they can be (and are) subject to government influence. For machines to decide what we should and should not see gives them a level of power and control that we find unacceptable. And yet platforms have been created that require these things to happen for us to avoid seeing footage of some of the worst acts a human being can perpetrate.

Should social media companies save us from our own worst impulses, or allow us to follow them, and suffer all the unknown consequences?
It's a problem they don't want. But it's theirs to solve.