<span>The online abbreviation TL;DR is never well received by those of us who enjoy writing. Standing for “too long; didn’t read”, it curtly informs writers that they used too many words to get their point across and so have not been read at all, and that their efforts were a waste of time.</span> <span>It's unclear if Facebook appreciates how dismissive TL;DR sounds to anyone who regularly types out more than 20 words in a row, but that's the name the firm has given to a new AI assistant introduced during a company meeting last week. An audio recording of the meeting, obtained by </span><span><em>BuzzFeed News</em></span><span>, reveals that TL;DR will summarise articles – such as this one – into bullet points, saving you from having to read them. Unfortunately, dear reader, it doesn't exist yet, so you'll have to press on with this. I promise to wrap it up as quickly as I can.</span> <span>TL;DR is deemed necessary because of accelerating information overload. Scholars have complained about an excess of information for hundreds of years, but there's never been as much as there is today. It's easily generated and </span><span>distributed, and the competition for eyeballs is feverish. "Attention is zero-sum because every click [one company] gains, [another] loses," writes Vedant Misra at AI research firm OpenAI in a blog post. "That's why your email inbox is a battleground of people vying for your attention. So is the results page for every Google search."</span> <span>Companies such as Facebook, Twitter and Google have been described as “resellers” of our attention. Algorithms are dedicated to finding the best way to get us to read a tweet, open a message, click on a link. But what if we didn’t have to click the link at all? </span> <span>That’s the aim of TL;DR: to save us time by automatically summarising information. In the same way we can now listen to audiobooks at higher speeds, this is about getting us to consume more information more quickly. 
It’s also seen as an attempt by Facebook to become the gatekeeper of that information.</span> <span>TL;DR is, however, likely to do a much better job than its antecedents, thanks to advances in natural language processing. Back in 2012, a Reddit community called AutoTLDR began using a bot to produce summaries of online articles, reducing them in size by 50 per cent or more. It worked, but it felt like more of an experiment than a service. By 2016, Facebook was describing its deep learning algorithm, DeepText, as having “near-human accuracy”, and across the field, computers were showing marked improvements at processing text and pinpointing which words were more important. One study, at the University of Maryland, used AI to digest legalese and present the meaning of lengthy “terms of service” documents in a way people could understand. Firms such as Primer began using AI to help businesses process millions of documents in multiple languages, saving anyone from actually having to read them. </span> <span>Facebook now claims that its AI systems can immediately detect 95 per cent of hate speech posted on the platform, as opposed to just 52 per cent last year. Google has reported similar advances this month; its experiments with Imperial College London show “high linguistic quality in terms of fluency and coherence”.</span> <span>But do these systems really understand the words they're processing? Do we want our skim reading to be done by a machine that merely simulates understanding? If a human being were asked to summari</span><span>se some text, they would read through it, comprehend it and put it into simpler words. Computers are hamstrung by a lack of background knowledge, and can still struggle to get over linguistic hurdles. 
"Even the simplest sentences can be semantic minefields, littered with connotations and grammatical nuance that only native speakers can instantly make sense of," writes Christine Maroti, an AI research engineer at language firm Unbabel.</span> <span>So, the concerns surrounding TL;DR appear to be threefold. Firstly, that it might misunderstand texts and make simple mistakes, unintentionally propagating misinformation. Nick Inzucchi, a former Facebook employee who left the company earlier this month, has expressed concern in this regard. “AI will not save us,” he wrote. “The implicit vision guiding most of our integrity work today is one where all human discourse is overseen by perfect, fair, omniscient robots owned by Mark Zuckerberg. This is clearly a dystopia, but one so deeply ingrained we hardly notice it any more.”</span> <span>Secondly, that it does a disservice to its users by assuming that they wish to outsource their understanding to an algorithm. </span><span>Thirdly, that revenues that might accrue to organisations that pay writers may end up diverted into Facebook's coffers, as people consume their summaries instead of the original work. Perhaps more fundamentally, it raises the profound question of whether writing is merely information. Many of us love to read precisely because of the detail it conveys and the emotions it provokes. TL;DR, even if it did its job perfectly, would be short on detail and lack emotional depth. And that, surely, has to be a shame.</span>