What if I told you these words were not written by a human? What if I said they are the output of an artificially intelligent (AI) conversational agent, a chatbot? This is not the case, this time. These are my words. However, we don't know what clever artificial creature will write or co-write our articles, essays, tweets and poems in the coming months and years.

Launched in November 2022, ChatGPT (Generative Pre-trained Transformer) has caused significant concern among academics and headteachers. This AI-assisted text generator, developed by San Francisco-based OpenAI, produces polished natural-language content (in English) in response to any question its interrogator might choose to pose. Think Siri, but with a doctoral degree in everything. The output is variable, and ChatGPT occasionally gets things wrong. And, to my taste, its poetry is terrible. However, ChatGPT is based on machine learning and will get slicker, sharper and more accurate with time. I suspect the poetry will remain substandard, though, unless the machine can learn to fall in love, experience heartbreak and disregard the rules.

My son, currently a graduate student, introduced me to ChatGPT, claiming that it could write passable essays. I had seen earlier AI applications making similar claims and had always been comforted by their failure to deliver. Much of the earlier software I had tried produced barely comprehensible sentences. ChatGPT, however, is a different story. After test-driving it, my immediate thought was: game over.

I asked ChatGPT the type of essay question I might assign to first-year psychology students: compare and contrast the ideas of Sigmund Freud and Carl Jung. Although the output fell short of a complete essay, ChatGPT gave me five paragraphs of well-articulated prose, making valid points. This was easily obtained content that a dishonest student could pass off as their own. Furthermore, it would be undetectable by today’s plagiarism-detection software.
To get a flavour of ChatGPT's output, this is the opening paragraph the bot produced in response to the Freud/Jung question: <i>Sigmund Freud and Carl Jung were both influential figures in the field of psychology, and they are perhaps most well-known for their contributions to the development of psychoanalysis. However, they had some significant differences in their ideas and approaches to psychology …</i>

Understandably, there are huge concerns that this new technology could be used for plagiarism, or “AI-giarism” (passing off AI-written content as one’s original work). The rise of internet-fuelled plagiarism has already rocked academia. For example, last year, the UK government passed the <a href="https://www.thenationalnews.com/opinion/comment/2022/11/02/academic-misconduct-and-cheating-is-not-new-so-whats-changed/">Skills and Post-16 Education Bill</a>, making it a criminal offence to offer cheating services, such as writing bespoke essays for students. Similar laws exist internationally, with New Zealand, Ireland and the US having passed such legislation. The emergence of these laws is a response to a rising international tide of academic dishonesty and cut-and-paste piracy.

In response to the backlash from academics and headteachers, OpenAI is working to counter such academic dishonesty by making ChatGPT's output identifiable through a digital watermark or fingerprint that would mark the text as machine-generated. The solution sounds far from bulletproof: a tech-savvy student intent on an easy ride could simply reword the ChatGPT output. Attempting to block or outwit AI tools is a losing battle. It is time we fundamentally rethought assessment in higher education. For example, bring back oral exams for undergraduates, take a mastery-based (pass/fail) approach, or leave the high-stakes assessments to would-be employers.
If I were a business owner, I would only hire graduates after performing rigorous, in-house assessments of their capabilities, regardless of their high GPAs and glowing university transcripts. ChatGPT is just another indicator that higher education is overripe for radical transformation.

I have a colleague with research interests in AI. He predicts we will need fewer lecturers and instructors in the coming decades. Why should people stand at the front of a classroom or lecture hall sharing opinions and information when we can automate this task? But what if something is unclear and students have questions? This is where applications like ChatGPT and its cleverer successors will step in. Of course, we will still need research-focused academics, those who ask novel questions and create new knowledge, but those who merely share existing knowledge will be in far less demand.

I spoke to another colleague who had also recently discovered ChatGPT. He told me that he was having a bout of insomnia, felt lonely and wanted to talk to someone. Although he did not succumb to the temptation (this time), he seriously considered having a round of Q and A, an intelligent conversation, with his new artificial companion, ChatGPT.

We need to think hard about how we use such technologies. Ideally, we need a relationship with technology that truly benefits our long-term development and well-being. Where there is potential harm to the individual or society, we must identify ways to mitigate or minimise the ill effects as soon as possible. In 1889, we invented the automobile, but it was not until 1959 that we developed the seatbelt. In 1913, we first put cigarettes in packs of 20, but it was not until 1965 that we added a health warning to the box. Recently, we have rapidly embraced all kinds of digital technologies. So let’s identify any harmful effects and mitigate them sooner rather than decades later.