Last week our team, an international collaboration spanning four universities, published its latest research findings in the journal <i>Informatics</i>. Our project explored chatbots, those occasionally annoying computer programs that simulate conversation with human users. Led by Dr Mohammad Kuhail of Zayed University, we examined whether altering a chatbot's personality traits might make it more trustworthy, engaging and likeable.

Such research is increasingly important, as these eager electronic helpers facilitate more and more of our routine online transactions. Also known as conversational agents, chatbots are everywhere, from "celebrity" helpers such as Siri and Alexa to the multitude of nameless bots that pop up online offering unsolicited assistance.

Some of the earliest chatbots, however, pre-date the internet. Eliza, for example, was developed in 1966. This chatty piece of software was based on the communication style of a Rogerian psychotherapist: it typically repeats back what the user has said, while also posing an open-ended question. Tell Eliza that you hate her, for example, and she will respond with something like: "I see. You hate me. Why do you think you feel that way?"
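To see how little machinery is needed to create that effect, consider this minimal, hypothetical sketch in Python of an Eliza-style exchange. The word lists and follow-up questions here are invented for illustration; Weizenbaum's original program used far more sophisticated keyword and decomposition rules.

```python
import random

# Naive pronoun "reflections": "I hate you" becomes "you hate me".
# (This simple table cannot handle "you" as a subject correctly;
# the original Eliza used ranked keyword-matching rules instead.)
REFLECTIONS = {
    "i": "you",
    "me": "you",
    "my": "your",
    "am": "are",
    "you": "me",
    "your": "my",
}

# Open-ended follow-up questions in the Rogerian style.
QUESTIONS = [
    "Why do you think you feel that way?",
    "How long have you felt this way?",
    "Can you tell me more about that?",
]

def reflect(statement: str) -> str:
    """Echo the user's statement back with the pronouns swapped."""
    words = statement.lower().strip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words).capitalize()

def eliza_reply(statement: str) -> str:
    """Repeat the statement back, then pose an open-ended question."""
    return f"I see. {reflect(statement)}. {random.choice(QUESTIONS)}"

print(eliza_reply("I hate you"))
# Possible output: I see. You hate me. Why do you think you feel that way?
```

Even a crude reflect-and-ask loop like this can sustain a surprisingly convincing conversation, which may be why Eliza fooled so many.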
As an undergraduate psychology student, I would introduce some of my fellow students to Eliza as though she were an online human. These conversations might go on for several minutes before my naive classmate realised they were pouring their soul out to a bot.

Today's bots, though, are not trying to pass themselves off as humans. Generally, it is enough that they are polite and efficient and help us achieve our goals. According to a 2022 report by Gartner, a leading information technology advisory company, chatbots will become the primary customer service channel for roughly a quarter of all organisations in the US by 2027. The same report suggests that 54 per cent of us already interact regularly with chatbots, typically in commercial contexts such as online shopping or travel bookings.

What are the psychological implications of our reliance on artificially intelligent chatbots? Does the fact that we are dealing with non-human agents make us less polite? Research presented at the Conference on Human Factors in Computing Systems in 2019 suggests that more than 10 per cent of our interactions with chatbots are abusive. Some people feel disinhibited when talking to a chatbot, using a tone and language they would rarely use with a fellow human.

In the future, such chatbot abuse might not be so lightly tolerated. An article published in 2016 in the <i>Harvard Business Review</i> warns that cursing out an underperforming chatbot may soon become a workplace disciplinary issue. At the very least, the article suggests, it represents poor leadership by example.

Consider also how the social development of a young child might be affected by regularly witnessing an adult verbally abuse a chatbot. One of my colleagues, a psychologist specialising in childhood development, was concerned that her young children were being rude and abrupt when speaking to chatbots. She now insists on "please" and "thank you" for bots too.

Bots can also have a subtler psychological impact on us: reinforcing existing social stereotypes. How, for example, should a bot be represented by a human avatar? What ethnicity? What gender? In 2019, Unesco published a policy paper titled "I'd Blush if I Could: Closing Gender Divides in Digital Skills through Education". The document warns that, as gendered chatbots become increasingly common, they have the power to entrench and reinforce existing gender stereotypes.

The report draws attention to the fact that the English-language versions of Siri, Alexa and Cortana were all initially assigned female names and personas by default. More troublingly, even when faced with aggressive and abusive enquiries, these servile bots with female personas remained docile, agreeable and, occasionally, even flirtatious. Perhaps this is unsurprising: about 91 per cent of those employed in Silicon Valley are male.

On March 31, 2021, Apple updated its operating system so that Siri no longer defaults to a female voice. Nevertheless, how chatbots are represented remains a critical social issue. Furthermore, the metaverse (collective virtual and augmented reality) will introduce us to life-like, three-dimensional chatbots: slappable Siri.

Our research is a small contribution to understanding how we can make chatbot interactions more effective, human and humane. In our most recent study, we tinkered with chatbot personality, dialling up and down the levels of extraversion, agreeableness and conscientiousness (the tendency to be careful, diligent and dutiful). Our usage context was academic advising: helping undergraduate students navigate college life.

Personality mattered. All the chatbots were equally helpful, but the students trusted, liked and intended to use those with higher levels of extraversion and agreeableness.

As more of life is lived online, making the world a better place becomes synonymous with improving the internet: not its speed and coverage, but its content and culture. Interdisciplinary research is critical to this mission. That focus is more important than ever as we begin unlocking the potential of the metaverse.