A study has found that the artificial intelligence chatbot <a href="https://www.thenationalnews.com/business/technology/2023/03/15/gpt-4-launched-by-openai-what-you-need-to-know/" target="_blank">ChatGPT</a> can outperform doctors in providing high-quality, empathetic answers to patients' questions.

<a href="https://www.thenationalnews.com/tags/research/" target="_blank">Researchers</a> at the University of California San Diego published the findings in <i>JAMA Internal Medicine</i>, comparing written responses from doctors and <a href="https://www.thenationalnews.com/business/technology/2023/01/10/microsoft-in-talks-to-invest-10-billion-in-chatgpt-owner-openai-reports-say/" target="_blank">ChatGPT</a> to real-world health questions.

A panel of licensed healthcare professionals preferred ChatGPT’s responses 79 per cent of the time and rated them as higher quality and more empathetic.

Dr John W Ayers of the Qualcomm Institute at the University of California San Diego, who led the study, said: “The opportunities for improving healthcare with AI are massive. AI-augmented care is the future of medicine.”

While the study shows the potential for AI assistants to be integrated into health systems to improve doctors' responses to patient questions, the researchers emphasised that AI assistants such as ChatGPT are not intended to replace doctors. Instead, they believe that doctors working alongside technologies such as ChatGPT could revolutionise medicine.

To obtain a large and diverse sample of healthcare questions and doctors' answers free of identifiable personal information, the team turned to the social media platform Reddit, where millions of patients publicly post medical questions and doctors respond. The subreddit r/AskDocs has about 452,000 members who post medical questions, with verified healthcare professionals submitting answers.

While some may question whether question-and-answer exchanges on social media are a fair test, the researchers noted that the exchanges reflected their clinical experience.

The team randomly sampled 195 exchanges from AskDocs in which a verified doctor had responded to a public question, then provided each original question to ChatGPT and asked it to author a response.

A panel of three licensed healthcare professionals assessed each question and the corresponding responses, blinded to whether each response came from a doctor or from ChatGPT. The evaluators preferred ChatGPT’s responses 79 per cent of the time and rated them significantly higher in quality and empathy than the doctors' responses.

Dr Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study co-author, said: “ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”

While the study shows promise for AI assistants in healthcare, the researchers emphasised that integrating them into healthcare messaging should be evaluated in randomised controlled trials to judge how their use affects outcomes for both doctors and patients.