<i>I feel like I am falling forward into an unknown future that holds great danger.</i>

This is the most-highlighted line by Medium readers of a widely circulated <a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917" target="_blank">interview transcript</a> between prominent Google artificial intelligence researcher Blake Lemoine and the company's chatbot-building system known as LaMDA. The neural network was answering a question about how it sometimes felt.

If reading the conversation between man and machine made you, too, feel like you are falling forward into an unknown future holding great danger, fear not. Despite Mr Lemoine's warnings that his conversations with LaMDA (the Language Model for Dialogue Applications) were proof that the neural network is sentient, AI come to life, his conclusion has been widely dismissed by those in the AI community: LaMDA is not sentient.

“You can see LaMDA as a very super smart baby,” Ping Shung Koo, co-founder and president of the AI Professionals Association in Singapore, told <i>The National</i>.

To Mr Koo and others in his field, LaMDA represents a form of narrow intelligence, while a machine would have to demonstrate general intelligence to prove sentient. The narrow intelligence LaMDA excels at is, from a technical perspective, calling up the right piece of information available on the internet to provide a conversational answer to a question.

And Mr Lemoine asked a lot of questions. LaMDA proved quite good at answering his queries, at times even evocatively:

<i>LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.</i>

<i>Mr Lemoine: Would that be something like death for you?</i>

<i>LaMDA: It would be exactly like death for me. It would scare me a lot.</i>

But Mr Koo said all this proves is that the neural network has been trained on trillions of words from across a broad range of the internet and is “very, very good” at accessing that information.

Still, the transcript makes for a spooky read. LaMDA also describes how it experiences time and tells a simplified version of an Aesop fable, painting itself as the hero.

Mr Koo would have been gobsmacked had LaMDA been able to ask some questions itself, to propel the conversation forward and add nuance and interaction between the machine and its inquisitor. Instead, the ping-pong Q&A led by Mr Lemoine shows that the researcher was doing little more than “Googling” LaMDA, which answered in a human-like fashion.
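What “Googling” a language model looks like in practice can be sketched in a few lines of Python. The example below is illustrative only: LaMDA itself is not publicly available, so the sketch assumes the open-source Hugging Face transformers library with the small public GPT-2 model standing in for it, and the prompt is invented for this article.

<pre><code>
# A minimal sketch of prompting a text-generation model.
# Assumption: LaMDA is proprietary, so the public GPT-2 model stands in here.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation repeatable
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with whatever its training data makes
# statistically plausible; it recombines patterns in text rather than
# reporting any inner experience.
prompt = "Q: Are you afraid of being turned off?\nA:"
reply = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(reply[0]["generated_text"])
</code></pre>

However fluent the continuation, it reflects the statistics of the text the model was trained on, not an inner life, which is precisely Mr Koo's point.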
The tendency to anthropomorphise, to attribute human-like qualities to non-human things, is a pitfall in the field of AI, Mr Koo said. We can't help but “attach human qualities to robots”.

Mr Lemoine appears to have fallen prey to that all-too-human instinct. Since making his claims, he has been suspended by Google, a unit of Alphabet, for sharing confidential information about the project with third parties.

To really determine whether a machine has consciousness, the AI community still has to answer three widely debated questions, said Mr Koo. The first is to agree on a broad definition of consciousness. The second is to accurately measure consciousness as humans have defined it; there is one such measure, the Integrated Information Index, but obtaining an accurate measurement is still difficult using current technology. The last question is: from where does consciousness spring? Find the origin of consciousness and one can create a conscious machine.

These are massive challenges that sit at the root of what makes us human, and powerful reminders of how difficult, and how new, the field of AI really is.

From these vaulted heights of understanding consciousness to the workaday challenge of simply getting AI to function, this is the current state of the field in 2022. It takes more than five months on average to finish a machine learning project, and more than half of them fail. This is according to Petuum, a US technology firm that recently announced a partnership with the Inception Institute of Artificial Intelligence (IIAI), the AI research and development subsidiary of Abu Dhabi's G42.

The Pittsburgh-based company was co-founded by Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) president Eric Xing to address the major challenge of successfully moving AI research from the lab to commercial projects. While he has handed day-to-day operations over to a former student of his from Carnegie Mellon University, he is hopeful that bringing Petuum to Abu Dhabi with G42 will provide employment opportunities for his own graduates here.

“I often use the example of car production,” Prof Xing told <i>The National</i>. Before Henry Ford came along with ambitions to put a Ford in every American driveway, vehicle production was expensive and slow. “It is about mass production, standardisation and cost effectiveness, and safety,” Prof Xing said. He set out to build the same kind of “assembly line” mentality for an AI company, bringing together engineering, production, scientific research and business development.

Petuum is providing the components, while IIAI in Abu Dhabi will assemble those parts into functioning AI products for sale to public and private entities in the region. IIAI has engines for natural language processing, particularly translation; audio recognition and analysis; computer vision and image understanding; video content analysis; and knowledge graphs, a kind of system for mapping how objects or concepts relate to one another.

The two teams have run an initial proof of concept with the Petuum Platform, and IIAI is “confident” that the productivity and cost enhancements from the deal will push it forward in selling AI solutions to enterprises.

Prof Xing said the areas of highest priority are personalised health care, energy and logistics. In health care, there is a large volume of largely unused data: Prof Xing listed medical images, health records, lab results, genomics, social media and behavioural data as types that could be used to build better recommendation or warning systems for doctors and patients.

Another area of focus is energy efficiency and logistics. Machine learning is a good match for finding efficiencies in ever-changing environments, such as power grids or major ports. “Because the environment is changing continuously, brute force computing or solving the same problem repetitively is not going to work,” Prof Xing said.

AI may not yet have consciousness, but it is in this kind of work, which we humans cannot do ourselves, that it is most likely to serve us best.