The robots here are being trained to think. In front of a makeshift kitchen inside what appears to be a plush, ultra-modern office, a humanoid machine stands to attention and waits for a command.

“Hey Figure 01, what do you see right now?” a man asks the robot.

“I see a red apple on a plate in the centre of a table,” Figure 01 replies. “A drying rack with cups and plates and you standing nearby with your hand on the table.”

The robot is then given a series of tasks to perform, including finding the man something to eat, organising rubbish and explaining why it made the decisions it did. When asked to evaluate its performance, Figure 01 argues it did “pretty well”, before the man walks away from the now tidy kitchen while eating the apple he was handed.

The remarkable exchange was only made possible by recent advancements in artificial intelligence technology. California-based Figure, the robotics start-up behind Figure 01, is incorporating OpenAI’s GPT software into its creations and has raised hundreds of millions of dollars from investors such as Nvidia, Microsoft and Amazon founder Jeff Bezos.

The result? A walking, talking, dexterous robot that appears to understand humans and acts autonomously in the physical world.

“Every time I watch one of these videos, I’m stoked,” Meta researcher Jianing Yang tells <i>The National</i>. “It feels like this is all happening faster than you would expect.”

Mr Yang, who is originally from Beijing, is also pursuing a doctorate in computer science and engineering at the University of Michigan. He does research on embodied AI and robotics and dreams of building and deploying household robots to homes around the world.

“If you look at the speed of progress [in this field], it is accelerating,” he says.
“It isn’t just advancing at a constant speed, it is exponentially growing every year … I think this will come faster than what we thought.”

During his studies, Mr Yang created a robot that connects a large language model, in this case ChatGPT, with the real world. Similarly to Figure 01, Mr Yang’s robot is able to scan its surroundings and understand complex language. It turns real-life queries into code and then searches a 3D-mapped area to help with a user’s request.

“Imagine if you’re hungry and want some food,” Mr Yang says. “The large language model will help process this request into software terminology, then find the interesting parts of a three-dimensional room that could potentially help with the user’s request.”

In a video published on YouTube, Mr Yang’s robot is seen locating a doughnut after its user complained about being hungry. It also finds a yellow yoga mat and a television upon request.

He envisions a future in which you unbox a brand new robot at home and it scans the room where it is switched on. “Once you have that 3D representation or a mesh of your home, you can reason with various requests and commands on that representation.”

It’s not just Figure that is working to capitalise on this moment. The AI boom has set off a race in the field of robotics, with similar breakthroughs achieved by Agility Robotics, Sanctuary AI and 1X Technologies. Tesla’s Optimus robot <a href="https://www.thenationalnews.com/business/technology/2023/09/26/teslas-optimus-robot-sorts-coloured-blocks-and-strikes-yoga-poses/" target="_blank">made headlines last year</a>, sorting through coloured blocks and striking yoga poses.

Mr Yang says a number of Chinese companies are in the game, too, pointing to Unitree Robotics and UBTECH Robotics. The latter had its Walker S robot strike the gong on the Hong Kong Stock Exchange last year after it became the first humanoid robot company listed on the exchange’s main board.
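The pipeline Mr Yang describes above can be sketched in a few lines of Python. This is a toy illustration of the idea, not his actual system: the language-model step is stubbed out with a simple keyword table (a real system would call a model such as ChatGPT), and all names and objects here are our own invention.

```python
# Toy sketch: a user request is mapped to object categories, which are then
# looked up in a 3D representation of a scanned room.

from dataclasses import dataclass


@dataclass
class SceneObject:
    label: str
    position: tuple  # (x, y, z) in metres, from the 3D scan


# The room's 3D "mesh", reduced to labelled object positions (illustrative).
scene = [
    SceneObject("doughnut", (1.2, 0.8, 0.9)),
    SceneObject("yoga mat", (3.5, 0.0, 0.0)),
    SceneObject("television", (0.0, 2.1, 1.1)),
]


def interpret_request(request: str) -> list[str]:
    """Stand-in for the language model: map a request to target labels."""
    keyword_to_labels = {
        "hungry": ["doughnut"],
        "exercise": ["yoga mat"],
        "watch": ["television"],
    }
    return [label
            for keyword, labels in keyword_to_labels.items()
            if keyword in request.lower()
            for label in labels]


def find_objects(scene, labels):
    """Search the 3D representation for objects matching the query."""
    return [obj for obj in scene if obj.label in labels]


matches = find_objects(scene, interpret_request("I'm hungry, find me something"))
for obj in matches:
    print(f"{obj.label} at {obj.position}")
```

Given the request "I'm hungry", the stub maps the query to the doughnut and the search returns its location in the room, mirroring the behaviour shown in Mr Yang's YouTube video.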
Robots are already being tested and used in warehouse settings and Mr Yang predicts they will next arrive in the office. The final stop will be the home.

“Everyone has their own house or apartment and they’re all arranged differently,” Mr Yang says. “The diversity in the home is probably the highest.”

The only thing holding AI-powered robots back right now is a lack of data, Mr Yang believes.

“It’s very hard to collect robot data,” he says. “You can do teleoperation, which is currently very expensive, or you do reinforcement learning, but it’s currently not very efficient in how fast the robot can learn a task.”

Reinforcement learning is a machine learning technique that mimics the trial-and-error process that humans use to develop skills and achieve goals.

Efforts in the robotics industry and academia are focused on this problem, Mr Yang says, but more is required to scale up the data.

“It’s still definitely not at the level needed,” he explains. “I think big innovation in both science and engineering has to happen to unblock robotics.”
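The trial-and-error loop at the heart of reinforcement learning can be shown with a toy example of our own (not from the article): an agent choosing between three hypothetical grips, each with a hidden success rate, and gradually settling on the one that works best.

```python
# Minimal illustration of reinforcement learning's trial-and-error loop:
# try actions, observe rewards, shift towards what works (epsilon-greedy).

import random

random.seed(0)

actions = ["grip_a", "grip_b", "grip_c"]
# Hidden success probabilities for each grip (unknown to the agent).
success_prob = {"grip_a": 0.2, "grip_b": 0.8, "grip_c": 0.5}

values = {a: 0.0 for a in actions}   # the agent's estimated value of each action
counts = {a: 0 for a in actions}

for trial in range(2000):
    # Mostly exploit the current best estimate, sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)

    reward = 1.0 if random.random() < success_prob[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate towards the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print("learned best action:", max(values, key=values.get))
```

After a few thousand trials the agent reliably identifies the best grip, but notice how many attempts that took for a three-option toy problem. Mr Yang's point is that this inefficiency, scaled up to real robots acting in the physical world, is exactly why data collection remains the bottleneck.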