<a href="https://www.thenationalnews.com/tags/ai/" target="_blank">Artificial intelligence</a> technologies are not dangerous as they are not capable of generating behaviour or thoughts, Al Rizzi, chief technology officer at the Boston Dynamics AI Institute, told <i>The National</i> at the International Conference on Robotics and Automation in London on Wednesday.

Large <a href="https://www.thenationalnews.com/tags/language" target="_blank">language</a> models such as <a href="https://www.thenationalnews.com/arts-culture/2023/05/28/how-chatgpt-has-the-potential-to-change-our-view-of-the-arab-world/" target="_blank">ChatGPT</a> that understand language well enough to produce sentences lead “people to believe that they're actually intelligent and know what they're talking about”, Mr Rizzi said. “My very pessimistic interpretation is no, they don't – all they do is they know how to generate words that go together.”

In addition to working on the <a href="https://www.thenationalnews.com/tags/technology" target="_blank">technology</a>, creators also have to consider ethical issues, societal adoption questions, policy issues and what the world should do with these technologies, he added. They also need to be aware of how to take advantage of new technologies in a way that helps society rather than harms it, Mr Rizzi said.

He said his goal is to make robots more accessible, easier to interact with and “more trustable, more reliable and more capable”.

“In the long term, our expectation is that robots should be tools that are essentially ubiquitous in society – things that should be enabling us to do more in our day to day lives and serve as helpers for us,” he said.

Mr Rizzi is interested in helping realise that by making the machines more capable and more intelligent – but more intelligent does not mean they have emotions or behave like people. It means, he said, that they are able to understand instructions, what they are being asked to do and how they interact with the physical world so they can do useful things for humans.

“We want robots to be tools that help everybody in society deal with the physical world around them,” he said. “Whether that's an assistant for me, or when I'm a little bit older, a personal assistant robot who helps me with healthcare issues or keeping my home orderly. Those are really interesting capabilities.”

Meanwhile, the <a href="https://www.thenationalnews.com/tags/eu" target="_blank">EU</a> and the <a href="https://www.thenationalnews.com/tags/us/" target="_blank">US</a> said on Wednesday that they would soon release a voluntary code of conduct on AI, hoping to develop common standards among democracies as <a href="https://www.thenationalnews.com/tags/china/" target="_blank">China</a> makes rapid gains.

Both political and technology industry leaders have been warning of the growing risks as AI takes off, with potentially wide-ranging effects on privacy and other civil liberties.

After talks with EU officials in Sweden, <a href="https://www.thenationalnews.com/tags/antony-blinken" target="_blank">US Secretary of State Antony Blinken</a> told reporters that western partners felt the “fierce urgency” to act and would ask “like-minded countries” to join the voluntary code of conduct.

“There's almost always a gap when new technologies emerge,” Mr Blinken said, with “the time it takes for governments and institutions to figure out how to legislate or regulate”.
European Commission Vice President Margrethe Vestager added that a draft would be put forward “within weeks”.

“We think it's really important that citizens can see that democracies can deliver,” she said. She voiced hope “to do that in the broadest possible circle – with our friends in Canada, in the UK, in Japan, in India, bringing as many on board as possible”.

Sam Altman, whose firm OpenAI created the popular ChatGPT bot, took part in the EU-US Trade and Technology Council talks, hosted this year in the northern Swedish city of Lulea. The forum was set up in 2021 to try to ease trade frictions after the turbulent US presidency of Donald Trump but has since set its sights largely on AI.

In a joint statement released by the White House and the European Commission, the two sides called AI a “transformative technology with great promise for our people, offering opportunities to increase prosperity and equity”.

“But in order to seize the opportunities it presents, we must mitigate its risks,” it said. It added that experts from the two sides would work on “co-operation on AI standards and tools for trustworthy AI and risk management”.

They also discussed how to work together on sixth-generation mobile technology, an area in which Europeans have taken an early lead.

The EU has been moving forward on the world's first regulations on AI, which would ban biometric surveillance and ensure human control of the technologies, though the rules would not enter into force before 2025 at the earliest.

China has also discussed regulations, but western powers fear that Beijing, with its growing prowess in the field and willingness to export to fellow authoritarian countries, could effectively set global standards.

While concerns have risen about China in the EU, the bloc as a whole has yet to take as assertive a stance as the US, with French President Emmanuel Macron recently leading a major business delegation to the world's second-largest economy.

But Mr Blinken played down differences between the US and European positions on China, saying that “none of us are looking for a Cold War”. “On the contrary, we all benefit from trade and investment with China, but as opposed to decoupling, we are focused on de-risking,” he said.

The US has made no serious effort to rein in AI despite rising calls for regulation, including from some in the tech industry. Technology leaders, including Mr Altman, warned in a joint statement on Tuesday that AI could put the world at risk without regulation.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they wrote.

ChatGPT burst into the spotlight late last year as it demonstrated an ability to generate essays, poems and conversations with minimal input. Hoping to demonstrate both the strengths and risks of AI, Danish Prime Minister Mette Frederiksen on Wednesday delivered a speech to parliament partly written by ChatGPT.

“Even if it didn't always hit the nail on the head, both in terms of the details of the government's work programme and punctuation … it is both fascinating and terrifying what it is capable of,” she said.

The Computer and Communications Industry Association, which represents major technology firms, said in a statement that it welcomed the “heightened, pointed transatlantic engagement” on AI at the meeting in Sweden. But it reiterated its opposition to any EU fees or actions against foreign tech companies.