The US used <a href="https://www.thenationalnews.com/world/uk-news/2023/12/16/ai-deployed-for-first-time-in-major-gaza-battlefield-role/" target="_blank">artificial intelligence</a> to identify targets hit by air strikes in the Middle East this month, a defence official told Bloomberg News, revealing the military's growing use of the emerging technology in combat.

Machine-learning algorithms that can teach themselves to identify objects helped to narrow down <a href="https://www.thenationalnews.com/world/us-news/2024/02/02/us-launches-retaliatory-strikes-on-iraq-and-syria/" target="_blank">targets for more than 85 US air strikes on February 2</a>, said Schuyler Moore, chief technology officer for US Central Command (Centcom), which runs military operations in the Middle East.

The Pentagon said those strikes were conducted by bombers and fighter aircraft against seven facilities in Iraq and Syria in <a href="https://www.thenationalnews.com/mena/jordan/2024/01/29/drone-attack-jordan-us/" target="_blank">retaliation for a deadly strike on US personnel at a Jordan base</a>.

“We’ve been using computer vision to identify where there might be threats,” Ms Moore told Bloomberg News. “We’ve certainly had more opportunities to target in the last 60 to 90 days.” She said the US was currently searching for “an awful lot” of rocket launchers belonging to hostile forces in the region.

The military has previously acknowledged using computer-vision algorithms for intelligence purposes, but Ms Moore’s comments are the strongest known confirmation that the US military has used the technology to identify enemy targets that were subsequently hit.
The US strikes, which the Pentagon said destroyed or damaged rockets, missiles, drone storage facilities and militia operations centres, among other targets, were part of President Joe Biden's response to the <a href="https://www.thenationalnews.com/world/us-news/2024/02/02/biden-honours-us-soldiers-killed-in-jordan/" target="_blank">killing of three service members</a> in an attack on January 28 at a military base in Jordan. The US <a href="https://www.thenationalnews.com/mena/iraq/2024/02/01/kataib-hezbollah-islamic-resitance/" target="_blank">attributed the attack to Iranian-backed militias</a>.

Ms Moore said AI systems had also helped identify rocket launchers in Yemen and surface vessels in the Red Sea, several of which Centcom said it destroyed in weapons strikes this month. Iran-backed Houthi militias in Yemen have repeatedly attacked commercial ships in the Red Sea with rockets.

The targeting algorithms were developed under Project Maven, a Pentagon initiative started in 2017 to accelerate the adoption of AI and machine learning across the Defence Department and to support defence intelligence, with early prototypes focused on the US fight against ISIS militants.

Ms Moore said US forces in the Middle East had spent the past year experimenting with computer-vision algorithms that can locate and identify targets from imagery captured by satellites and other data sources, trying them out in exercises.

“October 7 everything changed,” Ms Moore said, referring to the Hamas attack on Israel that preceded the war in Gaza. “We immediately shifted into high gear and a much higher operational tempo than we had previously.”

US forces were able to make “a pretty seamless shift” into using Maven after a year of digital exercises, she added. Ms Moore emphasised that Maven’s AI capabilities were being used to help find potential targets, not to verify them or to deploy weapons.
She said exercises late last year, in which Centcom experimented with an AI recommendation engine, showed that such systems “frequently fell short” of humans in proposing the order of attack or the best weapon to use.

Humans constantly check the AI’s targeting recommendations, she said. US operators take seriously their responsibilities and the risk that AI could make mistakes, she said, and “it tends to be pretty obvious when something is off”.

“There is never an algorithm that’s just running, coming to a conclusion and then pushing on to the next step,” she said. “Every step that involves AI has a human checking in at the end.”