<a href="https://www.thenationalnews.com/future/technology/2024/09/10/iphone-16-launch-key-takeaways-will-apple-intelligence-prove-to-be-a-hit/" target="_blank">Generative artificial intelligence</a> is significantly affecting business spending on security, but misjudged investments may create a false sense of security that makes companies more vulnerable to threats, a senior executive at Palo Alto Networks has said.

<a href="https://www.thenationalnews.com/business/banking/2024/09/15/cyber-security-the-new-age-risk-bankers-are-struggling-to-mitigate/" target="_blank">While it is important for organisations to invest in protection</a>, there is not always a direct correlation between the size of these investments and the level of protection gained, Ercan Aydin, regional vice president for the Middle East and Africa, told <i>The National</i> on the sidelines of the Gitex Global technology conference.

"There are various reasons for this, including organisations making investments in a large number of overlapping and often incompatible technologies. In this scenario, an organisation would typically face higher costs just to manage its complex security set-up," he said in an interview.

"Yet, the result would be patchy security, leaving them at risk of a serious security breach. A holistic, integrated approach offers greater security and cost efficiency."

Mr Aydin's warning follows a new study from the California-based cyber security company, which showed that 97.6 per cent of chief executives and top-level managers in the UAE plan to increase their spending on artificial intelligence to safeguard their systems.

In the study of 250 respondents, conducted last month, executives acknowledged a significant increase in cyber threats, with about 63 per cent reporting more attacks than in 2023 and 14 per cent saying they experienced the same level of breaches as last year.

Enterprises in the Emirates nonetheless appear confident that they can improve their cyber security with AI, with 94 per cent saying they trust AI tools to detect or mitigate threats more effectively than traditional means, the report said.

Concerns over the implications of using AI were also highlighted. Half of respondents said they were worried about privacy issues, while 48 per cent were concerned about <a href="https://www.thenationalnews.com/future/technology/2024/10/11/uaes-new-ai-foreign-policy-aims-to-prevent-misuse-of-technology/" target="_blank">the potential misuse of AI</a> and 45 per cent were wary of costs.

"AI is a broad term, encompassing a range of technologies including generative AI, machine learning and natural language processing, each of which can be used for good or bad in various ways," Mr Aydin said. "In this light, it is not surprising that business leaders in the UAE have concerns about the potential implications of AI, because, in fact, everyone is still wrestling with this issue."

AI has long been used by enterprises for activities including cyber security. The arrival of generative AI, popularised by OpenAI's ChatGPT in late 2022, has accelerated the technology's adoption. However, with its rise came new threats.

In a 2024 blog post, Palo Alto identified six common yet potentially dangerous AI-enabled cyber attack techniques: reconnaissance, enhanced social engineering, malicious code development, automated vulnerability exploitation, deepfake attacks and prompt injection.
The first two involve gathering information on intended targets and tricking users with highly polished phishing emails. All six, however, share a common denominator: AI, and generative AI in particular, can be used to defend against these attacks just as readily as it can be used to cause harm.

"The technology is being widely exploited by cyber criminals, leading to a surge in complex automated attacks. At the same time, cyber security specialists are also using AI to transform and improve security with the aim to always be one step ahead," Mr Aydin said.

Palo Alto's study showed that 90 per cent of senior executives in the UAE had already adopted AI for cyber security, with more than half saying they had made extensive implementations and 38 per cent reporting limited deployments.