The <a href="https://www.thenationalnews.com/tags/uk/" target="_blank">UK's</a> national standards body has published international guidelines on how to safely manage <a href="https://www.thenationalnews.com/tags/artificial-intelligence" target="_blank">artificial intelligence (AI)</a>. The guidance outlines how to establish, implement, maintain and continually improve an AI management system, with a focus on safeguards.

The British Standards Institution (BSI) published the AI management system standard, which offers direction on how businesses can responsibly develop and use AI tools internally and externally.

It comes amid continuing debate about the need to regulate the fast-moving <a href="https://www.thenationalnews.com/tags/technology" target="_blank">technology</a>, which has become increasingly prominent over the past year thanks to the public release of generative AI tools such as <a href="https://www.thenationalnews.com/business/technology/2023/10/18/chatgpt-maker-openai-teams-up-with-abu-dhabis-g42-in-middle-east-expansion-push/" target="_blank">ChatGPT</a>.

The UK held the first global <a href="https://www.thenationalnews.com/business/technology/2023/10/31/ai-uk-how-rishi-sunaks-safety-summit-is-powering-technology/" target="_blank">AI Safety Summit</a> in November, where world leaders and major tech firms met to discuss the safe and responsible development of AI, as well as the potential long-term threats the technology could pose. Those threats included AI being used to create malware for <a href="https://www.thenationalnews.com/tags/cyber-crime/" target="_blank">cyber attacks</a> and, if humans were to lose control of the technology, even a potentially existential threat to humanity.

Susan Taylor Martin, chief executive of BSI, said of the new international standard: “AI is a transformational technology. For it to be a powerful force for good, trust is critical.
“The publication of the first international AI management system standard is an important step in empowering organisations to responsibly manage the technology which, in turn, offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world.

“BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”

The guidance includes requirements for context-based risk assessments, as well as additional controls for internal and external AI products and services.

“AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework,” said Scott Steedman, director general for standards at BSI.

“While government considers how to regulate most effectively, people everywhere are calling for guidelines to protect them.

“In this fast-moving space, BSI is pleased to announce publication of the latest international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services.

“Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI.

“Consumers and industry need to be confident that, in the race to develop these new technologies, we are not embedding discrimination, safety blind spots or loss of privacy.

“The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”