The <a href="https://www.thenationalnews.com/business/technology/2023/09/06/abu-dhabis-tii-launches-falcon-180b-model-to-boost-generative-ai-development/" target="_blank">Technology Innovation Institute</a>, the <a href="https://www.thenationalnews.com/business/future/2024/03/08/abu-dhabi-to-play-larger-role-in-space-communications-with-new-optical-ground-station/" target="_blank">Abu Dhabi</a> government-backed research centre, has unveiled the second iteration of its <a href="https://www.thenationalnews.com/business/technology/2022/03/03/abu-dhabis-technology-innovation-institute-opens-lab-to-boost-material-science-research/" target="_blank">large language model</a>, Falcon 2, to compete with models developed by Meta, Google and OpenAI.

Large language models are a type of generative artificial intelligence that can imitate human intelligence. They can distinguish, review, translate, forecast and produce new content – text, audio or visual – using large data sets.

TII, the applied research pillar of the UAE capital's Advanced Technology Research Council (ATRC), said the new series is multilingual and has been tested against several prominent AI models. It comes in two versions: Falcon 2 11B, which is trained on 5.5 trillion tokens with 11 billion parameters, and Falcon 2 11B VLM, which adds vision-to-language model capabilities to convert visual inputs into textual outputs.

"While Falcon 2 11B has demonstrated outstanding performance, we reaffirm our commitment to the open-source movement with it, and to the Falcon Foundation," said Faisal Al Bannai, secretary general of ATRC and strategic research and advanced technology affairs adviser to the UAE President.
"With other multimodal models soon coming to the market in various sizes, our aim is to ensure that developers and entities that value their privacy have access to one of the best AI models to enable their AI journey."

Falcon 2 11B and 11B VLM are open-source models, offering unrestricted access to developers globally. They can tackle tasks in several languages, including English, French, Spanish, German and Portuguese.

Google and OpenAI, the two front-runners in the generative AI field, have largely kept their foundational models closed, citing concerns that large language models could be manipulated to spread misinformation or other potentially dangerous content. Proponents of open-source software counter that keeping these systems closed unfairly curtails innovation and hampers their potential to improve the world.

Falcon 2 11B outperforms Facebook parent Meta's newly launched Llama 3 with 8 billion parameters, and performs on a par with Google's Gemma 7B, according to Hugging Face, a collaboration platform for the global machine learning community.

Falcon 2 11B VLM, meanwhile, can identify and interpret images and visuals from the environment, offering a wide range of applications across industries such as health care, finance, e-commerce, education and the legal sector. These applications range from document management, digital archiving and context indexing to supporting people with visual impairments, TII said.

The new models can run efficiently on a single graphics processing unit, making them scalable and easy to deploy and integrate into lighter infrastructure such as laptops, it added.

TII said it aims to broaden the Falcon 2 next-generation models by introducing a range of new sizes. These models will be enhanced with advanced machine learning capabilities such as "mixture of experts".
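The claim that an 11-billion-parameter model can run on one graphics processing unit can be sanity-checked with back-of-the-envelope arithmetic. This is an illustrative estimate only: the 16-bit precision and the omission of activation and cache overheads are assumptions, not figures from TII.

```python
# Rough memory-footprint estimate for an 11-billion-parameter model
# (illustrative assumption: weights stored in 16-bit floating point).
params = 11e9            # 11 billion parameters
bytes_per_param = 2      # fp16/bf16 uses 2 bytes per parameter
gib = params * bytes_per_param / 1024**3
print(f"{gib:.1f} GiB")  # ≈ 20.5 GiB for the weights alone
```

At roughly 20.5 GiB for the weights, such a model sits within the memory of a single high-end accelerator, which is consistent with the single-GPU deployment the article describes.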
This method combines smaller networks, each with a distinct specialisation, and routes every input to the experts best suited to it, so that the most relevant specialists collaborate to deliver highly complex and customised responses.
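The routing idea behind mixture of experts can be sketched in a few lines. The following is a minimal toy illustration, not TII's implementation: a gating network scores each token against a pool of small expert networks, the top-k experts are selected per token, and their outputs are mixed with renormalised gate weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy dimensions: each "expert" is a small linear layer (hypothetical sizes).
d_model, n_experts, top_k = 8, 4, 2
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    scores = softmax(x @ gate_w)                 # (tokens, n_experts) gate weights
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(scores[t])[-top_k:]     # indices of the k best experts
        w = scores[t, top] / scores[t, top].sum()  # renormalise over chosen experts
        for i, wi in zip(top, w):
            out[t] += wi * (x[t] @ experts[i])   # weighted sum of expert outputs
    return out

tokens = rng.standard_normal((3, d_model))
y = moe_layer(tokens)
print(y.shape)  # (3, 8)
```

Because only k of the experts run for any given token, the layer's compute cost grows with k rather than with the total number of experts, which is what lets such models scale capacity without a matching rise in inference cost.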