How to keep the lid on the Pandora’s box of open AI

https://www.ft.com/content/b62a32c9-a068-40c4-8612-9da8cffb396c

It is rapidly emerging as one of the most important technological, and increasingly ideological, divides of our times: should powerful generative artificial intelligence systems be open or closed? How that debate plays out will affect the productivity of our economies, the stability of our societies and the fortunes of some of the world’s richest companies.

Supporters of open-source models, such as Meta’s LLaMA 2 or Hugging Face’s Bloom, which enable users to customise powerful generative AI software themselves, say they broaden access to the technology, stimulate innovation and improve reliability by encouraging outside scrutiny. Far cheaper to develop and deploy, smaller open models also inject competition into a field dominated by big US companies such as Google, Microsoft and OpenAI. These companies have invested billions of dollars developing massive, closed generative AI systems, which they tightly control.

But detractors argue that open models risk lifting the lid on a Pandora’s box of troubles. Bad actors can exploit them to disseminate personalised disinformation on a global scale, while terrorists might use them to develop cyber or bio weapons. “The danger of open source is that it enables more crazies to do crazy things,” Geoffrey Hinton, one of the pioneers of modern AI, has warned.