Possible reasons for releasing the LLM include the potential to dilute rivals’ competitive edge
Mark Zuckerberg’s Meta has this week released an open-source version of an artificial intelligence model, Llama 2, for public use. The large language model (LLM), which can be used to create a ChatGPT-like chatbot, is available to startups, established businesses and lone operators. But why is Meta doing this and what are the potential risks involved?
LLMs underpin AI tools such as chatbots. They are trained on vast datasets, which enable them to mimic human language and even generate computer code. If an LLM is made open source, its underlying code and model weights are made freely available for people to access, use and adapt to their own purposes.
Llama 2 is being released in three versions, including one that can be built into an AI chatbot. The idea is that startups or established businesses can access Llama 2 models and tinker with them to create their own products, including, potentially, rivals to ChatGPT or Google’s Bard chatbot – although by Meta’s own admission Llama 2 is not quite at the level of GPT-4, the LLM behind OpenAI’s ChatGPT.
Nick Clegg, Meta’s president of global affairs, told BBC Radio 4’s Today programme on Wednesday that making LLMs open-source would make them “safer and better” by inviting outside scrutiny.