We need to create guardrails for AI

Source: https://www.ft.com/content/3e27cfd6-e287-4b6f-a588-29b5b962a534

What if the only thing you could truly trust was something or someone close enough to physically touch? That may be the world into which AI is taking us.

A group of Harvard academics and artificial intelligence experts has just launched a report aimed at putting ethical guardrails around the development of potentially dystopian technologies such as Microsoft-backed OpenAI's seemingly sentient chatbot, which debuted last week in a new and "improved" (depending on your point of view) version, GPT-4. The group, which includes Glen Weyl, a Microsoft economist and researcher, Danielle Allen, a Harvard philosopher and director of the Safra Center for Ethics, and many other industry notables, is sounding alarm bells about "the plethora of experiments with decentralised social technologies". These include the development of "highly persuasive machine-generated content (eg ChatGPT)" that threatens to disrupt the structure of our economy, politics and society. They believe we have reached a "constitutional moment" of change that requires an entirely new regulatory framework for such technologies.

Some of the risks of AI, such as a Terminator-style future in which the machines decide humans have had their day, are well-trodden territory in science fiction, which, it should be noted, has had a pretty good record of predicting where science itself will go in the past 100 years or so. But there are others that are less well understood. If, for example, AI can now generate a perfectly undetectable fake ID, what good are the legal and governance frameworks that rely on such documents to allow us to drive, travel or pay taxes?

One thing we already know is that AI could allow bad actors to pose as anyone, anywhere, anytime. "You have to assume that deception will become far cheaper and more prevalent in this new era," says Weyl, who has published an online book with Taiwan's digital minister, Audrey Tang. This lays out the risks that AI and other advanced information technologies pose to democracy, most notably that they put the problem of disinformation on steroids.
