Artificial Intelligence

The AI safety debate is tearing Silicon Valley apart

The long-simmering fault lines within OpenAI over questions of safety with regard to the deployment of large language models like GPT, the engine behind OpenAI’s ChatGPT and DALL-E services, came to a head on Friday when the organization’s nonprofit board of directors voted to fire then-CEO Sam Altman. In a brief blog post, the board said that Altman had not been “consistently candid in his communications.” Now rumors are swirling about Altman’s next move—and possible return.

But OpenAI is not the only place in Silicon Valley where skirmishes over AI safety have exploded into all-out war. On Twitter, there are two camps: the safety-first technocrats, led by venture firms like General Catalyst in partnership with the White House; and the self-described “techno-optimists,” led by libertarian-leaning firms like Andreessen Horowitz.

The technocrats are making safety commitments and forming committees and establishing nonprofits. They recognize AI’s power and they believe that the best way to harness it is through cross-disciplinary collaboration.

Hemant Taneja, CEO and managing director of General Catalyst, announced on Tuesday that he had led more than 35 venture capital firms and 15 companies to sign a set of “Responsible AI” commitments authored by Responsible Innovation Labs, a nonprofit he cofounded. The group also published a 15-page Responsible AI Protocol, which Taneja described on X as a “practical how-to playbook.”

Taneja’s tweet was quickly ratioed. Praying for Exits, a Silicon Valley meme account and investor, posted a screenshot of messages in which an AI researcher named Rohan Pandey canceled an upcoming meeting with an investor at Insight Partners, another signatory of the Responsible AI commitments; Pandey said the commitments would “endanger open-source AI research & contribute to regulatory capture.”

Source

