In AI arms race, ethics may be the first casualty

As the tech world embraces ChatGPT and other generative AI programs, the industry’s longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures.

Why it matters: Once again, tech's leaders are playing a game of "build fast and ask questions later" with a new technology that's likely to spark profound changes in society.

  • Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.

Catch up quick: As machine learning and related AI techniques hatched in labs over the last decade, scholars and critics sounded alarms about potential harms the technology could cause, including misinformation, bias, hate speech, harassment, loss of privacy and fraud.

  • In response, companies made reassuring statements about their commitment to ethics reviews and bias screening.
  • High-profile missteps — like Microsoft's 2016 "Tay" Twitterbot, which was easily prompted to repeat offensive and racist statements — made tech giants reluctant to push their most advanced AI pilots out into the world.

Yes, but: Smaller companies and startups have much less at risk, financially and reputationally.

  • That explains why it was OpenAI — a relatively small maverick entrant in the field — rather than Google or Meta that kicked off the current generative-AI frenzy with the release of ChatGPT late last year.
  • Both companies have announced multiple generative-AI research projects, and many observers believe they’ve developed tools internally that meet or exceed ChatGPT’s abilities — but have not unveiled them for fear of offense or liability.

ChatGPT "is nothing revolutionary," and other companies have matched it, Meta chief AI scientist Yann LeCun said recently.

  • In September, Meta announced its Make-a-Video tool, which generates videos from text prompts. And in November, the company released a demo of a generative AI for scientific research called Galactica.
  • But Meta took Galactica down after three days of scorching criticism from scholars, who found that it generated unreliable information.

What’s next: Whatever restraint giants like Google and Meta have shown to date could now erode as they seek to demonstrate that they haven’t fallen behind.
