
In AI arms race, ethics may be the first casualty

As the tech world embraces ChatGPT and other generative AI programs, the industry’s longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures.

Why it matters: Once again, tech’s leaders are playing a game of "build fast and ask questions later" with a new technology that’s likely to spark profound changes in society.

  • Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.

Catch up quick: As machine learning and related AI techniques hatched in labs over the last decade, scholars and critics sounded alarms about potential harms the technology could cause, including misinformation, bias, hate speech and harassment, loss of privacy, and fraud.

  • In response, companies made reassuring statements about their commitment to ethics reviews and bias screening.
  • High-profile missteps — like Microsoft Research’s 2016 "Tay" Twitter bot, which was easily prompted to repeat offensive and racist statements — made tech giants reluctant to push their most advanced AI pilots out into the world.

Yes, but: Smaller companies and startups have much less at risk, financially and reputationally.

  • That explains why it was OpenAI — a relatively small maverick entrant in the field — rather than Google or Meta that kicked off the current generative-AI frenzy with the release of ChatGPT late last year.
  • Both companies have announced multiple generative-AI research projects, and many observers believe they’ve developed tools internally that meet or exceed ChatGPT’s abilities — but have not released them for fear of causing offense or incurring liability.

ChatGPT "is nothing revolutionary," and other companies have matched it, Meta chief AI scientist Yann LeCun said recently.

  • In September, Meta announced its Make-a-Video tool, which generates videos from text prompts. And in November, the company released a demo of a generative AI for scientific research called Galactica.
  • But Meta took Galactica down after three days of scorching criticism from scholars, who said it generated unreliable information.

What’s next: Whatever restraint giants like Google and Meta have shown to date could now erode as they seek to demonstrate that they haven’t fallen behind.
