RESEARCHERS ASKED AN ADVANCED AI WHETHER AI COULD EVER BE ETHICAL, AND IT SAID NO

In AI development, equality starts with a mandate for fairness and inclusivity

Is artificial intelligence inherently good, inherently bad, or does it all depend on the specifics?

Students at Oxford’s Saïd Business School who are studying ethics in AI attempted to answer that question by hosting a debate with an actual AI.

An essay by a pair of Oxford scholars in The Conversation describes an eyebrow-raising anecdote in which the researchers hosted a debate about the ethics of automated AI stock trading and facial recognition software — and allowed an AI to participate.

“AI will never be ethical,” the AI said during the debate. “It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans.”
