The Gradient Institute, with support from Minderoo Foundation, recently released a report on the growing risks that Artificial Intelligence (AI) poses to business, along with open-source software companies can use to combat those risks.
Here, Bill Simpson-Young, CEO of Gradient Institute and formerly of CSIRO, talks about the dangers of Artificial Intelligence for business.
What are the dangers of Artificial Intelligence that businesses need to be aware of?
AI can bring many benefits to businesses and their customers, such as performing actions for a customer at great speed, customised specifically for that customer. For example, every time you use a map app on your phone, its speed, accuracy, and relevance to you are made possible by AI. AI is also used to decide your news feed, whether you are matched to a job opening, and whether you succeed with a loan application.
Unfortunately, as we show in this new report, there is now overwhelming evidence that the use of AI for automated decision-making also has the potential to produce unlawful, immoral, or discriminatory outcomes for individuals, through decision processes that are usually opaque and unaccountable.
These harms arise from unwarranted trust in (or at least unwarranted reliance on) AI. Humans and machines make decisions differently: humans have common sense and can navigate different contexts with ease, while machines have no built-in moral judgement and only perform well in relatively narrow domains.
What do we need to be aware of to safeguard our businesses?
Companies that operate AI systems capable of influencing people's lives need to get better at understanding the new risks those systems present for their business. The report includes a taxonomy of these risks: “failures of legitimacy” (for example, when the way an AI system works inadvertently treats different types of people differently, contrary to anti-discrimination law), “failures of design” (for example, when an AI system has been trained on data that is not suitable for the decisions it will be making) and “failures of execution” (for example, failing to properly monitor the operation of the AI system over time).
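To make the first and last of these categories concrete, here is a minimal, hypothetical Python sketch (not part of the report's software) of the kind of check a company might run over its decision logs: it computes approval rates per group and flags any group falling below the “four-fifths” screening threshold sometimes used as a first test for disparate impact. The group labels, data, and threshold are illustrative assumptions, not a legal compliance test.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group.

    decisions: iterable of (group, approved) pairs, approved being a bool.
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-treated group's rate (the 'four-fifths rule' screen)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Fabricated decision log for illustration: (group, approved)
log = [("A", True)] * 80 + [("A", False)] * 20 \
    + [("B", True)] * 55 + [("B", False)] * 45

print(selection_rates(log))         # {'A': 0.8, 'B': 0.55}
print(disparate_impact_flags(log))  # {'B': 0.55} — below 0.8 * 0.8 = 0.64
```

Run periodically over live decisions rather than once at launch, a check like this also addresses “failures of execution”: a model that passed the screen at deployment can drift below it as the population it serves changes.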
How do we combat it?
The report is a pragmatic one – we describe a range of actions companies can take to help them use AI responsibly. These are mostly existing approaches, which we have brought together in a way that makes them easier for companies to adopt. We have also released software that we hope companies using AI will adopt to gain better control and oversight of their AI systems. We call it the AI Impact Control Panel and have released it as open-source software so companies can adopt and adapt it freely and easily.