Manufacturers of products that make use of artificial intelligence are at all times liable for any damage those products cause. In an effort to better protect users’ rights, the European Commission is tightening the AI Liability Directive.
This summer, the new Meta chatbot became the target of scorn. Just days after BlenderBot 3, from Facebook’s parent company, launched online in the United States, the self-learning program had degenerated into a racist spreader of fake news.
The same thing happened in 2016 with the Tay chatbot developed by Microsoft, which was designed to engage in conversations with real people on Twitter. Tay likewise went off the rails and was soon taken offline by Microsoft.
Real damage to real people
The scandals surrounding programs like Tay and BlenderBot are laughable and may seem relatively harmless. At most, their stories are a painful lesson in how prone a robot is to right-wing extremism when instructed to interact with real people online.
Yet self-learning computer systems are definitely capable of doing actual damage to real people. And this doesn’t just concern self-driving cars that misjudge situations and cause a collision.
Also a matter of concern are serious software programs that make use of AI techniques and exhibit unexpected racist behavior. These are programs used, for example, in surveillance cameras or in the analysis of job application letters.
The general public should be able to trust robots
Whether it concerns autonomous transport, automation of complex processes, or the more efficient use of agricultural land, the European Union expects a great deal from the technological innovations that are being made possible thanks to artificial intelligence. But AI applications can only truly succeed if the general public does not lose confidence in the technology. That is why the European Commission already presented the Artificial Intelligence Act last year. The new Liability Directive is a follow-up to that.
That law governs the conditions under which artificial intelligence may be used. For example, it prohibits the marketing of ‘smart’ products that threaten the safety, livelihood, or rights of human beings. Examples include toys that encourage children to engage in dangerous behavior, or AI systems that enable governments to closely monitor citizens.