In business, health care, and manufacturing, artificial intelligence (AI) is already making choices, although AI systems still rely on humans to perform checks and make final decisions. Now imagine an autonomous car approaching a traffic light when its brakes abruptly fail, forcing the computer to make a split-second choice: veer into a nearby post and kill the passenger, or continue straight and kill the person in front of it.
Although autonomous cars will make driving safer overall, accidents will undoubtedly still occur, particularly in the near term, while these vehicles share the road with human drivers and other road users. Tesla does not yet produce fully autonomous vehicles, though it plans to. In collision situations, while a human driver is in charge, Tesla cars do not automatically activate or deactivate the Automatic Emergency Braking (AEB) system.
In other words, the system does not override the driver's actions, even if the driver is the one causing the accident. Instead, if the car detects a potential collision, it sounds an alarm to alert the driver. In "autopilot" mode, however, the car should brake automatically for pedestrians (a toy sketch of this warn-versus-brake rule follows the trolley example below).
Compare this with the classic trolley problem. You see a runaway trolley heading toward five workers on the tracks who are tied up (or otherwise unaware of the trolley). You are standing next to a switch controlled by a lever. If you pull the lever, the trolley is diverted onto a side track, saving the five people on the main track. However, there is a single person on the side track who is just as unaware as the other workers.
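The warn-versus-brake behavior described above can be pictured as a very small decision rule. The sketch below is a hypothetical illustration only, assuming a simple two-mode policy; the ControlMode and CollisionResponse names and the collision_response function are invented for this example and do not reflect Tesla's actual software or API.

```python
from enum import Enum, auto


class ControlMode(Enum):
    HUMAN_DRIVER = auto()   # a person is steering and braking
    AUTOPILOT = auto()      # the driving system is in control


class CollisionResponse(Enum):
    NO_ACTION = auto()
    SOUND_ALARM = auto()    # warn the human driver, leave their inputs untouched
    AUTO_BRAKE = auto()     # apply automatic emergency braking


def collision_response(mode: ControlMode, collision_predicted: bool) -> CollisionResponse:
    """Toy policy mirroring the behavior described in the text:
    with a human in charge the car only warns; in autopilot it brakes."""
    if not collision_predicted:
        return CollisionResponse.NO_ACTION
    if mode is ControlMode.HUMAN_DRIVER:
        # The system does not override the driver; it only alerts.
        return CollisionResponse.SOUND_ALARM
    # In autopilot, the vehicle is expected to brake for pedestrians itself.
    return CollisionResponse.AUTO_BRAKE


if __name__ == "__main__":
    print(collision_response(ControlMode.HUMAN_DRIVER, collision_predicted=True))
    print(collision_response(ControlMode.AUTOPILOT, collision_predicted=True))
```

Even a rule this simple embeds an ethical stance, namely who is responsible for the final decision in each mode, which is exactly the kind of question the trolley problem makes explicit.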
Artificial intelligence is powering the fourth industrial revolution, bringing cognitive capabilities to everything, and it is a game-changer. We are using AI to build self-driving cars and to automate processes, jobs, and, in certain circumstances, even lives. Addressing the question of ethics is essential, given the influence AI will have on individuals and on humanity's future.
The first ethical quandary in AI concerns self-driving cars. The emergence of companies attempting to build fully self-driving vehicles has revived the trolley problem. After all, there is more to AI ethics than programming a machine to make a particular choice; we must also consider the factors that lead to a particular outcome.