Where AI will go wrong in 2022

Remember Skynet, the artificial intelligence that wanted to wipe out humanity in the Terminator movies? Now that is an example of AI gone wrong. Luckily, this will not be the case for us in 2022: AI today is nowhere near that advanced. But the movies do raise a couple of interesting questions. For example, how do we define ethics when it comes to developing and applying AI?

Here are some of the concerns about where AI might go wrong that I believe need more awareness in 2022.

According to a UNESCO report, only 12% of artificial intelligence researchers and 6% of software developers are women, and women of colour are even less represented. The field is predominantly white, Asian and male. These white middle-class men simply cannot be aware of the needs of all of humanity, and the tech they develop is inevitably biased towards white middle-class men.

Because of the way machine learning works, when you feed it biased data, it gets better and better at being biased. This means that if there is any sexism or racism embedded in the data, whether from conscious or unconscious bias, the algorithm will pick up that pattern.

We have already seen examples of self-driving (i.e. AI-driven) cars disregarding certain ethnicities when deciding how to avoid collisions. Does this make a car racist? Well, not on purpose. The developers simply failed to provide sufficiently diverse and representative training data for the AI in the car to learn from. This created a bias that negatively affected the car's decision-making.
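To make the "biased data in, biased model out" point concrete, here is a minimal sketch using scikit-learn on an entirely synthetic, hypothetical dataset (the feature names and numbers are illustrative assumptions, not from any real system). A model is trained on historical "hiring" decisions that favoured one group, and it learns that preference as if it were a real signal:

```python
# Minimal sketch: a model trained on biased data reproduces the bias.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (a skill score) and one protected attribute
# (group 0 or 1) that should be irrelevant to the outcome.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: the same skill level was rewarded less
# for group 0 than for group 1.
hired = (skill + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The learned coefficient on the protected attribute is strongly
# positive: the model has picked up the historical bias, not removed it.
print("coefficient on protected attribute:", model.coef_[0][1])
```

Nothing in the training procedure "went wrong" here; the model faithfully learned the patterns it was given, which is exactly the problem when those patterns encode discrimination.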
