AI Isn’t Ready to Make Unsupervised Decisions


AI has progressed to compete with the best of the human brain in many areas, often with stunning accuracy, quality, and speed. But can AI introduce the subjective experience, feeling, and empathy that make our world a better place to live and work, without cold, calculating judgment? Hopefully, but that remains to be seen. The bottom line is that AI is built on algorithms that respond to models and data; it often misses the big picture and usually cannot explain the reasoning behind a decision. It isn't ready to assume the human qualities that emphasize empathy, ethics, and morality.

Artificial intelligence is designed to assist with decision-making when the data, parameters, and variables involved are beyond human comprehension. For the most part, AI systems make the right decisions given the constraints. However, AI notoriously fails in capturing or responding to intangible human factors that go into real-life decision-making — the ethical, moral, and other human considerations that guide the course of business, life, and society at large.

Consider the “trolley problem,” a hypothetical scenario formulated long before AI came into being, in which a split-second decision must be made about whether to alter the route of an out-of-control streetcar: keep it on the original track, where it may kill several people tied to the rails, or switch it to an alternative track where, presumably, a single person would die.
