Everyone Wants Responsible Artificial Intelligence

As artificial intelligence continues to gain traction, there has been a rising level of discussion about “responsible AI” (and, closely related, ethical AI). While AI is entrusted to carry more decision-making workloads, it’s still based on algorithms that respond to models and data, as my co-author Andy Thurai and I explain in a recent Harvard Business Review article. As a result, AI often misses the big picture and most times can’t explain the reasoning behind its decisions. It certainly isn’t ready to assume human qualities that emphasize empathy, ethics, and morality.

Is this a concern shared within the executive suites of companies deploying AI? Yes, according to a recent study of 1,000 executives published by MIT Sloan Management Review and Boston Consulting Group. However, the study finds that while most executives agree that “responsible AI is instrumental to mitigating technology’s risks — including issues of safety, bias, fairness, and privacy — they acknowledged a failure to prioritize it.” In other words, when it comes to AI, it’s damn the torpedoes and full speed ahead. Yet more attention needs to be paid to those torpedoes, which may take the form of lawsuits, regulations, and damaging decisions. At the same time, greater adherence to responsible AI may deliver tangible business benefits.

“While AI initiatives are surging, responsible AI is lagging,” write the MIT-BCG survey report’s authors, Elizabeth M. Renieris, David Kiron, and Steven Mills. “The gap increases the possibility of failure and exposes companies to regulatory, financial, and customer satisfaction risks.”