How to ensure trust and ethics in AI

A pragmatic and direct approach to ethics and trust in artificial intelligence (AI) — who would not want that? This is how Beena Ammanath describes her new book, Trustworthy AI.

Ammanath is the executive director of the Global Deloitte AI Institute. She has had stints at GE, HPE and Bank of America, in roles such as vice president of data science and innovation, CTO of artificial intelligence and lead of data and analytics.

AI we can trust

Discussions about ethics and trust in AI often have a very narrow focus, limited to fairness and bias, which can be frustrating for professionals in the industry, Ammanath explained. While fairness and bias are relevant, she says they are neither the only aspects nor necessarily the most important ones. There is a lot of nuance there, and addressing it is part of what Ammanath sets out to do.

What should be talked about when we discuss AI ethics, then? That can be a daunting question to contemplate. For organizations that are not interested in philosophical quests, but in practical approaches, terms such as “AI ethics” or “responsible AI” can feel convoluted and abstract.

The term "trustworthy AI" has been used by organizations ranging from the EU Commission to IBM and from the ACM to Deloitte. In her book, Ammanath lists and elaborates on the multiple dimensions she sees as collectively defining trustworthy AI.