How Diveplane uses explainable AI to bolster AI adoption

“Black box” artificial intelligence (AI) systems are designed to automate decision-making, mapping a user’s features to a class that predicts individual behavioral traits such as credit risk or health status, without revealing why. This is problematic not only because of the lack of transparency, but also because algorithms can inherit biases from human prejudices or hidden elements in the training data, resulting in unfair or incorrect decisions.

As AI continues to proliferate, there is a growing need for technology companies to demonstrate the ability to trace back through the decision-making process, a capability called explainable AI. It helps them understand why a certain prediction or decision was made, which factors were most important in making it, and how confident the model is in the result.
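To make those three questions concrete, here is a minimal sketch using scikit-learn’s permutation importance and class probabilities. This is a generic illustration of what explainability outputs can look like, not Diveplane’s method; the synthetic “credit risk” data and all names are illustrative.

```python
# Illustrating two explainability outputs: which features mattered,
# and how confident the model is in a single prediction.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "credit risk"-style data: rows are applicants,
# columns are features (income, history, etc. -- purely hypothetical here).
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "What were the important factors?" -- rank features by how much
# shuffling each one degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance={result.importances_mean[i]:.3f}")

# "How confident is the model?" -- class probabilities for one decision.
proba = model.predict_proba(X_test[:1])[0]
print(f"predicted class: {proba.argmax()}, confidence: {proba.max():.2%}")
```

Permutation importance is just one common technique; instance-based systems like the one Diveplane describes can also point to the specific training records that drove a decision.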

To help instill user confidence that operational decisions are built on a foundation of fairness and transparency, Diveplane claims its products are designed around three principles: predict, explain and show.

Explosive growth in the AI software market

Raleigh, North Carolina-based Diveplane announced today that it has raised $25 million in Series A funding to bolster its position in the AI software market and to invest further in its explainable AI solutions, which aim to deliver fair, transparent decision-making while protecting data privacy.
