How to trust systems with AI inside

  • As self-learning systems become responsible for more decisions affecting people and the environment, ensuring the safe use of AI is a priority.
  • The upcoming EU AI Act is likely to set a de facto global standard for regulating the use of AI.
  • DNV’s Recommended Practice for “Assurance of AI-enabled Systems” aims to help organizations demonstrate conformity with the EU AI Act.

Artificial Intelligence (AI) technologies have vast potential to advance business, improve lives, and tackle global challenges. Because they can learn from data, they enable self-learning systems that improve as new data becomes available. This ability to learn dynamically and improve performance offers advantages and opportunities that are hard to achieve with conventional software.

Early uses of AI focused primarily on systems such as chatbots and automated consumer recommendation systems, which were not considered high-risk if the AI failed to make good decisions. However, as self-learning systems become increasingly responsible for decisions that may ultimately affect the safety of personnel, assets or the environment, ensuring the safe use of AI has become a priority.

AI also creates new ethical challenges. Because it is data-driven, AI may reinforce unethical behaviour or bias present in the data it learns from. Unintended uses such as reward hacking and deepfakes highlight the need to address the ethical and responsible use of AI.
