Six Steps to Responsible AI in the Federal Government

There is widespread agreement that responsible artificial intelligence requires principles such as fairness, transparency, privacy, human safety, and explainability. Nearly all ethicists and tech policy advocates stress these factors and push for algorithms that are fair, transparent, safe, and understandable.[1]

But it is not always clear how to operationalize these broad principles, or how to handle situations where competing goals conflict.[2] Moving from the abstract to the concrete in developing algorithms is not easy, and sometimes a focus on one goal comes at the expense of other objectives.[3]

In the criminal justice area, for example, Richard Berk and colleagues argue that there are many kinds of fairness and it is “impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness.”[4] While sobering, that assessment is likely on the mark and must therefore inform our thinking about how to resolve these tensions.
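To see why such trade-offs arise, consider a minimal numerical sketch. The groups, counts, and classifier below are hypothetical and not drawn from Berk's analysis; they simply illustrate that when two groups have different underlying base rates, a classifier with identical error rates in both groups will still flag the groups at different overall rates, so equal error rates and equal selection rates cannot both hold.

```python
# Illustrative sketch only -- hypothetical numbers, not drawn from Berk's work.
# It shows how two common fairness criteria can pull apart: a classifier with
# identical error rates in two groups still flags the groups at different rates
# when their base rates differ.

def rates(tp, fp, tn, fn):
    """Return (selection rate, false positive rate, true positive rate)."""
    total = tp + fp + tn + fn
    selection_rate = (tp + fp) / total
    false_positive_rate = fp / (fp + tn)
    true_positive_rate = tp / (tp + fn)
    return selection_rate, false_positive_rate, true_positive_rate

# Hypothetical confusion counts for the same classifier applied to two groups
# of 100 people each. Group A's base rate of the outcome is 0.5; Group B's is 0.2.
group_a = dict(tp=40, fp=10, tn=40, fn=10)
group_b = dict(tp=16, fp=16, tn=64, fn=4)

for name, counts in [("Group A", group_a), ("Group B", group_b)]:
    sel, fpr, tpr = rates(**counts)
    print(f"{name}: selection rate {sel:.2f}, FPR {fpr:.2f}, TPR {tpr:.2f}")

# Both groups end up with FPR 0.20 and TPR 0.80 (error-rate parity holds),
# yet Group A is flagged 50% of the time and Group B only 32% of the time
# (demographic parity fails). Forcing equal selection rates would require
# unequal error rates, so the two criteria cannot both be satisfied here.
```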