How Do You Define Unfair Bias in AI?


Art is subjective, and everyone has their own opinion about it. When I saw the expressionist painting Blue Poles by Jackson Pollock, I was reminded of the famous quote by Rudyard Kipling: “It’s clever, but is it Art?” Pollock’s piece looks like paint messily spilled onto a drop sheet protecting the floor. The debate over what constitutes art has a long history that will probably never be settled; there is no definitive definition of art. Similarly, there is no broadly accepted objective measure of a piece of art’s quality, with the closest being from Orson Welles: “I don’t know anything about art, but I know what I like.”

Similarly, people recognize unfair bias when they see it, but it is quite difficult to create a single objective definition. That’s because the key considerations vary from case to case. Many have attempted to define fairness for algorithms, resulting in multiple candidate definitions. Arvind Narayanan, an associate professor at Princeton, lists 21 different definitions of algorithmic fairness and concludes that no single definition applies in all cases. In this blog post, rather than proposing one universal definition, I will list the four key questions you need to answer in order to derive a definition of unfair bias that matches your particular needs.
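To make the tension between competing definitions concrete, here is a minimal sketch using hypothetical toy data. It compares two widely discussed fairness criteria: demographic parity (equal rates of positive predictions across groups) and equal opportunity (equal true-positive rates across groups). The same set of predictions can satisfy one criterion while violating the other, which is the point Narayanan makes: no single definition covers every case. All data and function names below are illustrative, not from any particular library.

```python
# Hypothetical toy data: each record is (group, true_label, predicted_label).
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(group):
    """Demographic parity compares P(pred=1 | group) across groups."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity compares P(pred=1 | true=1, group) across groups."""
    preds = [p for g, y, p in records if g == group and y == 1]
    return sum(preds) / len(preds)

# Equal opportunity holds: both groups have a true-positive rate of 1.0.
print(true_positive_rate("A"), true_positive_rate("B"))  # 1.0 1.0

# Demographic parity fails: group A receives positive predictions at
# twice the rate of group B, because the base rates differ.
print(positive_rate("A"), positive_rate("B"))  # 0.5 0.25
```

Which of the two criteria matters depends on the application, which is exactly why the questions below have to be answered case by case.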

Which Attributes Should Be Protected?

Your AI must obey the law. Most countries have discrimination laws under which it is unlawful to treat a person less favorably on the basis of particular legally protected attributes, such as a person’s sex, race, disability, or age. However, there is no universal set of protected attributes that applies under all circumstances, in all countries, for all uses. For example, it may be illegal to discriminate on the basis of gender when hiring for a job, but legal to charge customers of different ages different prices.
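Because the protected set varies by jurisdiction and use case, one practical consequence is that it should be configuration, not a hard-coded constant. The sketch below illustrates this idea; the attribute lists are hypothetical examples, not legal advice, and every name in it is invented for illustration.

```python
# Hypothetical per-use-case configuration: age is treated as protected
# for hiring but not for pricing, mirroring the example in the text.
PROTECTED_ATTRIBUTES = {
    "hiring":  {"sex", "race", "disability", "age"},
    "pricing": {"sex", "race", "disability"},
}

def allowed_features(use_case, features):
    """Drop any input feature that is protected for this use case."""
    protected = PROTECTED_ATTRIBUTES[use_case]
    return [f for f in features if f not in protected]

features = ["age", "income", "postcode", "sex"]
print(allowed_features("hiring", features))   # ['income', 'postcode']
print(allowed_features("pricing", features))  # ['age', 'income', 'postcode']
```

Note that simply dropping a protected attribute does not by itself prevent unfair bias, since other features (such as postcode) can act as proxies for it; the configuration only makes the choice of protected attributes explicit and auditable.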