Measuring Bias and Discrimination

Why does this matter?

Machine learning is used in critical applications, such as recruitment or the judicial system. In these cases, it is especially important to ensure that algorithms treat everyone equally and do not discriminate. Sometimes, however, a model may exhibit hidden prejudice in its decision-making process. If this results in disadvantages to an individual or a group of individuals, we say that the algorithm is biased.

Let’s take the example of a face recognition algorithm. If our training data consists primarily of light-skinned male subjects, our algorithm is unlikely to classify women and minority groups correctly. This is an example of a biased algorithm. In fact, according to a study performed in 2018, three popular facial recognition systems appeared to be biased, misclassifying up to 34.7% of dark-skinned females as opposed to 0.8% of lighter-skinned males. The results of this study are explained here.
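One simple way to quantify this kind of disparity is to compare per-group error rates. The sketch below is purely illustrative: the group labels and the arrays of ground-truth labels and predictions are made-up placeholders, not data from the study.

```python
import numpy as np

# Illustrative ground-truth labels and model predictions for two
# demographic groups (values are placeholders, not real study data).
y_true_group_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred_group_a = np.array([0, 1, 0, 0, 0, 1, 0, 0])

y_true_group_b = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred_group_b = np.array([1, 0, 1, 1, 0, 1, 0, 1])

def error_rate(y_true, y_pred):
    """Fraction of misclassified samples."""
    return float(np.mean(y_true != y_pred))

# A large gap between the two error rates signals a disparity
# in how the model performs across groups.
gap = error_rate(y_true_group_a, y_pred_group_a) - error_rate(y_true_group_b, y_pred_group_b)
print(f"Error-rate gap between groups: {gap:.3f}")
```

Error-rate gaps are only one possible measure; later parts of this guide discuss how to measure bias more systematically.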

This Roadmap

This guide will help you mitigate bias and, as a result, discrimination. First, we will define bias; then we will look at how to measure it.
