Step 1: Understanding bias

We define bias as an unwanted prejudice in the decisions made by an AI system that systematically disadvantages a person or group. Many types of bias exist, and they can be introduced unknowingly at any stage of the development process, from data generation to model building. Refer to this page to learn about the different types of bias.

To measure whether a system treats different groups of people equally, we first need to agree on a definition of equality.
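One common group-level definition of equality is demographic parity: the rate of positive decisions should be the same across groups. The sketch below illustrates this idea; the function names and the example decision data are ours, not part of this guide.

```python
def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Difference in positive-decision rates between two groups.
    A value near 0 suggests the system treats the groups equally
    under this definition; larger magnitudes indicate disparity."""
    return selection_rate(decisions_a) - selection_rate(decisions_b)

# Hypothetical decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 approved = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approved = 0.375
print(demographic_parity_difference(group_a, group_b))  # 0.25
```

Demographic parity is only one possible definition; which notion of equality is appropriate depends on the application.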

This guide is mainly concerned with fairness across groups of people. In some applications, however, you may need to address bias against individuals; you can find more information on how to define bias in these cases here.

Next, we cover how to measure bias in different situations: click here for measuring bias in classification tasks, and here for measuring bias in regression.
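For regression, where the model outputs a continuous value rather than a class, one simple check is to compare average predictions across groups. The sketch below is illustrative; the helper names and the example numbers are ours, not from the guide.

```python
def group_means(predictions, groups):
    """Average prediction per group label."""
    totals, counts = {}, {}
    for pred, g in zip(predictions, groups):
        totals[g] = totals.get(g, 0.0) + pred
        counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def mean_prediction_gap(predictions, groups):
    """Largest difference between any two groups' mean predictions;
    0 means every group receives the same average score."""
    means = group_means(predictions, groups).values()
    return max(means) - min(means)

# Hypothetical predicted salaries (in $1000s) for two groups.
preds  = [52.0, 48.0, 50.0, 41.0, 39.0, 40.0]
groups = ["A", "A", "A", "B", "B", "B"]
print(mean_prediction_gap(preds, groups))  # 10.0
```

A gap in average predictions is not proof of unfair treatment on its own, but it is a useful first signal that the groups merit a closer look.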