
Step 1: Understanding bias


We define bias as an unwanted prejudice in the decisions made by an AI system that systematically disadvantages a person or group. Multiple types of bias exist, and they can be unknowingly introduced into an algorithm at any stage of the development process, whether during data generation or model building. Refer to this page to learn about the different types of bias.

In order to measure whether a system treats different groups of people equally, we need to agree on a definition of equality:

  • Equality of Outcome: If we select this definition, we ask that all subgroups have equal outcomes. For example, in a recruitment context, we may require that the percentage of applicants hired is consistent across groups (e.g. we want to hire 5% of all female applicants and 5% of all male applicants). Mathematically, this means that the likelihood of a positive outcome is equal for members of each group, regardless of the ground-truth labels: $P(\hat Y = 1 | G = 0) = P(\hat Y = 1 | G = 1)$.

  • Equality of Opportunity: If we select this definition, we ask that all subgroups are given the same opportunity of outcomes. For example, if we have a face recognition algorithm, we may want the classifier to perform equally well for all ethnicities and genders. Mathematically, the probability of a person in the positive class being correctly assigned a positive outcome and the probability of a person in the negative class being incorrectly assigned a positive outcome should both be the same for privileged and unprivileged group members: $P(\hat Y = 1 | G = 0, Y = y) = P(\hat Y = 1 | G = 1, Y = y), \quad y \in \{0, 1\}$. Notice that here, ground-truth labels are important and necessary.
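
To make these two definitions concrete, below is a minimal sketch of how one might compute both quantities with NumPy. It is an illustration only: the arrays `y_pred`, `y_true`, and `group` are hypothetical example data rather than anything taken from the guide.

```python
import numpy as np

# Hypothetical example data: binary predictions, ground-truth labels,
# and a binary group indicator (0 = unprivileged, 1 = privileged).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def positive_rate(y_pred, mask):
    """Fraction of positive predictions within the subgroup selected by `mask`."""
    return y_pred[mask].mean()

# Equality of Outcome: compare P(Y_hat = 1 | G = 0) with P(Y_hat = 1 | G = 1).
rate_g0 = positive_rate(y_pred, group == 0)
rate_g1 = positive_rate(y_pred, group == 1)
print(f"Outcome: G=0 -> {rate_g0:.2f}, G=1 -> {rate_g1:.2f}")

# Equality of Opportunity: compare P(Y_hat = 1 | G = g, Y = y) across groups,
# separately for y = 0 (false positive rate) and y = 1 (true positive rate).
for y in (0, 1):
    r0 = positive_rate(y_pred, (group == 0) & (y_true == y))
    r1 = positive_rate(y_pred, (group == 1) & (y_true == y))
    print(f"Y={y}: G=0 -> {r0:.2f}, G=1 -> {r1:.2f}")
```

Note how Equality of Outcome ignores the ground-truth labels entirely, while Equality of Opportunity conditions on them, which is why labels are necessary for the second definition.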

In this guide, we are mainly concerned with treating different groups of people equally. However, there are applications where we may have to deal with biases against individuals. You can find more information on how to define bias in these instances here.

We will next cover how to measure bias in different situations. You can click here for measuring bias in classification tasks, and here for measuring bias in regression.
