Mitigating Bias and Discrimination

Step 2: Mitigating Bias

To mitigate bias, it is important to remember that bias can arise from the data, from the model, or from both.

Our training data is generally only a noisy approximation of the function our ML model is trying to learn. If the data is not representative of the whole population (e.g. because too few samples are drawn from unprivileged groups), the model will fail to predict correctly for those groups. Similarly, if the data contains historical human biases, conscious or unconscious, the model will learn and continue to propagate those biases.

The error may also lie at the level of the model. Models learn to generalize by minimizing the total prediction error. If the model is not carefully monitored and thought through, it may simply learn to misclassify the minority group, since doing so is less costly than misclassifying the majority group. The model should therefore be designed explicitly with bias-related questions in mind (e.g. which features to include, which hyperparameters to tune, and how inputs are represented).
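This effect can be reproduced on synthetic data. The sketch below (an illustrative assumption, not taken from the roadmap itself) builds a dataset where a small group B follows the opposite labelling rule to the majority group A, then fits a one-parameter threshold classifier chosen purely to minimize total error — the same objective discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: group A is 95% of the samples, group B only 5%.
n_a, n_b = 950, 50
x_a = rng.normal(0, 1, n_a)
x_b = rng.normal(0, 1, n_b)
# The label rule for group B is the opposite of group A's.
y_a = (x_a > 0).astype(int)
y_b = (x_b < 0).astype(int)

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

# A one-parameter "model": predict 1 when x > t, choosing t to
# minimise *total* training error over a grid of candidates.
thresholds = np.linspace(-3, 3, 601)
errors = [np.mean((x > t).astype(int) != y) for t in thresholds]
t_best = thresholds[int(np.argmin(errors))]

pred = (x > t_best).astype(int)
acc_a = np.mean(pred[group == "A"] == y[group == "A"])
acc_b = np.mean(pred[group == "B"] == y[group == "B"])
# Minimising total error favours the majority rule: the model is
# near-perfect on group A and poor on group B.
print(f"accuracy on A: {acc_a:.2f}, accuracy on B: {acc_b:.2f}")
```

Because group B contributes only 5% of the total error, the error-minimizing threshold simply adopts the majority group's rule and sacrifices the minority group.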

Depending on the source of bias, we may decide to intervene to mitigate bias at one of three stages:

  • Pre-processing

  • In-processing

  • Post-processing
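As a taste of what the pre-processing option can look like, here is a minimal sketch of reweighing (in the style of Kamiran and Calders' pre-processing method): each (group, label) cell is weighted so that, under the weights, group membership and outcome become statistically independent. The tiny dataset is invented purely for illustration:

```python
import numpy as np

# Toy labelled dataset: protected group membership and a binary outcome.
# Group A (privileged) has a higher observed positive rate than group B.
group = np.array(["A"] * 8 + ["B"] * 2)
label = np.array([1, 1, 1, 1, 1, 1, 0, 0, 1, 0])

weights = np.ones(len(label), dtype=float)
for g in np.unique(group):
    for y in np.unique(label):
        cell = (group == g) & (label == y)
        # Weight = expected frequency of this (group, label) cell if
        # group and label were independent, divided by its observed
        # frequency. Over-represented cells are down-weighted.
        expected = np.mean(group == g) * np.mean(label == y)
        observed = np.mean(cell)
        weights[cell] = expected / observed

# Under the weights, both groups have the same positive rate,
# equal to the overall positive rate of the dataset.
weighted_rate = {
    g: np.sum(weights[group == g] * label[group == g])
       / np.sum(weights[group == g])
    for g in ["A", "B"]
}
print(weighted_rate)
```

Training any weight-aware model (e.g. one accepting a `sample_weight` argument) on these weights is then a pre-processing mitigation: the data fed to the learner no longer encodes the association between group and outcome.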

