Option 3: Post-Processing

If we decide to mitigate bias at the post-processing stage, we take the predictions from the model and run a separate process on them to reduce bias. The inputs we need are the existing model predictions, the true labels and the group membership of each instance; from these we can create a new, less biased set of predictions, for example by optimizing over the model outputs. Note that this requires access to the protected attributes at the point of inference, which is not always possible.
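
To make these inputs concrete, here is a minimal sketch of how they might be assembled with the aif360 library; the column names ("sex", "hired", "score") and the toy values are purely illustrative and stand in for your own data and model scores.

```python
# A minimal sketch (not a definitive implementation) of the inputs that
# post-processing needs, using aif360's BinaryLabelDataset. The column
# names and toy values are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset

# Toy data: group membership, true labels and model scores.
df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 0, 1],              # protected attribute (0 = unprivileged)
    "hired": [0, 1, 1, 0, 1, 1],              # true labels
    "score": [0.2, 0.7, 0.9, 0.4, 0.6, 0.8],  # model's predicted probabilities
})

# Ground-truth dataset: true labels plus group membership.
dataset_true = BinaryLabelDataset(
    df=df[["sex", "hired"]],
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

# Prediction dataset: same structure, but labels and scores come from the model.
dataset_pred = dataset_true.copy(deepcopy=True)
dataset_pred.scores = df[["score"]].values
dataset_pred.labels = (dataset_pred.scores > 0.5).astype(float)
```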

The aif360 library proposes the following post-processing mitigation techniques (a minimal usage sketch follows the list):

  • Equalized Odds Postprocessing, which alters output labels to optimize equalized odds. It uses linear programming to find the probabilities with which to modify the labels (Pleiss 2017, Hardt 2016).

  • Calibrated Equalized Odds Postprocessing, which optimizes over calibrated model outputs to find the probabilities with which to alter output labels under an equalized odds objective (Pleiss 2017, Hardt 2016).

  • Reject Option Classification, which gives favorable outcomes to unprivileged groups and unfavorable outcomes to privileged groups within a confidence band around the decision boundary, where uncertainty is highest (Kamiran 2012).
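
Continuing the sketch above, the snippet below shows how one of these techniques, Equalized Odds Postprocessing, might be applied with aif360. Treating sex == 1 as the privileged group is an illustrative assumption, and the same fit/predict pattern applies to the other two post-processors.

```python
# Sketch: apply Equalized Odds Postprocessing from aif360 to obtain a
# debiased set of output labels. The group definitions below are an
# arbitrary choice made for illustration only.
from aif360.algorithms.postprocessing import EqOddsPostprocessing

privileged_groups = [{"sex": 1}]
unprivileged_groups = [{"sex": 0}]

# Fit on (true labels, model predictions), then produce adjusted predictions.
eq_odds = EqOddsPostprocessing(
    unprivileged_groups=unprivileged_groups,
    privileged_groups=privileged_groups,
    seed=42,
)
eq_odds = eq_odds.fit(dataset_true, dataset_pred)
dataset_debiased = eq_odds.predict(dataset_pred)

print(dataset_debiased.labels.ravel())  # labels flipped with the learned probabilities
```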

An example of how to do this for a binary classification problem in recruitment can be found in our notebook here, or by downloading the following file: Post_processing_Bias_mitigation.ipynb
