Option 2: In-processing

If we decide to mitigate bias at the in-processing stage, we have to change the model architecture and/or the optimisation process. The amended models aim to maximise predictive accuracy while simultaneously accounting for fairness. In-processing techniques are not model-agnostic, since they require access to the inner workings of the model, and many of them are designed with neural networks in mind.

aif360 provides the following in-processing mitigation techniques (minimal usage sketches follow the list):

  • Adversarial Debiasing, where the classifier is trained against an adversary model that tries to predict protected-group membership, thereby removing that information from the learned representation (Zhang et al., 2018)

  • Exponentiated Gradient Reduction, an algorithm that converts the problem of classifying without bias into a sequence of cost-sensitive classification tasks. It returns a randomised classifier with the lowest empirical error subject to the chosen bias metric (Agarwal et al., 2018)

  • Grid Search Reduction, which can be used for both classification and regression. For classification, it works much like Exponentiated Gradient Reduction, but it returns the deterministic classifier with the lowest empirical error, subject to the chosen bias metric, among the candidates searched (Agarwal et al., 2018). For regression, it uses the same principle to return a deterministic regressor with the lowest empirical error subject to a bounded-group-loss constraint (Agarwal et al., 2019)

  • Prejudice Remover, which adds a fairness-aware regularisation term to the learning objective in order to reduce bias (Kamishima et al., 2012)
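
Below is a minimal sketch of Adversarial Debiasing. It assumes you have already prepared train/test splits as aif360 `BinaryLabelDataset` objects (here called `dataset_train` and `dataset_test`) with a protected attribute named "sex" encoded as 1 for the privileged group; these names and encodings are illustrative, not part of the library.

```python
# Minimal Adversarial Debiasing sketch. `dataset_train` / `dataset_test`
# are assumed, pre-built aif360 BinaryLabelDataset splits; the protected
# attribute name and group encoding below are illustrative.
import tensorflow.compat.v1 as tf
from aif360.algorithms.inprocessing import AdversarialDebiasing

tf.disable_eager_execution()  # the implementation uses the TF1-style API
sess = tf.Session()

debiased_model = AdversarialDebiasing(
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
    scope_name="debiased_classifier",
    sess=sess,
    debias=True,  # set to False to train an otherwise identical baseline
)
debiased_model.fit(dataset_train)
dataset_test_pred = debiased_model.predict(dataset_test)
sess.close()
```

Setting `debias=False` trains the same network without the adversary, which gives a useful baseline for measuring how much the adversary actually changes the predictions.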
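
The two reduction methods share an interface: both wrap a standard scikit-learn estimator and take a fairness constraint by name. A minimal sketch, again assuming the `dataset_train` / `dataset_test` splits above; the choice of `LogisticRegression` and of the `"DemographicParity"` constraint is arbitrary.

```python
# Minimal sketch of the two reduction approaches. The base estimator can be
# any scikit-learn classifier; LogisticRegression is an arbitrary choice.
from sklearn.linear_model import LogisticRegression
from aif360.algorithms.inprocessing import (
    ExponentiatedGradientReduction,
    GridSearchReduction,
)

# Randomised classifier satisfying the constraint in expectation.
egr = ExponentiatedGradientReduction(
    estimator=LogisticRegression(max_iter=1000),
    constraints="DemographicParity",
    drop_prot_attr=True,  # keep the protected attribute out of the features
)
egr.fit(dataset_train)
egr_pred = egr.predict(dataset_test)

# Deterministic classifier: the best single model among a grid of candidates.
gsr = GridSearchReduction(
    estimator=LogisticRegression(max_iter=1000),
    constraints="DemographicParity",
    drop_prot_attr=True,
)
gsr.fit(dataset_train)
gsr_pred = gsr.predict(dataset_test)
```

For the regression variant of Grid Search Reduction, the constraint is specified as a bounded group loss rather than a classification parity constraint.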
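
Prejudice Remover only needs the name of the sensitive attribute and the strength of the regularisation term. A minimal sketch under the same assumptions; the value of `eta` is illustrative and should be tuned.

```python
# Minimal Prejudice Remover sketch; `eta` controls the trade-off between
# accuracy and the fairness regulariser (higher = stronger debiasing).
from aif360.algorithms.inprocessing import PrejudiceRemover

pr = PrejudiceRemover(eta=25.0, sensitive_attr="sex")
pr.fit(dataset_train)
pr_pred = pr.predict(dataset_test)
```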

An example of how to apply these techniques to a binary classification problem in recruitment can be found in our notebook here, or by downloading the following file:
