Equality of Opportunity metrics

The idea of equality of opportunity metrics is to compare the prediction error the model makes across groups. We define the average score $\bar{y}_g$ as the sample mean of the model's outputs for group $g$:

  • RMSE Ratio: We can compare the RMSE across groups. If the algorithm is unbiased, this ratio should be close to 1.

$$RMSE\ ratio = \frac{RMSE_{min}}{RMSE_{maj}}$$
  • Concurrent Validity Spread: We can calculate the Pearson correlation coefficient between the model's predictions and the actual labels, and compare this value across groups.

$$CV\ Spread = r_{min} - r_{maj},$$

where $r_g$ is the Pearson correlation coefficient for group $g$.
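The two metrics above can be sketched as follows, assuming each group's true labels and predictions are given as separate arrays (the function names and interface are illustrative, not from the original text):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between labels and predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def rmse_ratio(y_true_min, y_pred_min, y_true_maj, y_pred_maj):
    """RMSE of the minority group divided by RMSE of the majority group.

    Values close to 1 suggest similar error levels across groups.
    """
    return rmse(y_true_min, y_pred_min) / rmse(y_true_maj, y_pred_maj)

def cv_spread(y_true_min, y_pred_min, y_true_maj, y_pred_maj):
    """Difference in Pearson correlation (predictions vs. labels) across groups."""
    r_min = np.corrcoef(y_true_min, y_pred_min)[0, 1]
    r_maj = np.corrcoef(y_true_maj, y_pred_maj)[0, 1]
    return r_min - r_maj
```

If the same data is passed for both groups, the ratio is exactly 1 and the spread is 0, which is the unbiased baseline.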

These values can be calculated for the whole population or only for the top 20% of subjects.

An example of how to measure bias in a regression problem in recruitment can be found in our notebook, which can be accessed here:
