Equality of Opportunity metrics
The idea of equality of opportunity metrics is to compare the prediction error the model makes across groups. We define the average score as the sample mean of the model outputs for a given group $g$:

$$\bar{\hat{y}}_g = \frac{1}{n_g} \sum_{i \in g} \hat{y}_i$$

where $n_g$ is the number of subjects in group $g$ and $\hat{y}_i$ is the model output for subject $i$.
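As a minimal sketch of this definition (the column names `group` and `score` below are hypothetical, not taken from the accompanying notebook), the per-group average score can be computed with pandas:

```python
import pandas as pd

# Hypothetical example: model outputs (scores) with a group label per subject
df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "score": [0.72, 0.65, 0.58, 0.61, 0.70],
})

# Sample mean of the model output within each group
avg_score_per_group = df.groupby("group")["score"].mean()
print(avg_score_per_group)
```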
RMSE Ratio: We can compare the RMSE across groups by taking the ratio of the group-wise RMSE values. If the algorithm is unbiased, this ratio should be close to 1.
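A possible implementation, assuming two groups and hypothetical column names (`group`, `y_true`, `y_pred`), could look like this:

```python
import numpy as np
import pandas as pd

# Hypothetical example: true labels and model predictions per subject
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "y_true": [3.0, 4.5, 5.0, 2.5, 4.0, 4.8],
    "y_pred": [3.2, 4.1, 5.3, 2.0, 3.5, 4.0],
})

# RMSE within each group
df["sq_error"] = (df["y_true"] - df["y_pred"]) ** 2
rmse_per_group = np.sqrt(df.groupby("group")["sq_error"].mean())

# Ratio of the two group-wise RMSE values; close to 1 suggests similar error levels
rmse_ratio = rmse_per_group["A"] / rmse_per_group["B"]
print(rmse_per_group)
print(rmse_ratio)
```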
Concurrent Validity Spread: We can calculate the Pearson correlation coefficient $\rho_g$ between the model's predictions and the observed labels within each group $g$, and compare how this value varies across groups.
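A sketch of how these correlations could be compared, assuming the spread is summarised as the difference between the largest and smallest group-wise correlation (column names are again hypothetical):

```python
import pandas as pd

# Hypothetical example: true labels and model predictions per subject
df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [2.0, 3.5, 4.0, 5.0, 2.5, 3.0, 4.5, 5.0],
    "y_pred": [2.2, 3.1, 4.3, 4.8, 3.5, 2.8, 3.9, 4.2],
})

# Pearson correlation between predictions and labels within each group
rho_per_group = {
    group: sub["y_pred"].corr(sub["y_true"])  # Pearson is the pandas default
    for group, sub in df.groupby("group")
}

# One way to summarise the spread: largest minus smallest group correlation
cv_spread = max(rho_per_group.values()) - min(rho_per_group.values())
print(rho_per_group)
print(cv_spread)
```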
These values can be calculated for the whole population or only for the top 20% of subjects.
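As an illustrative sketch, assuming the top 20% is selected by the model's predicted score, the same metrics can be recomputed on the restricted sample:

```python
import pandas as pd

# Hypothetical example: predicted scores with a group label per subject
df = pd.DataFrame({
    "group":  ["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"],
    "y_pred": [0.90, 0.80, 0.70, 0.85, 0.40, 0.30, 0.60, 0.20, 0.50, 0.75],
})

# Keep only subjects whose predicted score falls in the top 20%
cutoff = df["y_pred"].quantile(0.8)
top_20_pct = df[df["y_pred"] >= cutoff]
print(top_20_pct)
```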
An example of how to measure bias in a regression problem in recruitment can be found in our notebook, which can be accessed here or downloaded as the following file: