Fairness in AI and Machine Learning – Part II
By Svetlana Borovkova, Head of Quantitative Modelling
In my previous column, I discussed the bias in machine learning algorithms (i.e., (un)favourable treatment of individuals based on their race, gender or other protected attributes) and pointed out that such bias can be damaging for financial institutions, especially in light of the current regulation.
Today, I would like to give some practical insights into how the bias in ML algorithms can be measured and where in your modelling process it can be eliminated.
Measuring the bias is the first important step: an algorithm can be ‘very’ biased (think of the Apple/GS credit card example from my last column) or only a ‘little bit’ biased (and a small bias may be something you can live with). There are three formal definitions of a model’s fairness: Independence, Separation and Sufficiency, and hence three ways of measuring bias.
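As an illustration of how these three criteria translate into numbers, the sketch below computes them for a binary classifier. In the standard formulation, Independence compares selection rates across groups, Separation compares error rates (here, true-positive rates) given the true outcome, and Sufficiency compares precision given the prediction. The data, group labels and function names below are synthetic and purely illustrative, not taken from any real credit model:

```python
def group_rates(y_true, y_pred, group, g):
    """Selection rate, true-positive rate and precision for one group g."""
    idx = [i for i, a in enumerate(group) if a == g]
    yt = [y_true[i] for i in idx]
    yp = [y_pred[i] for i in idx]
    sel = sum(yp) / len(yp)                            # P(Yhat=1 | A=g)
    tp = sum(1 for t, p in zip(yt, yp) if t == 1 and p == 1)
    pos, pred_pos = sum(yt), sum(yp)
    tpr = tp / pos if pos else float("nan")            # P(Yhat=1 | Y=1, A=g)
    ppv = tp / pred_pos if pred_pos else float("nan")  # P(Y=1 | Yhat=1, A=g)
    return sel, tpr, ppv

# Synthetic example: y_true = actual outcomes, y_pred = model decisions,
# group = protected attribute (e.g. gender, coded 'M'/'F')
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
group  = ['M', 'M', 'M', 'M', 'M', 'F', 'F', 'F', 'F', 'F']

sel_m, tpr_m, ppv_m = group_rates(y_true, y_pred, group, 'M')
sel_f, tpr_f, ppv_f = group_rates(y_true, y_pred, group, 'F')

print(f"Independence gap (selection rates): {abs(sel_m - sel_f):.2f}")
print(f"Separation gap   (TPR difference):  {abs(tpr_m - tpr_f):.2f}")
print(f"Sufficiency gap  (PPV difference):  {abs(ppv_m - ppv_f):.2f}")
```

On this toy data the model looks perfectly fair under Independence (both groups are accepted at the same 40% rate) yet fails Separation and Sufficiency, which illustrates a well-known point: the three criteria are different quantities and generally cannot all be zero at once.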