AI Fairness in Financial Services:
How to quantify and improve fairness in AI and Machine Learning?
By Alexandru Giurca / January 2021
Artificial Intelligence and Machine Learning applications are increasingly used in financial services. However, they can exhibit unintentional bias against certain groups of clients, e.g., based on race, age or gender. It is therefore important that ML algorithms are implemented and validated properly before material decisions are made with their aid. Failing to do so may expose financial institutions to regulatory risk as well as reputational damage.
Bias in ML algorithms can arise for several reasons. Algorithms can incorporate human biases reflected in the data they are trained on, even if sensitive variables such as gender, race, or sexual orientation are removed. These societal biases can infiltrate algorithms along the entire development pipeline, from data collection and the choice of training data, through algorithm design, to deployment.
To achieve fairness in ML algorithms, fairness must first be measured. Measurement starts with recognizing which sensitive client attributes can be affected and which definition of fairness to use. Once the protected attributes and a fairness definition are chosen, the algorithm’s fairness should be measured continuously through the entire development pipeline: from data selection and pre-processing, through training and testing, to deployment.
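As a concrete illustration of measuring fairness, the sketch below computes two widely used group-fairness metrics, the statistical (demographic) parity difference and the disparate impact ratio, on hypothetical binary loan-approval predictions. The data and variable names are invented for illustration:

```python
import numpy as np

# Hypothetical data: binary loan-approval predictions (1 = approved)
# and a binary protected attribute (1 = member of the protected group).
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Approval rate within each group.
rate_protected = y_pred[protected == 1].mean()  # 2/5 = 0.4
rate_other = y_pred[protected == 0].mean()      # 4/5 = 0.8

# Statistical parity difference: 0 means both groups are approved
# at the same rate; negative means the protected group is approved less.
parity_diff = rate_protected - rate_other

# Disparate impact ratio: values below roughly 0.8 are often flagged
# (the "four-fifths rule" used in US employment-discrimination practice).
disparate_impact = rate_protected / rate_other

print(parity_diff, disparate_impact)  # -0.4 and 0.5 on this toy data
```

Which metric is appropriate depends on the fairness definition chosen; libraries such as Fairlearn and IBM's AIF360 implement these and many other definitions.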
If a bias against a protected attribute is found, it can be removed at three places in the development pipeline: debiasing the training data, applying fairness constraints during algorithm training, or adjusting the algorithm’s outputs to make them fairer when it is applied. The choice of debiasing method depends on whether one has access to the training data and the algorithm itself, or whether the model is delivered as a black box.
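The first option, debiasing the training data, can be sketched with the classic reweighing idea (Kamiran and Calders): each training instance is weighted so that the protected attribute and the label become statistically independent in the weighted sample. The toy labels below are invented for illustration:

```python
import numpy as np

# Hypothetical training labels (1 = loan repaid) and protected attribute.
y = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1, 0])
a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

weights = np.empty(len(y))
for av in (0, 1):
    for yv in (0, 1):
        mask = (a == av) & (y == yv)
        # Weight = expected probability of this (group, label) cell under
        # independence, divided by its observed probability.
        p_expected = (a == av).mean() * (y == yv).mean()
        p_observed = mask.mean()
        weights[mask] = p_expected / p_observed

# After reweighing, the weighted positive-label rate is identical in both
# groups (0.6 here), so a learner trained with these sample weights no
# longer sees a group-dependent base rate.
print(np.average(y[a == 1], weights=weights[a == 1]))
print(np.average(y[a == 0], weights=weights[a == 0]))
```

The resulting weights can be passed to most learners, e.g., via the `sample_weight` argument of scikit-learn's `fit` methods.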
Finally, there is a trade-off between algorithm performance and fairness: mitigating bias usually leads to some decline in the model’s performance. However, with modern debiasing techniques, properly chosen for the specific use case and the available data, performance need not be sacrificed significantly.
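This trade-off can be made visible with a minimal post-processing example: adjusting the decision threshold per group (the third debiasing option above) closes the approval-rate gap, but at the cost of some accuracy. Scores, labels, and thresholds below are hypothetical:

```python
import numpy as np

# Hypothetical model scores, true labels, and protected attribute.
scores = np.array([0.9, 0.8, 0.35, 0.3, 0.2, 0.95, 0.7, 0.6, 0.55, 0.1])
y_true = np.array([1,   1,   0,    0,   0,   1,    1,   1,   0,    0])
a      = np.array([1,   1,   1,    1,   1,   0,    0,   0,   0,    0])

def evaluate(thresh_protected, thresh_other):
    """Return (accuracy, approval-rate gap) for group-specific thresholds."""
    thresholds = np.where(a == 1, thresh_protected, thresh_other)
    y_pred = (scores >= thresholds).astype(int)
    accuracy = (y_pred == y_true).mean()
    gap = abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())
    return accuracy, gap

# One threshold for everyone vs. a lower threshold for the protected group,
# chosen here to equalize the approval rates.
acc_single, gap_single = evaluate(0.5, 0.5)      # accuracy 0.9, gap 0.4
acc_adjusted, gap_adjusted = evaluate(0.3, 0.5)  # accuracy 0.7, gap 0.0
```

On this toy data the parity gap drops from 0.4 to zero while accuracy falls from 0.9 to 0.7; in practice the thresholds would be tuned on validation data to find an acceptable point on this trade-off curve.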