A year ago, TensorFlow open-sourced Fairness Indicators, a platform that enables sliced evaluation of machine learning (ML) model performance. Such evaluation is a first step toward avoiding bias, as it lets developers determine how their models work for various groups of users. When a team identifies that its model underperforms on specific slices of data, it needs a strategy to mitigate this, to avoid creating or reinforcing unfair bias, in line with Google's AI Principles.
A few days ago, TensorFlow announced MinDiff, a technique for addressing unfair bias in machine learning (ML) models. Given two slices of data, MinDiff penalizes the model for differences in the distribution of scores between the two sets. During training, the model minimizes the penalty by bringing the distributions closer together. MinDiff is the first step toward a more extensive Model Remediation Library of techniques suitable for different use cases.
MinDiff is a model remediation technique that aims to equalize two distributions. It can balance the error rate across the different slices of the user’s data by penalizing distributional differences.
Typically, users apply MinDiff when they are trying to minimize the difference in either false negative rate (FNR) or false positive rate (FPR) between a slice of data belonging to a sensitive class and a better-performing slice.
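To make the motivating metric concrete, here is a minimal sketch of how an FNR gap between two slices might be measured. The slice labels and predictions are made up for illustration; only the FNR formula (FN / (FN + TP)) comes from standard definitions, not from the MinDiff documentation.

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP) over binary labels and predictions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fn = np.sum((y_true == 1) & (y_pred == 0))  # positives the model missed
    tp = np.sum((y_true == 1) & (y_pred == 1))  # positives the model caught
    return fn / (fn + tp)

# Hypothetical slices: A belongs to a sensitive class, B performs better.
fnr_a = false_negative_rate([1, 1, 1, 1], [0, 0, 1, 1])  # 0.5
fnr_b = false_negative_rate([1, 1, 1, 1], [1, 0, 1, 1])  # 0.25
gap = fnr_a - fnr_b  # the disparity MinDiff aims to shrink
```

A gap like this, surfaced by sliced evaluation in Fairness Indicators, is the signal that a remediation technique such as MinDiff may be worth applying.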
Given two sets of examples from the dataset, the technique penalizes the model during training for dissimilarity between the distributions of its prediction scores on the two sets. The penalty grows with the distance between the two score distributions.
The penalty is applied by adding a component to the loss that measures the difference between the distributions of model predictions on the two sets. As training progresses, the model minimizes the penalty by bringing the distributions closer together.
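One common way to measure the distance between two prediction-score distributions is the kernel-based maximum mean discrepancy (MMD). As an illustrative sketch only (not the Model Remediation library's actual implementation), a Gaussian-kernel MMD penalty added to a base loss could look like this; the function names and the penalty weight are assumptions for the example:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel between two 1-D arrays of scores.
    d = x[:, None] - y[None, :]
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def mmd_penalty(scores_a, scores_b, sigma=1.0):
    """Squared MMD between the score distributions of two slices."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    return (gaussian_kernel(a, a, sigma).mean()
            + gaussian_kernel(b, b, sigma).mean()
            - 2.0 * gaussian_kernel(a, b, sigma).mean())

def total_loss(base_loss, scores_a, scores_b, weight=1.5):
    # MinDiff-style objective: original task loss plus a weighted
    # distributional penalty (weight is a hypothetical tuning knob).
    return base_loss + weight * mmd_penalty(scores_a, scores_b)
```

When the two score distributions are identical the penalty is zero, so the gradient pressure from the added term vanishes; the further apart they drift, the more the optimizer is pushed to pull them back together.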
Users have often found MinDiff efficient and effective without degrading performance beyond the product's needs. Whether the trade-off is acceptable, however, depends on the application and is ultimately the product owner's decision.
You can start your journey with MinDiff by visiting the MinDiff page on tensorflow.org. Information about the research behind MinDiff is available on the Google AI Blog. To learn more about evaluating models for fairness, see this guide.
Model Remediation Case Study: https://github.com/tensorflow/model-remediation/blob/master/docs/min_diff/tutorials/min_diff_keras.ipynb