The rapid rise of machine learning applications in criminal justice, hiring, healthcare, and social service interventions has a substantial impact on society. This widespread adoption has heightened concerns among machine learning and artificial intelligence researchers about how such systems behave in practice. New methods and theoretical bounds have been developed to improve the performance of ML systems, and with this progress it becomes necessary to understand how those methods and bounds translate into policy decisions and affect society. Researchers continue to strive for impartial and precise models that can be used across diverse domains.
One deep-rooted conjecture is that there is a trade-off between accuracy and fairness when using machine learning systems. Accuracy here refers to the correctness of the model’s predictions relative to the task at hand rather than to any specific statistical property. An ML predictor is termed unfair if it treats people differently based on sensitive or protected attributes (for example, membership in a racial minority or economic disadvantage). To address this, adjustments are made to the data, labels, model training, scoring systems, and other components of the ML pipeline. However, such changes are widely assumed to make the system less accurate.
In a study published in Nature Machine Intelligence, researchers at Carnegie Mellon University claim that this trade-off is negligible in practice across a range of policy domains. The study focuses on testing the assumed fairness-accuracy trade-off in resource allocation problems.
The researchers focused on settings where in-demand resources are scarce and machine learning systems are used to allocate them. The emphasis was on the following four areas:
- Prioritizing limited mental health care outreach based on a person’s risk of returning to jail to reduce re-incarceration.
- Predicting severe safety violations.
- Modeling the risk of students not graduating high school on time to identify those in need of support.
- Helping faculty reach crowdfunding goals for classroom needs.
In each of these settings, the optimized models could effectively predict the outcomes but showed considerable disparity in their recommendations for intervention. When fairness adjustments were implemented, however, disparities based on race, age, or income could be removed without any loss of accuracy.
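To make the idea of such an adjustment concrete, here is a purely illustrative sketch of one common post-hoc approach: instead of allocating a scarce intervention to the top-scoring individuals overall, slots are split across groups and filled within each group. The function names and the proportional-quota rule are assumptions for illustration only, not the method used in the study.

```python
def select_top_k(scores, k):
    """Baseline allocation: pick the k highest-scoring individuals overall."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(order[:k])

def select_within_groups(scores, groups, k):
    """Illustrative fairness adjustment: split the k intervention slots
    across groups in proportion to group size, then pick the
    highest-scoring individuals *within* each group."""
    selected = set()
    n = len(scores)
    for g in sorted(set(groups)):
        idx = [i for i in range(n) if groups[i] == g]
        quota = round(k * len(idx) / n)  # proportional share of slots
        idx.sort(key=lambda i: scores[i], reverse=True)
        selected.update(idx[:quota])
    return selected

# Toy data: group "b" has systematically lower risk scores, so a plain
# top-k pick excludes it entirely; the adjusted pick does not.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
groups = ["a", "a", "a", "b", "b", "b"]
print(select_top_k(scores, 2))                  # both slots go to group "a"
print(select_within_groups(scores, groups, 2))  # one slot per group
```

A within-group selection like this leaves the model's scores untouched, which is one reason post-hoc adjustments can change who is recommended without degrading predictive accuracy.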
These results suggest that, contrary to what is commonly assumed, neither new and complex machine learning methods nor a large sacrifice in accuracy is required. Instead, defining fairness goals upfront and making design decisions around those goals are the first steps toward achieving this objective.
This research aims to show fellow researchers and policymakers that the commonly held belief about the trade-off does not necessarily hold when systems are deliberately designed to be fair and equitable.
The machine learning, artificial intelligence, and computer science communities need to start designing systems that maximize both accuracy and fairness, and to embrace machine learning as a decision-making tool.