MIT Researchers Propose A New Method To Prevent Shortcuts In Machine Learning Models By Forcing The Model To Use More Data In Its Decision-Making

You can cut down on travel time if your Uber driver takes a shortcut. Shortcuts in machine-learning models, on the other hand, can backfire, producing unpredictable outcomes and failures.

A shortcut solution in machine learning arises when a model bases its conclusions on a single superficial feature of a dataset rather than learning the true underlying structure of the data, which can lead to erroneous predictions. For example, a model may learn to recognize photographs of cows by concentrating on the green grass that appears in the photos, instead of focusing on the more intricate shapes and patterns of the cows themselves.

In a new study, MIT researchers investigate the problem of shortcuts in machine-learning approaches and propose a solution that can eliminate shortcuts by encouraging the model to use more of the data when making decisions. By removing the simpler features the model was focusing on, the researchers force it to attend to more complex features of the data that it hadn't considered before. Then, by asking the model to solve the same task twice—once using those simpler features and again using the complex features it has now learned to identify—they reduce the tendency toward shortcut solutions and improve the model's performance. This research could boost the performance of machine-learning methods used to detect disease in medical images, a setting where shortcut solutions could lead to incorrect diagnoses and harm patients.

(Image: courtesy of the researchers)

Understanding shortcuts is hard

The researchers focused their investigation on contrastive learning, a powerful form of self-supervised machine learning. In self-supervised learning, a model is trained on raw data that carries no label descriptions from humans, so it can be applied to much larger datasets more efficiently. A self-supervised model learns useful representations of the data, which are then used as inputs for downstream tasks such as image classification. But if the model takes shortcuts and fails to capture important information, those downstream tasks cannot make use of it either.

For example, if a self-supervised learning model is trained to classify pneumonia in X-rays from a variety of hospitals, but it learns to make predictions based on a tag that identifies which hospital a scan came from (because some hospitals have more pneumonia cases than others), the model will not perform well when presented with data from a new hospital. In contrastive learning, an encoder algorithm is trained to distinguish between pairs of similar inputs and pairs of dissimilar inputs. This process encodes rich and complex data, such as images, in a way that the contrastive learning model can interpret.
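As a toy illustration (not the researchers' code), the pairwise objective behind contrastive learning can be sketched with an InfoNCE-style loss: the encoder is rewarded when an anchor embedding sits close to its similar ("positive") pair and far from the dissimilar negatives. The vector setup and all names below are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss on 1-D unit-vector embeddings:
    low when the anchor is similar to its positive and dissimilar to the
    negatives, high otherwise."""
    sims = np.array([anchor @ positive] + [anchor @ n for n in negatives])
    logits = sims / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # the positive pair sits at index 0

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical embeddings: the positive is a slightly perturbed "view"
# of the anchor, the negatives are unrelated random vectors.
rng = np.random.default_rng(0)
anchor = unit(rng.normal(size=8))
positive = unit(anchor + 0.1 * rng.normal(size=8))
negatives = [unit(rng.normal(size=8)) for _ in range(5)]

loss = info_nce_loss(anchor, positive, negatives)
```

The loss shrinks as the positive pair becomes more similar to the anchor, which is exactly the pressure that can push an encoder toward whatever feature makes pairs easiest to tell apart.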

The researchers tested contrastive learning encoders on a series of images and discovered that they, too, are susceptible to shortcut solutions during training. Encoders tend to focus on the simplest features of an image to decide which pairs of inputs are similar and which are dissimilar, when ideally the encoder should consider all of the data's useful features, the researchers say. So the team made it harder to distinguish between similar and dissimilar pairs and found that this changes which features the encoder considers when making a decision.

However, making the task harder introduced a tradeoff: the encoder got better at focusing on some features of the data while becoming worse at focusing on others. It almost seemed to forget the simpler features, the researchers note.

To avoid this tradeoff, the researchers had the encoder discriminate between the pairs the same way it had before, using the simpler features, even after they removed the information it had already learned. Solving the task both ways at once caused the encoder to improve across all features. Their method, called implicit feature modification, adaptively modifies samples to remove the simpler features the encoder uses to discriminate between pairs. The technique requires no human input, which matters because real-world datasets can contain hundreds of different features that combine in complex ways, the researchers say.
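A rough sketch of the idea, under the simplifying assumption of dot-product similarity between embeddings: alongside the ordinary contrastive loss, nudge the positive embedding away from the anchor and the negatives toward it (removing the easiest cues for telling the pairs apart), and train on both losses at once. The function names and the perturbation size `eps` are illustrative; this is not the researchers' implementation.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss on 1-D embeddings."""
    sims = np.array([anchor @ positive] + [anchor @ n for n in negatives])
    logits = sims / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive pair at index 0

def ifm_style_loss(anchor, positive, negatives, eps=0.1, temperature=0.1):
    """Sketch of implicit-feature-modification-style training: sum the
    plain loss with a loss on adversarially nudged embeddings. Pulling
    the positive away from the anchor (and pushing negatives toward it)
    strips the easiest cues, so the encoder must rely on other features."""
    plain = info_nce_loss(anchor, positive, negatives, temperature)
    harder_pos = positive - eps * anchor                  # less similar positive
    harder_negs = [n + eps * anchor for n in negatives]   # more similar negatives
    perturbed = info_nce_loss(anchor, harder_pos, harder_negs, temperature)
    return plain + perturbed

# Hypothetical embeddings, as before.
rng = np.random.default_rng(1)
unit = lambda v: v / np.linalg.norm(v)
a = unit(rng.normal(size=8))
p = unit(a + 0.1 * rng.normal(size=8))
negs = [unit(rng.normal(size=8)) for _ in range(5)]

combined = ifm_style_loss(a, p, negs)
```

Because the perturbed pairs are strictly harder to discriminate, the second term is always larger than the plain loss, so the encoder is trained on the easy and the hard versions of the task simultaneously.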

Testing from automobiles to COPD (chronic obstructive pulmonary disease)

The researchers tested their approach using images of automobiles. They employed implicit feature modification to alter color, orientation, and vehicle type, making it harder for the encoder to distinguish between similar and dissimilar pairs of images. The encoder's accuracy improved across all three features (texture, shape, and color) simultaneously.

The researchers also tested the method on samples from a medical image database of chronic obstructive pulmonary disease (COPD) to see whether it could handle more complex data. Again, the approach yielded simultaneous improvements across all of the features they considered.

While this research advances the understanding of why shortcut solutions arise and how to address them, the researchers believe that refining these techniques and applying them to other forms of self-supervised learning will be crucial future work.




Prathamesh Ingle is a Mechanical Engineer and works as a Data Analyst. He is also an AI practitioner and certified Data Scientist with an interest in applications of AI. He is enthusiastic about exploring new technologies and advancements and their real-life applications.
