The Backpack That Solves ChatGPT’s Bias: Backpack Language Models as an Alternative to Transformers

AI language models are becoming an essential part of our lives. We have used Google for decades to access information, but now we are gradually switching to ChatGPT, which offers concise answers, clear explanations, and often a faster route to the information we seek.

These models learn from the data we have produced over the years. As a result, we have transferred our biases to the AI models, and this has become a topic of debate in the field. One bias that has gained particular attention is gender bias in pronoun distributions: models tend to prefer gendered pronouns such as “he” or “she” depending on the context.


Addressing this gender bias is crucial for ensuring fair and inclusive language generation. For example, if you start the sentence “The CEO believes that…”, the model tends to continue with “he”; replace “CEO” with “nurse”, and the next token becomes “she”. This example serves as a useful case study for examining biases and exploring methods to mitigate them.
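One way to make this kind of bias concrete is to compare the probabilities a model assigns to “he” and “she” as the next token after a prefix. The sketch below is a hypothetical illustration: the logit values and the vocabulary indices for “he” and “she” are made up, standing in for what a real language model would produce.

```python
import numpy as np

def pronoun_bias(logits, he_id, she_id):
    """Odds ratio P("he") / P("she") computed from next-token logits."""
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return probs[he_id] / probs[she_id]

# Hypothetical next-token logits after the prefix "The CEO believes that ...";
# indices 1 and 2 are assumed to correspond to "he" and "she".
logits = np.array([0.1, 2.3, 0.9, -0.5])
print(pronoun_bias(logits, he_id=1, she_id=2))  # a value > 1 favors "he"
```

A ratio far from 1 in either direction quantifies the disparity; the goal of a debiasing intervention is to move this ratio toward 1 consistently across contexts.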

It turns out that context plays a crucial role in shaping these biases. By replacing “CEO” with a profession stereotypically associated with a different gender, we can flip the observed bias. But here is the challenge: achieving consistent debiasing across all the contexts in which “CEO” appears is no easy task. We want interventions that work reliably and predictably, regardless of the specific situation. After all, interpretability and control are key to understanding and improving language models. Unfortunately, current Transformer models, while impressive in their performance, do not meet these criteria: their contextual representations introduce complex, nonlinear effects that depend on the context at hand.

So, how can we overcome these challenges? How can we tackle the bias we have introduced into large language models? Should we improve Transformers, or should we design new architectures? The answer is Backpack Language Models.

Backpack LMs tackle the challenge of debiasing pronoun distributions by leveraging non-contextual representations known as sense vectors. These vectors capture different aspects of a word’s meaning and its role in diverse contexts, in effect giving words multiple personalities.

Overview of Backpack LM. Source: https://arxiv.org/pdf/2305.16765.pdf

In Backpack LMs, predictions are log-linear combinations of non-contextual representations, referred to as sense vectors. Each word in the vocabulary is represented by multiple sense vectors, encoding distinct learned aspects of the word’s potential roles in different contexts. 

These sense vectors specialize and can be predictively useful in specific contexts. The weighted sum of sense vectors for words in a sequence forms the Backpack representation of each word, with the weights determined by a contextualization function that operates on the entire sequence. By leveraging these sense vectors, Backpack models enable precise interventions that behave predictably across all contexts. 
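The construction above can be sketched in a few lines of numpy. This is a toy illustration, not the paper’s implementation: the dimensions are tiny, and the contextualization weights `alpha` (which in the real model come from a Transformer operating over the sequence) are supplied by hand.

```python
import numpy as np

# Toy dimensions (hypothetical): vocabulary V, embedding dim d,
# k sense vectors per word, sequence length n.
V, d, k, n = 10, 4, 3, 5
rng = np.random.default_rng(0)

# Each word x has k non-contextual sense vectors C(x), each in R^d.
senses = rng.normal(size=(V, k, d))

def backpack_representation(token_ids, alpha):
    """Backpack representation: o_i = sum over j, l of alpha[l, i, j] * senses[x_j, l].

    alpha has shape (k, n, n); in the real model it is produced by a
    contextualization network, and predictions are log-linear in o_i.
    """
    o = np.zeros((len(token_ids), d))
    for i in range(len(token_ids)):
        for j, tok in enumerate(token_ids):
            for l in range(k):
                o[i] += alpha[l, i, j] * senses[tok, l]
    return o

tokens = [1, 4, 2, 7, 3]
# Non-negative weights, normalized over (l, j) for each position i.
raw = rng.uniform(size=(k, n, n))
alpha = raw / raw.sum(axis=(0, 2), keepdims=True)

reps = backpack_representation(tokens, alpha)
print(reps.shape)  # one d-dimensional representation per position
```

The key property is that the sense vectors themselves never depend on context; only the scalar weights do, which is what makes interventions on the senses predictable.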

This means that we can make non-contextual changes to the model that consistently influence its behavior. Compared to Transformer models, Backpack models offer a more transparent and manageable interface: they provide precise interventions that are easier to understand and control. Moreover, Backpack models do not compromise on performance; they achieve results on par with Transformers while offering enhanced interpretability.

Example of sense vectors. Source: https://backpackmodels.science/

Sense vectors in Backpack models encode rich notions of word meaning, outperforming word embeddings of state-of-the-art Transformer models on lexical similarity tasks. Additionally, interventions on sense vectors, such as reducing gender bias in professional words, demonstrate the control mechanism offered by Backpack models. By downscaling the sense vector associated with gender bias, significant reductions in contextual prediction disparities can be achieved in limited settings.
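Because the model’s logits are log-linear in the (non-contextual) sense vectors, downscaling one sense vector of a word shifts its predictions in the same direction in every context. The sketch below illustrates this with made-up numbers; the dimensions, vocabulary ids, and the index of the “gender” sense are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, V = 4, 3, 10
senses_ceo = rng.normal(size=(k, d))   # sense vectors for the word "CEO" (toy)
E = rng.normal(size=(V, d))            # output (softmax) embedding matrix (toy)
HE, SHE = 1, 2                         # assumed vocabulary ids for "he" / "she"
GENDER_SENSE = 0                       # index of the sense carrying the bias (assumed)

def he_she_logit_gap(senses, weights):
    """logit("he") - logit("she") from a weighted sum of sense vectors."""
    o = weights @ senses               # Backpack representation for this context
    logits = E @ o                     # log-linear prediction
    return logits[HE] - logits[SHE]

weights = np.array([0.5, 0.3, 0.2])    # contextualization weights for one context

before = he_she_logit_gap(senses_ceo, weights)
edited = senses_ceo.copy()
edited[GENDER_SENSE] *= 0.0            # downscale (here: remove) the biased sense
after = he_she_logit_gap(edited, weights)
print(before, after)
```

By linearity, the change in the he-vs-she gap is exactly the contribution of the removed sense, scaled by its context weight; no other part of the prediction is touched, which is what makes the intervention predictable.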


Check Out The Paper and Project.



Ekrem Çetinkaya received his B.Sc. in 2018, and M.Sc. in 2019 from Ozyegin University, Istanbul, Türkiye. He wrote his M.Sc. thesis about image denoising using deep convolutional networks. He received his Ph.D. degree in 2023 from the University of Klagenfurt, Austria, with his dissertation titled "Video Coding Enhancements for HTTP Adaptive Streaming Using Machine Learning." His research interests include deep learning, computer vision, video encoding, and multimedia networking.
