Researchers From Duke University Introduce The Concept Of A Variable Importance Cloud For Different Predictive Models

Researchers at Duke University recently formulated an approach for assessing the importance of variables across the many almost-optimal predictive models that can exist for a given task. The researchers, Jiayun Dong and Cynthia Rudin, refer to this approach as “variable importance clouds.” Their approach could ultimately help develop more reliable and better-performing machine learning algorithms for various applications and enhance the reliability and accuracy of predictive models.

The idea behind the term is to take multiple models (i.e., a whole “cloud” of them) and assess each in terms of variable importance. The cloud helps researchers see how much a prediction task relies on each variable. Often, when one variable is highly important to a given model, a correlated variable appears less important to that model’s predictions, and vice versa. The “cloud” is the set of good models seen through the lens of variable importance.

How to compute the importance of variables?

  • First, calculate how important each variable is to each almost-optimal predictive model.
  • Then, represent each model as a point in the “variable importance space,” with the point’s coordinates encoding the importance of each variable to that model.
  • Collectively, these points (one for each predictive model) form the “variable importance cloud” (see the sketch below).
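
The procedure above can be illustrated with a minimal sketch, assuming we already have a collection of near-optimal models (each exposing a scikit-learn-style `predict` method) and a held-out dataset. The importance measure here is permutation importance, one common choice; it is not necessarily the exact measure used in the paper, and all names below are illustrative.

```python
import numpy as np

def permutation_importance(model, X, y, rng):
    """Increase in squared error when each column is shuffled: one importance per variable."""
    base_err = np.mean((model.predict(X) - y) ** 2)
    importances = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        importances[j] = np.mean((model.predict(X_perm) - y) ** 2) - base_err
    return importances

def variable_importance_cloud(models, X, y, seed=0):
    """One point per near-optimal model in 'variable importance space'."""
    rng = np.random.default_rng(seed)
    return np.vstack([permutation_importance(m, X, y, rng) for m in models])

# cloud = variable_importance_cloud(good_models, X_val, y_val)
# cloud[i, j] is the importance of variable j according to model i.
```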

Rather than examining one single model, the proposed approach examines the set of all good predictive models. When enumerating all good models is impractical or impossible, the researchers either use sampling techniques to populate the cloud or optimization techniques to delineate the cloud’s edges.
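
One simple way to populate the cloud by sampling, not necessarily the exact scheme used in the paper, is to perturb an optimal model and keep only the perturbations whose loss stays within a small tolerance of the optimum. The sketch below assumes linear regression models and a hypothetical tolerance parameter `eps`.

```python
import numpy as np

def sample_good_linear_models(X, y, eps=0.05, n_samples=5000, scale=0.5, seed=0):
    """Return coefficient vectors whose MSE is within (1 + eps) of the least-squares optimum."""
    rng = np.random.default_rng(seed)
    w_star, *_ = np.linalg.lstsq(X, y, rcond=None)           # best (reference) model
    best_loss = np.mean((X @ w_star - y) ** 2)
    good_models = []
    for _ in range(n_samples):
        w = w_star + rng.normal(0.0, scale, size=w_star.shape)  # random perturbation
        loss = np.mean((X @ w - y) ** 2)
        if loss <= (1.0 + eps) * best_loss:                   # keep only near-optimal models
            good_models.append(w)
    return np.array(good_models)
```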

Dong explains that the shape of the variable importance cloud conveys rich information about the variables’ importance to the prediction task, much richer than approaches that consider only a single model. Beyond the upper and lower bounds of each variable’s importance, the variable importance cloud also reveals correlations between different variables’ importance.
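
As a small illustration of these summaries, assuming `cloud` is the (n_models, n_variables) array built earlier, the per-variable importance ranges and the correlations between variables’ importances across models can be read off directly; strongly negative correlations suggest variables that can substitute for one another in good models.

```python
import numpy as np

def summarize_cloud(cloud):
    lower = cloud.min(axis=0)                 # least importance any good model assigns
    upper = cloud.max(axis=0)                 # most importance any good model assigns
    corr = np.corrcoef(cloud, rowvar=False)   # correlation of importances across models
    return lower, upper, corr
```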

Variable importance clouds therefore reveal more about the predictive value of different variables than previous model evaluation approaches based on standard analyses, which discard all of the cloud’s information except for the single point corresponding to one model of interest.

Based on their findings, Dong believes that “[the key suggestion] is that one should be careful not to interpret the importance of one variable to one model as its overall importance.” In the paper, this cautionary note is conveyed through an example related to criminal recidivism prediction. In this example, one model may rely explicitly on the variable of race when predicting future crime while placing little weight on other, correlated variables; those other variables, such as age and number of prior crimes, are correlated with race due to systemic racism in society.

Image source: https://www.nature.com/articles/s42256-020-00264-0/figures/10

In conclusion, the study finds that researchers developing or using ML techniques should be careful about treating any single model as the definitive one for a given application, as other models with comparable or better performance may focus on different, and perhaps more consequential, variables. Variable importance clouds (VIC) could soon be applied to various fields, paving the way to better understand and use predictive machine-learning models.

Paper: https://arxiv.org/pdf/1901.03209v2.pdf

Shilpi is a Contributor to Marktechpost.com. She is currently in her third year of B.Tech in computer science and engineering at IIT Bhubaneswar. She has a keen interest in exploring the latest technologies. She likes to write about different domains and learn about their real-life applications.
