Altruist: A New Method To Explain Interpretable Machine Learning Through Local Interpretations of Predictive Models

Artificial intelligence (AI) and machine learning (ML) are the digital world's trendsetters in recent times. Although ML models can make accurate predictions, the logic behind those predictions often remains opaque to users. The lack of evaluation and selection criteria makes it difficult for end-users to choose the most appropriate interpretation technique.

How do we extract insights from these models? Which features should be prioritized when making predictions, and why? These questions remain prevalent. Interpretable Machine Learning (IML) grew out of the questions mentioned above. IML is a layer on top of ML models that helps human beings understand the procedure and logic behind a model's inner workings.

Ioannis Mollas, Nick Bassiliades, and Grigorios Tsoumakas have introduced a new methodology to make IML more reliable and understandable for end-users. Altruist, a meta-learning method, aims to help the end-user choose an appropriate feature-importance technique by providing interpretations through logic-based argumentation.

The meta-learning methodology is composed of the following components: 

  1. Trained Machine Learning Model: It must produce continuous values as output. This model's output serves as the prediction probabilities fed into the next component.
  2. Interpretation technique(s) and feature-importance technique(s).
  3. Altruist Truthful Investigator: It is responsible for investigating the truthfulness of the importance of each feature.
  4. Argumentation System: It is responsible for determining the maximum set of truthful features and providing an explanation.
  5. Maximum Truthful Calculator: It selects the interpretation with the minimal number of untruthful features.
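The pipeline above can be sketched in a toy form. The snippet below is a hypothetical illustration (not the authors' implementation): it treats a stated feature importance as "truthful" if perturbing that feature moves the model's continuous output in the direction the importance's sign predicts, then picks the candidate technique with the fewest untruthful features. The function names, the perturbation size `eps`, and the toy linear model are all assumptions made for the sketch.

```python
import numpy as np

def untruthful_features(predict, x, importances, eps=0.5):
    """Hypothetical truthfulness check: return indices of features whose
    stated importance disagrees with the model's observed behaviour."""
    base = predict(x)
    untruthful = []
    for i, z in enumerate(importances):
        x_up = x.copy()
        x_up[i] += eps                    # perturb feature i upwards
        delta = predict(x_up) - base      # observed effect on the output
        if z * delta < 0:                 # sign mismatch -> untruthful
            untruthful.append(i)
    return untruthful

def best_technique(predict, x, candidates):
    """Pick the technique whose interpretation has the fewest untruthful features."""
    return min(candidates,
               key=lambda name: len(untruthful_features(predict, x, candidates[name])))

# Toy model producing a continuous output (standing in for a prediction probability).
weights = np.array([2.0, -1.0, 0.0])
predict = lambda x: float(x @ weights)

x = np.array([1.0, 1.0, 1.0])
techniques = {
    "technique_A": np.array([1.5, -0.8, 0.1]),   # signs agree with the model
    "technique_B": np.array([-1.5, 0.8, 0.1]),   # signs reversed
}
print(best_technique(predict, x, techniques))    # -> technique_A
```

In this sketch, `technique_B` assigns importances whose signs contradict the model's actual response to perturbations, so the selector prefers `technique_A`, mirroring how Altruist favors the interpretation with the fewest untruthful features.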
