Deep learning-based Artificial Intelligence (AI) systems have a history of making overconfident predictions, even when those predictions are wrong, and the consequences can be serious. If a self-driving car confidently misidentifies the side of a tractor as a brightly lit sky and refuses to brake or alert the human driver, would you want to ride in it? I doubt it. And self-driving cars are not the only concern. There is a slew of other applications where an AI's ability to convey doubt is essential. For example, if a chatbot is uncertain about when a pharmacy closes but gives a confident, wrong answer anyway, a patient may not get the medicines they need.
Here’s where IBM’s Uncertainty Quantification 360 (UQ360) comes in to save the day. UQ360 lets an AI system communicate its uncertainty, making it more intellectually humble and safer to deploy. Its goal is to provide data scientists and developers with cutting-edge algorithms for quantifying, evaluating, improving, and communicating the uncertainty of machine learning models.
Standard explainability approaches reveal how an AI model works, while UQ exposes its limits and potential failure points. Users of a house price prediction model want to know the model’s margin of error so they can estimate their gains or losses. A product manager may observe that an AI model predicts a new feature A will perform better on average than a new feature B; still, to evaluate the worst-case impact on KPIs, the manager also needs to know the margin of error in those forecasts.
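To make the house-price example concrete, here is a minimal sketch of one common way to obtain such a margin of error: a bootstrap ensemble of simple fits, whose spread yields a prediction interval. The data, model, and numbers below are entirely hypothetical and are not the UQ360 API — just an illustration of the idea using NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical house-price data: size in square feet -> price in dollars.
X = rng.uniform(500, 3000, size=200)
y = 100 * X + 50_000 + rng.normal(0, 30_000, size=200)

# Bootstrap ensemble: refit a simple linear model on resampled data,
# and let the spread of predictions serve as the margin of error.
preds = []
for _ in range(500):
    idx = rng.integers(0, len(X), len(X))
    slope, intercept = np.polyfit(X[idx], y[idx], 1)
    preds.append(slope * 2000 + intercept)  # predict a 2,000 sq ft house

preds = np.array(preds)
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"Point estimate: ${preds.mean():,.0f}")
print(f"95% interval:   [${lo:,.0f}, ${hi:,.0f}]")
```

A user seeing the interval, not just the point estimate, can judge the worst-case loss rather than trusting a single number.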
Human-AI collaboration can also benefit from high-quality uncertainty estimates and effective uncertainty communication. Consider a nurse practitioner using artificial intelligence to assist in diagnosing skin conditions. The nurse practitioner accepts the AI's judgment when its confidence is high; otherwise, the AI's suggestion is set aside and the patient is referred to a dermatologist. Uncertainty estimates thus serve as a communication channel between the AI system and the human user, helping the team achieve the highest accuracy, robustness, and fairness.
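The accept-or-defer logic in the skin-condition scenario can be sketched in a few lines. The helper below is hypothetical (not part of UQ360): it thresholds the model's top-class confidence to decide whether to accept the AI's call or defer to a human expert.

```python
def route(class_probs, threshold=0.8):
    """Accept the AI's prediction only when its top-class confidence
    clears the threshold; otherwise defer to a human expert."""
    confidence = max(class_probs)
    return "accept" if confidence >= threshold else "defer"

# A confident prediction is accepted; an uncertain one is deferred.
print(route([0.92, 0.05, 0.03]))  # accept
print(route([0.48, 0.32, 0.20]))  # defer
```

The threshold itself is a design choice: lowering it accepts more AI decisions at the cost of more confident mistakes reaching patients.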
Now that we know what UQ is and where it helps, how do we actually use it?
The right UQ technique depends on the underlying model, the machine learning task (regression vs. classification), the characteristics of the data, and the user's goal. A UQ technique that does not produce high-quality uncertainty estimates may mislead users. As a result, before deploying an AI system, model developers must always evaluate the quality of its UQ and, if required, improve it. UQ360 is an open-source toolkit built for this: it includes a comprehensive collection of methods for quantifying uncertainty, along with capabilities for measuring and improving UQ, to speed up the development process.
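One standard diagnostic for evaluating UQ quality is the prediction interval coverage probability (PICP): the fraction of true values that actually fall inside their predicted intervals. UQ360 provides coverage-style metrics of this kind; the snippet below is a minimal NumPy sketch of the metric itself, with made-up numbers, rather than a call into the toolkit.

```python
import numpy as np

def picp(y_true, y_lower, y_upper):
    """Prediction Interval Coverage Probability: fraction of true values
    that fall inside their predicted [lower, upper] intervals."""
    y_true, y_lower, y_upper = map(np.asarray, (y_true, y_lower, y_upper))
    return float(np.mean((y_true >= y_lower) & (y_true <= y_upper)))

# A nominal 95% interval should cover roughly 95% of true values;
# far lower coverage signals overconfident uncertainty estimates.
y_true = [10.0, 12.5, 9.0, 11.0]
y_lo   = [ 9.0, 11.0, 9.5, 10.0]
y_hi   = [11.0, 13.0, 10.5, 12.0]
print(picp(y_true, y_lo, y_hi))  # 3 of 4 covered -> 0.75
```

Comparing measured coverage against the nominal level is exactly the kind of pre-deployment check the toolkit is meant to make routine.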
Users can find guidance here on choosing a UQ technique suited to their needs. For each UQ method provided in the UQ360 Python package, a developer can also determine an appropriate way to communicate its results; guidance for that is here. For some in-depth tutorials, you can refer to this.