Microsoft, in collaboration with the research organization MITRE and a dozen other organizations, including IBM, Nvidia, Airbus, and Bosch, has released the Adversarial ML Threat Matrix, a framework that aims to help cybersecurity experts prepare for attacks against artificial intelligence models.
With AI models being deployed across many fields, critical online threats to their safety and integrity are on the rise. The Adversarial Machine Learning (ML) Threat Matrix catalogs the techniques that malicious adversaries employ to destabilize AI systems.
AI models perform a variety of tasks, including identifying objects in images, by analyzing the data they ingest for common patterns. Researchers have shown that hackers can craft malicious patterns and feed them into AI systems to trick models into making mistakes. An Auburn University team even managed to fool a Google LLC image recognition model into misclassifying objects in photos by slightly adjusting the objects' position in each input image.
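The core idea behind such attacks can be sketched with a toy example. The snippet below is a hedged illustration, not a technique from the Threat Matrix itself: it uses an invented linear classifier and shows how a small, gradient-guided nudge to an input, too small to change what a human would see, can push it across the model's decision boundary.

```python
import numpy as np

# Toy adversarial-perturbation sketch (hypothetical model and data).
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # weights of an invented linear classifier
x = rng.normal(size=16)   # a benign input

def predict(v):
    """Return the class (0 or 1) assigned to input v by the linear model."""
    return int(np.dot(w, v) > 0)

score = np.dot(w, x)

# For a linear model, the gradient of the score with respect to the input is
# simply w, so the most efficient per-feature nudge follows sign(w).
# Choose a step size just large enough to cross the decision boundary.
eps = abs(score) / np.sum(np.abs(w)) + 0.01
x_adv = x - eps * np.sign(score) * np.sign(w)

print("original class:", predict(x))
print("adversarial class:", predict(x_adv))
print("largest per-feature change:", eps)
```

Real attacks against deep image models work on the same principle, but estimate the gradient through many nonlinear layers, which is why the required perturbation can be imperceptibly small.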
The organizations have contributed a collection of adversarial AI vulnerabilities and hacking tactics to the Adversarial ML Threat Matrix to help investigate and counter online attacks. One sample demonstrates a method of targeting AI models with malicious input data. Another describes a scenario in which attackers replicate an AI model, giving them a local copy they can probe for weak points in the neural network.
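The replication scenario, often called model extraction, can be sketched in miniature. The example below is a hypothetical illustration, not material from the matrix: the "victim" is an invented linear classifier standing in for a remote prediction API, and the attacker trains a local surrogate purely from query/label pairs, which can then be probed offline.

```python
import numpy as np

# Toy model-extraction sketch (hypothetical victim model and attacker).
rng = np.random.default_rng(1)
true_w = rng.normal(size=8)

def victim_api(v):
    """Stand-in for a remote model's prediction endpoint."""
    return int(np.dot(true_w, v) > 0)

# 1. The attacker queries the API with inputs of their own choosing.
queries = rng.normal(size=(500, 8))
labels = np.array([victim_api(q) for q in queries])

# 2. The attacker fits a local surrogate on the harvested labels --
#    here a simple perceptron, updated whenever it disagrees with the victim.
w_hat = np.zeros(8)
for _ in range(20):
    for q, y in zip(queries, labels):
        pred = int(np.dot(w_hat, q) > 0)
        w_hat += (y - pred) * q

# 3. Measure how often the surrogate agrees with the victim on fresh inputs.
fresh = rng.normal(size=(200, 8))
agree = np.mean([victim_api(v) == int(np.dot(w_hat, v) > 0) for v in fresh])
print(f"surrogate agrees with victim on {agree:.0%} of fresh inputs")
```

Once the surrogate tracks the victim closely, the attacker can run unlimited white-box experiments against it, such as the perturbation search above, without ever touching the real service again.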
Companies can use the framework to test their AI models' resilience by simulating realistic attack scenarios. Cybersecurity analysts can also use it to familiarize themselves with the threats their organizations' systems could face in the near future.
Microsoft says the framework's objective is to position attacks on ML systems in a structure that security professionals can use to orient themselves amid these new and emerging threats.