Presenting adversarial.js: An Interactive, In-Browser Demonstration of Adversarial Attacks on Neural Networks

Kenny Song, a graduate student at the University of Tokyo, developed adversarial.js, an interactive tool built with TensorFlow.js that shows how adversarial attacks work.

The threat that adversarial attacks pose to machine learning systems has been drawing increasing attention. Well-known examples include glasses that fool facial recognition systems and stickers pasted on stop signs that cause computer vision systems to mistake them for speed limit signs.

AI researchers and cybersecurity professionals are now working to educate people about adversarial attacks and to build more robust machine learning systems. Recently, adversarial.js was released on GitHub as part of that effort to raise awareness about machine learning security.

Crafting an adversarial example

Adversarial.js is written in TensorFlow.js, the JavaScript version of Google's popular deep learning framework. It is a lightweight demo that runs on a static webpage: everything ships as JavaScript on the page, making it easy for users to inspect and experiment with the code directly in the browser.
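
Because everything runs as client-side JavaScript, the basic setup can be sketched in a few lines of TensorFlow.js. The snippet below is an illustrative sketch rather than code from adversarial.js itself: it assumes the TensorFlow.js library has already been loaded on the page (for example via a script tag) and that the model file lives at a placeholder path.

// Illustrative sketch (not adversarial.js's own code).
// Assumes TensorFlow.js is available as the global `tf` and that
// 'model/model.json' is a placeholder path to a pre-trained tfjs classifier.
async function loadClassifier() {
  const model = await tf.loadLayersModel('model/model.json');
  model.summary();  // inspect the network's layers right in the browser console
  return model;
}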

Kenny Song has released a demo website hosting adversarial.js. Users choose a target deep learning model and a sample image against which to craft an adversarial attack. Before applying any malicious modifications, they can run the image through the neural network to see how it is classified.
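
In TensorFlow.js terms, that baseline classification step amounts to turning the selected image into a tensor and calling the model on it. The sketch below is a generic illustration; the element id, input size, and pixel scaling are assumptions, not the preprocessing adversarial.js actually uses.

// Hypothetical example: classify an <img id="sample"> element already on the page.
async function classify(model) {
  const img = document.getElementById('sample');
  let x = tf.browser.fromPixels(img).toFloat().div(255);  // HxWx3, pixels scaled to [0, 1]
  x = tf.image.resizeBilinear(x, [224, 224]);             // assumed input size
  x = x.expandDims(0);                                    // add a batch dimension
  const probs = model.predict(x);                         // class scores for the clean image
  const topClass = (await probs.argMax(-1).data())[0];
  console.log('Predicted class index:', topClass);
  return topClass;
}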

As adversarial.js shows, a well-trained ML model predicts an image's correct label with high accuracy. The next step is to create an adversarial example by modifying that image. Although the changes are usually imperceptible to humans, the altered image causes the targeted ML model to change its output. After choosing a target label and an attack technique and clicking "Generate," adversarial.js produces a subtly modified version of the image. Depending on the chosen method, these modifications can be more or less apparent to the naked eye.
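
A canonical way to compute such a perturbation is the fast gradient sign method (FGSM): take the gradient of the loss for an attacker-chosen target label with respect to the input pixels, and step the image a small distance epsilon against that gradient's sign. The TensorFlow.js sketch below is a generic targeted FGSM under assumed parameter names; it is not adversarial.js's own implementation, and it assumes the model outputs logits.

// Targeted FGSM sketch (epsilon, class count, and logits output are assumptions).
function targetedFgsm(model, x, targetClass, epsilon = 0.1, numClasses = 10) {
  const targetOneHot = tf.oneHot([targetClass], numClasses);
  // Cross-entropy between the model's output and the attacker-chosen target label.
  const lossFn = (input) =>
    tf.losses.softmaxCrossEntropy(targetOneHot, model.predict(input));
  // Gradient of that loss with respect to the input pixels.
  const grad = tf.grad(lossFn)(x);
  // Step *against* the gradient sign to make the target label more likely,
  // then clip so the adversarial image stays a valid image in [0, 1].
  return x.sub(tf.sign(grad).mul(epsilon)).clipByValue(0, 1);
}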

Adversarial attacks are not an exact science, and adversarial.js illustrates this well. Anyone who plays with the tool for a while will notice that the attack techniques do not work consistently. In some cases the perturbation does not change the ML model's output at all; instead, it merely lowers the model's confidence in the original label.
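
One way to observe this in code is to compare the model's top label and confidence before and after the perturbation: sometimes the label flips, and sometimes only the confidence drops. The helper below is a hypothetical sketch of that comparison, assuming the model outputs a probability vector.

// Hypothetical check: did the attack flip the label, or only lower the confidence?
async function compareOutputs(model, x, xAdv) {
  const p = model.predict(x);
  const pAdv = model.predict(xAdv);
  const cls = (await p.argMax(-1).data())[0];
  const clsAdv = (await pAdv.argMax(-1).data())[0];
  const conf = (await p.max(-1).data())[0];
  const confAdv = (await pAdv.max(-1).data())[0];
  if (cls !== clsAdv) {
    console.log(`Attack succeeded: class ${cls} -> ${clsAdv}`);
  } else {
    console.log(`Label unchanged; confidence ${conf.toFixed(3)} -> ${confAdv.toFixed(3)}`);
  }
}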

The threat of adversarial attacks

Today, machine learning (ML) models are an essential component of many applications running on computers, phones, home security cameras, smart fridges, smart speakers, and many other devices. Adversarial vulnerabilities make these ML systems unpredictable in unusual environments, so it is crucial to understand the threats these attacks pose to ML systems.

Kenny Song hopes that this project can help people understand these risks and motivate them to invest resources to address them.

GitHub: https://github.com/kennysong/adversarial.js

Demo: https://kennysong.github.io/adversarial.js/