Researchers At Skoltech Institute Explain How Turing-Like Patterns Cause Neural Networks To Make Mistakes

Although adept at image recognition and classification, deep neural networks remain vulnerable to adversarial perturbations: small, peculiar details in an image that cause errors in the network's output. Some of these perturbations are universal, meaning they interfere with the network when added to almost any input.
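The defining property of a universal perturbation is that the same small pattern is added to every image, regardless of its content. The sketch below illustrates that idea only; the function and variable names are placeholders, not code from the paper.

```python
import torch

def apply_uap(images: torch.Tensor, delta: torch.Tensor, eps: float = 10 / 255) -> torch.Tensor:
    """Add one fixed, input-agnostic perturbation to a whole batch of images.

    `images` is a batch in [0, 1]; `delta` is the universal perturbation,
    kept inside a small L-infinity ball so it stays barely visible.
    """
    delta = delta.clamp(-eps, eps)          # keep the perturbation small
    return (images + delta).clamp(0.0, 1.0)  # same delta broadcast over every image
```

A classifier would then be evaluated on `apply_uap(batch, delta)` instead of `batch`; a successful UAP noticeably degrades accuracy even though `delta` was computed without looking at these particular images.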

A research paper presented at the 35th AAAI Conference on Artificial Intelligence by researchers at Skoltech demonstrated that the patterns that cause neural networks to make mistakes in image recognition are, in fact, similar to the Turing patterns found throughout the natural world. This result can help in designing defenses for pattern recognition systems that are currently susceptible to such attacks.

Adversarial perturbations represent a significant security risk: for example, in 2018, a team of researchers published a preprint describing how to trick self-driving vehicles into perceiving benign advertisements and logos as road signs.

Professor Ivan Oseledets and his colleagues at the Skoltech Computational Intelligence Lab further explored a theory that establishes a connection between Universal Adversarial Perturbations (UAPs) and classical Turing patterns. Turing patterns were first described by the well-known English mathematician Alan Turing as the driving mechanism behind many patterns in nature, such as stripes and spots on animals.
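Turing patterns arise from reaction-diffusion dynamics: two interacting substances that diffuse at different rates spontaneously form spots and stripes. The following is a minimal simulation sketch of the classic Gray-Scott reaction-diffusion system, included only to show how such patterns emerge; the parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """Simulate a Gray-Scott reaction-diffusion system on an n x n grid.

    Two chemicals U and V diffuse at different rates and react; the slow
    instability this creates produces Turing-like spots and stripes.
    """
    U = np.ones((n, n))
    V = np.zeros((n, n))
    r = n // 10
    # Seed a small square of V in the middle to break the symmetry.
    U[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.50
    V[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = 0.25

    def laplacian(Z):
        # Five-point stencil with periodic boundary conditions.
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
                + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return V  # the V field carries the spotted/striped Turing pattern
```

Plotting the returned field (for instance with `matplotlib.pyplot.imshow`) reveals the kind of organic spot-and-stripe texture that the researchers found UAPs resemble.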

The research started by chance, when Oseledets and Valentin Khrulkov presented a paper on generating UAPs at a conference in 2018. According to them, a stranger in the audience remarked that these patterns looked like Turing patterns.

The nature and origins of adversarial perturbations remain a mystery. Oseledets comments that one reason adversarial attacks are so hard to defend against is the lack of theory behind them. Their work takes a step toward an explanation by linking the intriguing properties of UAPs to Turing patterns, which rest on well-understood mathematics, and could help in constructing a theory of adversarial examples in the future.

According to Oseledets, the simplest way to build more robust models using such patterns is to add them to images and then train the network on the perturbed data. Prior research had shown that natural Turing patterns can fool a neural network; the team demonstrated this connection more directly.
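A minimal sketch of that augmentation idea is shown below, assuming a PyTorch classifier, a data loader, and a precomputed Turing-like pattern; all of the names and the mixing probability are illustrative assumptions, not the authors' exact training recipe.

```python
import torch
import torch.nn.functional as F

def train_with_pattern(model, loader, pattern, optimizer, eps=10 / 255, p=0.5):
    """Standard supervised training, except that with probability p each batch
    has a fixed Turing-like pattern (scaled to a small eps) added to it first.

    `pattern` is assumed to be a tensor of the same spatial shape as the images,
    with values roughly in [-1, 1].
    """
    model.train()
    for images, labels in loader:
        if torch.rand(1).item() < p:
            # Augment with the perturbation so the network learns to ignore it.
            images = (images + eps * pattern).clamp(0.0, 1.0)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
```

The design mirrors ordinary adversarial training: exposing the network to the perturbation during training is intended to make it less sensitive to the same pattern at test time.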

Paper: https://arxiv.org/abs/1801.02780

Source: https://techxplore.com/news/2021-03-team-turing-like-patterns-neural-networks.html

