Identifying and recognizing faces is a fundamental skill for social interaction. This ability is thought to arise from single- or multi-neuron tuning: face-selective neurons respond preferentially to faces and are considered the building blocks of face identification.
The discovery of this fascinating neuronal tuning has piqued the interest of neuroscientists, who are now debating whether face-selective neurons can arise innately in the brain or require visual experience, and whether tuning to faces is functionally distinct from tuning to other visual objects.
A KAIST research team shows that visual selectivity for facial images can emerge even in deep neural networks that have never been trained.
This discovery sheds light on the principles underpinning the development of cognitive capabilities in both biological and artificial neural networks. It shows that neuronal responses specific to facial images can be observed in randomly initialized deep neural networks in the absence of learning, and that these responses share key features with those seen in real brains.
Using AlexNet, a model neural network that captures features of the ventral visual stream, the researchers discovered that face-selectivity can emerge spontaneously from random feedforward wirings in untrained deep networks. They demonstrated that this intrinsic face-selectivity is similar in nature to that of face-selective neurons in the brain, and that this spontaneous tuning for faces allows the network to perform face-detection tasks.
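The idea of measuring face-selectivity in an untrained network can be illustrated with a toy sketch. The study itself used an untrained AlexNet and natural face/object photographs; the snippet below is a deliberately simplified stand-in, with synthetic stimuli and a small random feedforward network of my own invention, computing a standard selectivity index per unit. All names, sizes, and the 0.3 threshold are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stimuli standing in for face vs. non-face images
# (the actual study used natural photographs and an untrained AlexNet).
n_face, n_obj, n_pix = 50, 50, 256
faces = rng.normal(0.5, 1.0, (n_face, n_pix))
objects = rng.normal(0.0, 1.0, (n_obj, n_pix))

def random_feedforward(x, sizes=(256, 128, 64), seed=1):
    """An 'untrained' network: randomly initialized weights, ReLU units."""
    w_rng = np.random.default_rng(seed)
    h = x
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = w_rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        h = np.maximum(h @ W, 0.0)  # ReLU, so all responses are >= 0
    return h

r_face = random_feedforward(faces).mean(axis=0)    # mean response per unit
r_obj = random_feedforward(objects).mean(axis=0)

# Face-selectivity index per unit: +1 means the unit responds only to faces,
# -1 only to objects. Responses are nonnegative, so the index lies in [-1, 1].
fsi = (r_face - r_obj) / (r_face + r_obj + 1e-12)
n_selective = int(np.sum(fsi > 0.3))  # illustrative threshold, not the paper's
```

Even with purely random weights, some units will show a nonzero selectivity index; the study's claim is that in a deep hierarchical network this spontaneous tuning becomes brain-like and robust, not an artifact of one random draw.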
These findings suggest that the random feedforward connections that form in early, untrained networks may be sufficient to initialize primitive visual cognitive functions. Even in the absence of learning, intrinsic cognitive processes could develop spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry.