UCSD Researchers Develop an Artificial Neuron Device That Could Reduce Energy Use and Size of Neural Network Hardware

Researchers at the University of California San Diego have developed a novel artificial neuron device that could allow neural networks to be trained for tasks like image recognition or self-driving car navigation with less computing power and hardware. The device performs neural network computations using 100 to 1,000 times less energy and area than existing CMOS-based hardware. The work has been published in a paper in Nature Nanotechnology.

The basic idea behind neural networks is that each layer’s output is fed as the input to the next layer, and generating those inputs requires a non-linear activation function. In conventional hardware, however, computing this function entails shuttling data back and forth between two separate units, the memory and an external processor, which demands a significant amount of computational power and circuitry.
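To make the layer-to-layer flow concrete, here is a minimal sketch in Python (using NumPy) of a forward pass in which each layer's output, passed through a non-linear activation, becomes the next layer's input. The weight matrices and input values are illustrative assumptions, not anything from the paper.

```python
import numpy as np

def forward(x, weights, activation):
    # Each layer's output, passed through a non-linear activation
    # function, becomes the next layer's input.
    for W in weights:
        x = activation(W @ x)
    return x

# ReLU: the activation function the UCSD device implements in hardware.
relu = lambda v: np.maximum(v, 0.0)

# Hypothetical two-layer network with hand-picked weights.
W1 = np.array([[1.0, -1.0],
               [0.5,  0.5]])
W2 = np.array([[1.0, 1.0]])
out = forward(np.array([2.0, 3.0]), [W1, W2], relu)  # → [2.5]
```

In software, the activation call is cheap to write but, as the article notes, costly in hardware when it forces data to move between memory and a separate processor.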

The researchers have now built a nanometer-scale device that performs the activation function efficiently. Duygu Kuzum, a professor at the UC San Diego Jacobs School of Engineering, says that neural network computations in hardware become highly inefficient as the models grow larger and more complex. Her team has therefore developed a single nanoscale artificial neuron device that implements these computations in hardware in a very area- and energy-efficient way.

The device implements a rectified linear unit (ReLU), one of the most common activation functions used in neural network training. Implementing this function in hardware requires circuitry whose resistance can be adjusted gradually, so the device is designed to transition slowly from an insulating to a conducting state with the help of a small amount of heat.
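For reference, the rectified linear unit itself is a very simple function: it outputs zero for negative inputs and passes non-negative inputs through unchanged. A minimal NumPy sketch:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: 0 for negative inputs,
    # identity for non-negative inputs.
    return np.maximum(x, 0.0)

relu(np.array([-2.0, -0.5, 0.0, 1.5]))  # → [0., 0., 0., 1.5]
```

The gradual resistance change mentioned above matters because realizing this piecewise-linear response in analog hardware requires the device's conductance to be tunable in small, controlled steps rather than flipping abruptly between two states.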

This transition is known as a Mott transition. It takes place in a vanadium dioxide layer that is only a few nanometers thick. A titanium and gold nanowire heater sits on top of this layer. The vanadium dioxide layer progressively heats up as current flows through the nanowire, causing a steady, controlled transition from insulating to conducting. The researchers first created an array of these activation (or neuron) devices, as well as a synaptic device array. They then joined the two arrays together on a custom printed circuit board to build a hardware version of a neural network.
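As a toy illustration of the idea of a gradual, heat-driven insulator-to-conductor transition, the sketch below models normalized conductance as a smooth function of heater current. The sigmoid shape and every parameter value here are illustrative assumptions for intuition only, not the device physics or numbers from the paper.

```python
import numpy as np

def conductance(heater_current, i_threshold=1.0, sharpness=5.0):
    # Toy model (not from the paper): normalized conductance rises
    # smoothly from ~0 (insulating) to ~1 (conducting) as Joule
    # heating from the nanowire drives the material through the
    # transition. i_threshold and sharpness are made-up parameters.
    return 1.0 / (1.0 + np.exp(-sharpness * (heater_current - i_threshold)))
```

The key property this models is gradualness: because the conductance ramps up steadily rather than switching abruptly, the device can realize the sloped portion of a ReLU-like response.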

The researchers tested the network on an image-processing task: edge detection, a method that identifies the outlines or edges of objects in a picture. The experiment showed that the integrated hardware system can efficiently execute convolution operations, which many deep neural networks rely on.
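To show what such a convolution-based edge detection looks like in software terms, here is a minimal NumPy sketch. The tiny image and the simple horizontal-gradient kernel are illustrative assumptions, not the data or kernels used in the experiment.

```python
import numpy as np

def convolve2d(image, kernel):
    # Valid-mode 2-D sliding-window operation (no padding), in the
    # cross-correlation convention used by convolutional layers.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical edge: dark left half, bright right half.
img = np.zeros((4, 4))
img[:, 2:] = 1.0

# A simple horizontal-gradient kernel responds only at the boundary.
kernel = np.array([[-1.0, 1.0]])
edges = convolve2d(img, kernel)  # nonzero only where the edge sits
```

Running many such kernels over an image is exactly the workload of a convolutional layer, which is why efficient convolution on the hardware array is the headline result of the experiment.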

According to the researchers, the technology could be scaled up to handle more complicated tasks in self-driving cars, such as facial and object recognition. Kuzum says this could happen if the industry shows interest and collaborates.

Paper: https://www.nature.com/articles/s41565-021-00874-8

Source: https://ucsdnews.ucsd.edu/pressrelease/artificial-neuron-device-could-shrink-energy-use-and-size-of-neural-network-hardware

Shilpi is a contributor to Marktechpost.com. She is currently pursuing the third year of a B.Tech in computer science and engineering at IIT Bhubaneswar. She has a keen interest in exploring the latest technologies and likes to write about different domains and learn about their real-life applications.
