Penn Engineers Develop a New Chip Using a Deep Neural Network of Optical Waveguides That Can Classify Nearly 2 Billion Images Per Second

This article is written as a summary by Marktechpost Staff based on the paper 'An on-chip photonic deep neural network for image classification'. All credit for this research goes to the researchers of this project. Check out the paper and post.


Penn engineers designed a novel chip that uses a deep neural network of optical waveguides to recognize and classify an image in less than a nanosecond without needing a separate processor or memory unit.

The study, published in Nature, explains how the chip’s many optical neurons are linked together using optical wires, or “waveguides,” to construct a deep network of many “neuron layers” that resembles the human brain. Information flows through the network’s layers, with each step helping to classify the input image into one of the learned categories. The images classified by the chip in the study were hand-drawn, letter-like characters.

Artificial intelligence (AI) is used in systems ranging from text prediction to medical diagnosis. Many AI systems are built on artificial neural networks: electronic analogs of biological neurons, inspired by the human brain, that are trained on a set of known data, such as photographs, and then used to detect or classify new data points.
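As an illustrative sketch only (not the chip's optical implementation), the layered forward pass such a network performs can be written in a few lines; the layer sizes and random weights below are hypothetical stand-ins for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a tiny 6x6 input image, one hidden "neuron
# layer" of 16 units, and 2 output categories (e.g. two letter shapes).
W1 = rng.normal(size=(36, 16))   # input -> hidden weights
W2 = rng.normal(size=(16, 2))    # hidden -> output weights

def classify(image):
    """Forward pass of a minimal feedforward network: each layer
    multiplies by its weights and applies a nonlinearity; the index
    of the largest output score is the predicted category."""
    x = image.reshape(-1)            # flatten 6x6 -> 36 inputs
    h = np.maximum(0.0, x @ W1)      # hidden layer with ReLU
    scores = h @ W2                  # one score per category
    return int(np.argmax(scores))

image = rng.random((6, 6))
label = classify(image)              # 0 or 1
```

The photonic chip performs an analogous layered computation, but with light propagating through waveguides instead of digital multiply-accumulates.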

The researchers’ chip, which is less than a square centimeter in size, can recognize and classify an image in less than a nanosecond without using a separate processor or memory unit.


In conventional neural networks used for image recognition, an image of the target object is first formed on an image sensor, such as the digital camera in a smartphone. The sensor then converts light into electrical signals, which are in turn converted into binary data that computer processors can process, analyze, store, and classify. Accelerating these steps is critical for a variety of applications, including facial recognition, automatically recognizing text in photographs, and helping self-driving cars detect obstacles.
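The capture-and-digitize chain described above can be sketched in a few lines of Python; `sensor_pipeline` is a hypothetical name, and the ideal photodiode response is a simplifying assumption:

```python
def sensor_pipeline(light, bits=8):
    """Sketch of the conventional image-capture chain: incident light
    intensities (0.0-1.0) become an electrical signal, which an ADC
    quantizes into binary codes a processor can store and classify."""
    levels = 2 ** bits - 1
    codes = []
    for intensity in light:
        clamped = min(max(intensity, 0.0), 1.0)  # photodiode saturation
        codes.append(round(clamped * levels))    # ADC quantization step
    return codes

print(sensor_pipeline([0.0, 0.25, 1.0]))  # [0, 64, 255]
```

Every stage in this chain costs time; the Penn chip's approach is to skip the electrical conversion and binary encoding entirely by computing on the light itself.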

While consumer-grade image classification in most applications can be served by a digital chip executing billions of computations per second, more demanding applications, such as identifying moving objects, recognizing 3D objects, and classifying microscopic cells in the body, push even the most sophisticated technology to its limits. The current speed limit of these technologies is the linear order of computing steps in a processor governed by a clock-based schedule.
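The speed figures in this article imply a simple back-of-envelope budget: at nearly 2 billion images per second, the chip spends about half a nanosecond per image, comparable to a single tick of a multi-GHz digital clock (the 3 GHz figure below is a hypothetical example):

```python
images_per_second = 2e9              # "nearly 2 billion images per second"
time_per_image_ns = 1e9 / images_per_second
clock_period_ns = 1e9 / 3e9          # one cycle of a hypothetical 3 GHz clock

print(f"{time_per_image_ns:.2f} ns per image")   # 0.50 ns per image
print(f"{clock_period_ns:.2f} ns per clock cycle")
```

In other words, a clocked digital processor has roughly one cycle's worth of time per image at this rate, which is far too little for the many sequential steps a conventional pipeline requires.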

To overcome this restriction, Penn Engineers have developed the first scalable chip that classifies and recognizes images almost instantly. A professor of electrical and systems engineering, working with a postdoctoral fellow and a graduate student, eliminated the four main time-consuming components of a traditional computer chip: optical-to-electrical signal conversion, the need to convert input data to binary format, a large memory module, and clock-based computations.

They did so by implementing an optical deep neural network on a 9.3-square-millimeter chip that processes light received directly from the object of interest.

