Artificial intelligence may never be able to take the place of the human brain. One reason is that the brain physically rewires itself whenever it learns something new.
Purdue University researchers have developed computer chips that can continually rearrange themselves to take in new data, much as the brain does, allowing AI to keep learning over time. Unlike the brain, which constantly forms new synaptic connections to support learning, the circuits on a conventional computer chip never change: a circuit a machine has used for years is identical to the one fabricated for it at the factory. The goal is to build a platform that lets machines learn throughout their lifetimes.
There are also obstacles to making AI more portable, for example in autonomous vehicles or space robots that must make decisions on their own in remote locations. Such machines would be more efficient if AI could be built directly into hardware rather than running purely in software, as it does today.
If a brain-inspired computer or machine is to be built, the AI community must be able to continually program, reprogram, and update the chip. In this study, the research team created a novel piece of hardware that can be reprogrammed on demand using electrical pulses. According to the researchers, this adaptability would allow the device to perform all of the operations required to build a brain-inspired computer.
Edging closer to building a brain in chip form:
The hardware is a tiny, rectangular device made of perovskite nickelate, a material that is highly sensitive to hydrogen. By applying electrical pulses at different voltages, the device can rearrange its hydrogen concentration within nanoseconds, producing states that the researchers found could be mapped to corresponding functions in the brain.
With more hydrogen near the device's center, it can operate as a neuron, a single nerve cell. With less hydrogen at that location, the device acts as a synapse, the junction between neurons that the brain uses to store memories in complex neural circuits.
Based on simulations of the experimental data, the research team concluded that the device's intrinsic physics gives rise to a dynamic structure for an artificial neural network. This network uses "reservoir computing," an approach inspired by how different brain regions communicate and exchange information, and it recognized electrocardiogram (ECG) patterns and handwritten digits better than static networks.
Reservoir computing is a computational framework, derived from recurrent neural network theory, that maps input signals into higher-dimensional computational spaces through the dynamics of a fixed, nonlinear system called a reservoir. Once the input signal has been fed into the reservoir, which is treated as a "black box," a simple readout is trained to read the state of the reservoir and map it to the desired output.
This approach has two advantages. First, because the reservoir dynamics are fixed, training is needed only at the readout stage. Second, naturally available computational resources, both classical and quantum mechanical, can be exploited to reduce the effective computational cost.
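The idea above can be sketched in software with an echo state network, a standard form of reservoir computing. This is a minimal illustration, not the study's actual model: the reservoir size, scaling factors, and toy sine-prediction task are all hypothetical choices. Note that the reservoir weights `W_in` and `W` are random and never trained; only the linear readout `W_out` is fitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100  # hypothetical sizes: 1 input, 100 reservoir units

# Fixed random weights -- the "black box" reservoir is never trained
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(u):
    """Drive the fixed nonlinear reservoir with input sequence u; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)  # high-dimensional reservoir states, one row per time step

# Only the linear readout is trained, here via ridge regression
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
mse = np.mean((pred - y) ** 2)
print("train MSE:", mse)
```

Because training touches only `W_out`, the expensive recurrent dynamics can be delegated to any fixed physical system, which is exactly the role the nickelate device plays in hardware.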
The study team also demonstrated the device's robustness: after a million programming cycles, reconfiguration across all functions remained remarkably consistent. The researchers believe the technology can be adopted directly by the semiconductor industry. The next phase is to demonstrate these concepts on large-scale test chips that will be used to build a brain-inspired computer.