Throughout the last decade, the application of artificial intelligence (AI) has grown at a rapid pace, partly due to the development of graphics processing units and domain-specific hardware accelerators. CMOS (complementary metal-oxide-semiconductor) technology has enabled this expansion thus far.
Because of today’s evolving paradigm of “in-memory” (also known as “in-situ”) computing, more efficient AI applications can now be developed than ever before. The ability of computational memory to store data and conduct computations like multiplications and additions to that data is critical to this improved efficiency. Data no longer needs to be transferred between storage and computation units, which is a primary source of inefficiency in traditional AI systems.
To explain these computational approaches and the research behind them, we have summarized a research article published by Ericsson.
As promising candidates for the fundamental building blocks of such computational memories, researchers at the Massachusetts Institute of Technology (MIT) have selected non-CMOS ‘lithionic’ memristive devices (‘lithionic’ refers to lithium-based oxides). Lithium is an element best known from lithium-ion batteries, but researchers at MIT are working on expanding its application beyond energy storage to information processing.
The primary focus of this investigation is 6G networks and connected 6G devices, which are increasingly projected to rely on AI and machine learning (ML). In environmental sensing, for example, more and more sensor data will be generated and analyzed close to where it is collected. In addition to saving energy, this type of local data processing would help to protect sensor data and promote user privacy.
Unlocking neuromorphic potential
The use of lithionic memristive devices for computer memory is still in the exploratory stage and is not without its difficulties. The use cases for lithionic memristive technology, the underlying memristive architectures, and approaches for obtaining desirable performance characteristics are being examined.
Lithionic memristive technology, a new enabler for efficient neuromorphic systems, may emerge soon as progress continues in neuromorphic computing.
Definition of a memristive device
One of the essential components in electronics is the memristor (memory resistor), which is approximated in practice by memristive devices. The defining signature of an ideal memristor, known as a “pinched current-voltage hysteresis loop,” is not perfectly replicated by any physical memristive device. Even so, “memristor” and “memristive device” are often used interchangeably.
You can think of a “memory resistor” as a device that responds to external stimuli and retains that response. Despite their distinct composition, memristors are analogous to other non-volatile memory technologies such as Flash.
Using Ohm’s law to perform multiplication is an example of how memristors can be utilized for both data storage and computation. Memristor storage capacity varies widely and can reach eight bits per device, depending on various factors. More bits per memristor is generally better; nevertheless, there is a tradeoff between the number of bits and other properties, including switching speed, retention, and endurance.
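The storage-plus-multiplication idea above can be sketched in a few lines of Python. This is a simplified model, not device physics: the conductance range `G_MIN`/`G_MAX` and the mapping from an 8-bit level to a conductance are illustrative assumptions, not values from the article.

```python
# A memristor programmed to conductance G multiplies an applied voltage V
# via Ohm's law: the read-out current is I = G * V.
# Hypothetical numbers: an 8-bit device offers 256 conductance levels.
G_MIN, G_MAX, LEVELS = 1e-6, 1e-4, 256  # siemens; assumed device range

def program(level: int) -> float:
    """Map an integer level (0..255) to a stored conductance value."""
    assert 0 <= level < LEVELS
    return G_MIN + (G_MAX - G_MIN) * level / (LEVELS - 1)

def read_current(conductance: float, voltage: float) -> float:
    """Ohm's law: the device 'computes' a multiplication in place."""
    return conductance * voltage

g = program(128)           # store a value as a conductance
i = read_current(g, 0.2)   # multiply it by an input voltage of 0.2 V
print(f"G = {g:.3e} S, I = {i:.3e} A")
```

The point is that the stored value (the conductance) never leaves the device: applying a read voltage performs the multiplication where the data lives.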
One of the main advantages of building memristors from new materials is that they can scale beyond conventional memory technologies such as Flash (e.g., down to 10 nm or even below). By comparison, CMOS has already scaled below 10 nm, and high-volume devices ship with chips as small as 5 nm. Moreover, memristors could offer improved operating properties, such as faster switching, longer endurance, and better scaling than Flash. Their long retention period and low power dissipation make them ideal candidates for future computing systems.
Oxygen-ion memristor (left) and new Li-based memristor (right) are shown in the figure above.
In the conventional construction of a memristor (left), a transition metal oxide is placed between two electrodes. So-called conductive filaments can grow or dissolve within the oxide, thereby altering its resistance. Factors influencing this process include the strength and direction of the electric field and ion mobility. The downward arrow indicates a possible flow of current.
The conductive filaments themselves pose the most significant obstacle to developing memristors based on oxygen, silver, or copper ions.
The growth and dissolution of conductive filaments are complicated processes that are still being studied, which makes fine-tuning resistance difficult. Non-idealities, including stochasticity, non-linearity, and asymmetry, contribute to memristors’ noisy behavior. Noise-resilient applications can adapt their algorithms and hardware to account for this noise; nevertheless, the noise level remains prohibitive for applications requiring high-precision calculation.
Innovative material science solutions are required to meet these difficulties. Lithium-based oxides are among the novel materials being studied by MIT researchers. To modify the resistance of the oxide, these ‘lithionic’ memristors use processes called phase separation and metal-to-insulator transition. Lithium ions also have higher mobility than oxygen, silver, or copper ions, which may allow for micro- to nanosecond switching in some applications.
Although lithium’s use in battery technology is well understood, little is known about lithium-based oxides in memristive technology. The fabrication process of lithionic memristors has yet to be perfected in terms of scalability and compatibility with CMOS. Furthermore, early system-level analyses of latency, power, and area can guide the development of lithionic memristors.
Neuromorphic computing aims to bridge the gap between conventional computers and the human brain. It has three main characteristics: excellent power efficiency, rapid response, and adaptability to changing conditions. Before discussing how memristors (generally speaking, not only lithionic memristors) can enable neuromorphic computing, let’s go through each of these qualities.
1. Power efficiency
Here’s a simple comparison to show the kind of power efficiency neuromorphic computing aims to achieve:
It is estimated that the human brain carries out sophisticated cognitive tasks using less than 20 watts, thousands of times less power than the most energy-efficient digital computing devices. Under various assumptions and calculations, the brain’s volume could hold roughly 22,000 modern system-on-chip (SoC) devices, while a single SoC can have a power budget comparable to the entire brain. In conventional architectures, memory and processing units are separated, and every computation requires data movement between these two realms, which consumes an inordinate amount of power. The first idea that neuromorphic computing takes from nature, specifically the brain’s synapses, is therefore to co-locate data and processing (recall the computational memory mentioned at the beginning of this post). This is an excellent feature for running AI/ML models like deep neural networks, which have many parameters and demand vast amounts of memory and processing.
2. Quickness of response
The human brain’s 86 billion neurons are linked by roughly 860 trillion synapses, with each neuron connected to around 10,000 others. Synaptic connections represent information; therefore, the brain executes ‘processing’ at the location where data is stored. Even for complicated cognitive tasks, fast inference and perception can be achieved by combining these principles: local processing at distinct time scales and massive parallelism.
3. Adaptability under changing conditions
Continuous learning is essential for adaptability and is accomplished by varying the strength of synaptic connections. The brain’s ability to learn and adapt quickly and effectively from only a few examples is one of its most remarkable features. In artificial neural networks, continuous learning necessitates highly energy-efficient hardware because training with a small number of samples is not yet a mature technique. Memristors might be helpful here for several reasons:
- Biochemical synapses have been compared to memristors in their dynamics (for instance, stochastic switching, short-term plasticity, and accumulative behavior).
- Because of their ability to operate as a computational memory, memristors can reduce the amount of data that must be moved around, allowing for higher power efficiency.
- Analog crossbar arrays, in which vast numbers of memristors are organized into stable structures, can be used as computational memory units to achieve high density. To achieve high on-chip density, high parallelism, and excellent power efficiency, large numbers of these units can be coupled and merged into a single neuromorphic chip.
Analog crossbar arrays based on memristive devices are inspired by the human brain. Just as synapses weight the input signals arriving on the axons of one or more pre-synaptic neurons and dendrites sum the partial results, memristors in a crossbar array (their conductances acting as weights w1-w3) weight the input signals, and the array sums the partial results.
Memristive technology has significant potential for implementing biologically plausible spiking neural networks in hardware. More immediately, it is best matched by applications that perform many multiplications and accumulations, and analog crossbar arrays are best suited to applications that execute large numbers of such operations in parallel. Vector-matrix multiplication (VMM) is one such key computational kernel, used extensively in artificial neural networks, hyperdimensional computing, and other applications.
In a crossbar array, memristors link horizontal wires (so-called word lines) and vertical wires (so-called bit lines) in a two-dimensional grid. According to Ohm’s law, the current flowing from a word line to a bit line depends on the memristor’s conductance. Each memristor thus functions as a multiplier of the input voltage by its conductance.
An analog crossbar array can be used to perform vector-matrix multiplication. For clarity, only a 3×3 array is displayed (real arrays can be 512×512, 1024×1024, etc.). Ohm’s law and Kirchhoff’s current law (e.g., I1 = I11 + I21 + I31) determine the output current vector I from the input voltage vector V and the matrix G of memristor conductance values.
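The 3×3 crossbar computation described above can be verified numerically. The voltage and conductance values below are arbitrary illustrative numbers; the point is that applying Ohm’s law at every crosspoint and Kirchhoff’s current law along every bit line reproduces an ordinary vector-matrix product.

```python
import numpy as np

# Input voltages drive the word lines (volts).
V = np.array([0.1, 0.2, 0.3])

# 3x3 conductance matrix: G[i, j] sits where word line i crosses bit line j.
G = np.array([[1e-5, 2e-5, 3e-5],
              [4e-5, 5e-5, 6e-5],
              [7e-5, 8e-5, 9e-5]])  # siemens; arbitrary example values

per_device = V[:, None] * G   # Ohm's law: I_ij = G_ij * V_i at every crosspoint
I = per_device.sum(axis=0)    # Kirchhoff: each bit line sums its column currents

# The analog array has computed the vector-matrix product in one step.
assert np.allclose(I, V @ G)
print(I)
```

Note that the crossbar performs the entire multiplication in a single analog read operation, whereas a digital processor would need one multiply-accumulate per matrix element.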
Neuromorphic computing employing lithionic memristors offers a wide range of fascinating research opportunities. With enormous difficulties and a creative strategy based on use cases, it’s a challenging but rewarding job.
Neuromorphic systems could greatly benefit from lithionic memristors. Lithium-based materials have been the subject of battery research for decades; however, a fundamental understanding of how to construct memristors with precisely adjustable conductance is still absent. Material compatibility with the CMOS process is yet another area that needs further investigation.
There is much room for innovation in hardware architectures that use memristors. At the top of the design space, decisions are driven by the application’s performance needs, with inference latency and accuracy as critical considerations. The tradeoffs around the precision of digital-to-analog converters are not simple, and careful design decisions are required to maximize power and area efficiency.
A suitable set of tools is needed to explore such a design space. Software simulation, rather than physical prototyping, may be more appropriate for most design exploration, allowing algorithms like artificial neural networks to be tested quickly on memristor-based hardware architectures.
The field of lithionic memristor research will be approached holistically, encompassing everything from memristive devices to neuromorphic systems and AI/ML applications.
This article is based on Ericsson’s research article ‘How lithionic memristors are set to advance neuromorphic computing and AI’. All credit for this research goes to the researchers on this project.
Shruti is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Kanpur, India.