**2. Research background**

This work aims to develop a novel self-learning paradigm by emulating the associative memory learning of animals. The study is therefore built upon reverse engineering of brain function: specifically, large-scale computational models of associative memory learning are implemented on a neuromorphic system. In this section, we first introduce the state of the art in neuromorphic computing and systems. Then, the mechanism of associative memory learning is analyzed at both the macroscopic and microscopic levels.

#### **2.1 Neuromorphic system**

A neuromorphic system emulates nervous systems, such as the human brain, with the aim of implementing Artificial Intelligence [20–25]. Human brains are capable of executing sophisticated tasks at remarkably low energy cost: the average power of a human brain is as small as 20 watts [1]. In addition, unlike the training process required for Artificial Neural Networks (ANNs) using big data, nervous systems can adjust their responses by constantly interacting with their surroundings. This learning process is referred to as associative memory learning [1]. These incredible capabilities of nervous systems are attributed to their parallelism, high degree of connectivity, adjustable network topology, co-location of memory and computation, and spike-based information representation.

Human brains consist of billions of neurons and trillions of synapses forming a densely connected, three-dimensional neural network. Through this extraordinarily complex network, an individual neuron can communicate with more than ten thousand other neurons simultaneously. Within this network, neurons are mainly signal-processing units, and the synapses between neurons serve as connecting and memory units. As computing units, neurons integrate the received spiking signals in their cell bodies and send their own sequences of spikes to other neurons through synapses. The signal strength received by other neurons depends on the connection strength of the synapses. The connection strength among neurons can be adjusted; this feature is known as synaptic plasticity [1, 26, 27]. Specifically, the connection between neurons strengthens if the presynaptic neuron and the postsynaptic neuron fire together. This synaptic connection strength change inspired a learning paradigm known as Hebbian learning [28–31].
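As a minimal sketch of this rule, the Hebbian weight update can be written as an outer product of pre- and postsynaptic spike vectors, so that only co-active neuron pairs are strengthened. The learning rate and binary spike coding here are illustrative assumptions, not parameters drawn from a specific biological model.

```python
import numpy as np

def hebbian_update(w, pre_spikes, post_spikes, lr=0.01):
    """Strengthen synapses whose pre- and postsynaptic neurons fire together.

    w           : (n_post, n_pre) synaptic weight matrix
    pre_spikes  : (n_pre,)  binary spike vector of presynaptic neurons
    post_spikes : (n_post,) binary spike vector of postsynaptic neurons
    lr          : illustrative learning rate (assumed value)
    """
    # The outer product is nonzero only where both neurons fired this step
    return w + lr * np.outer(post_spikes, pre_spikes)

w = np.zeros((2, 3))
pre = np.array([1, 0, 1])   # presynaptic neurons 0 and 2 fire
post = np.array([1, 0])     # postsynaptic neuron 0 fires
w = hebbian_update(w, pre, post)
# Only the synapses (post 0, pre 0) and (post 0, pre 2) are strengthened
```

Repeated co-activation accumulates weight on exactly the pairs that "fire together," which is the essence of the Hebbian rule described above.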

In addition, the computational units (neurons) and the memory units (synapses) are located in close proximity. This structure eliminates one of the biggest inefficiencies of the von Neumann architecture, which separates computing units and memory at different locations. That physical separation requires data to be transferred constantly back and forth between memory and central processing units (CPUs). Furthermore, neuromorphic systems use sparse, event-based computation, meaning that only a small percentage of the available computing resources are active for a given task, and they are activated and consume power only as needed in response to present events. Neuromorphic computing attempts to exploit these useful properties by modeling the architecture, neuron and synaptic cells, and the learning mechanisms observed in the brain, enabling a new era of computers and AI [32].

Neuromorphic systems utilize specialized neuromorphic chips with artificial neurons. These chips are generally used to operate spiking neural networks (SNNs), which encode information with sequences of spikes, just like nervous systems. In an SNN, neurons communicate with each other via discrete "spike" signals. There are various types of neuromorphic chips, such as Intel's Loihi [33, 34]. Unlike traditional GPUs and CPUs built upon the von Neumann architecture, Loihi chips are specifically designed for neuromorphic computing and asynchronous SNNs. To date, two generations of Loihi chips have been released. The first generation was revealed in 2017 [33, 34]. Loihi-1 chips consist of 130,000 electronic neurons and 130 million synapses across 128 neuromorphic cores. Intel's advanced 14 nm process keeps the area of the Loihi-1 chip as small as 60 mm<sup>2</sup>. Loihi-1 chips implement digital leaky integrate-and-fire neurons on the 128 cores; within each core, communication among neurons is organized in a mesh configuration. The synapses in Loihi-1 chips are fully configurable and further support weight-sharing and compression features. The plasticity of synapses can be manipulated with various biologically plausible learning rules, such as Hebbian rules, STDP, and reward-modulated rules [33, 34]. The firing behavior of neurons in Loihi chips works as follows: when the received spikes accumulate to a threshold value within a certain time window, the neuron fires its own spikes to the neurons it connects to.
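This threshold-and-fire behavior can be sketched as a minimal discrete-time leaky integrate-and-fire neuron. The decay factor, threshold, and reset-to-zero rule below are illustrative assumptions, not Loihi's actual parameters or microcode.

```python
def lif_step(v, input_current, decay=0.9, threshold=1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron.

    v : membrane potential carried over from the previous step
    Returns (new_potential, spiked).
    """
    v = decay * v + input_current  # leak the old potential, integrate new input
    if v >= threshold:             # accumulated input crossed the threshold
        return 0.0, True           # emit a spike and reset the potential
    return v, False

v, spikes = 0.0, []
for i_t in [0.4, 0.4, 0.4, 0.0]:   # constant sub-threshold drive, then silence
    v, fired = lif_step(v, i_t)
    spikes.append(fired)
# spikes == [False, False, True, False]: three inputs accumulate to a spike
```

The example shows the key property of the neuron model: no single input crosses the threshold, but inputs arriving close together in time accumulate and trigger a spike, while the leak prevents arbitrarily old inputs from contributing.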

Loihi-1 chips are offered with several neuromorphic platforms providing distinct interfaces for integrating the Loihi-1 chip with other computer systems or Field-Programmable Gate Array (FPGA) devices. Kapoho Bay includes 1–2 Loihi chips with a USB interface. Nahuku is a 32-chip Loihi board with a standard FPGA Mezzanine Card (FMC) connector; the FMC connector allows the Nahuku system to communicate with the Arria FPGA development board. Pohoiki Springs is a large-scale Loihi system with 100 million neurons, deployed as a server for remote access. The second generation of Loihi chips, namely Loihi-2, was introduced in late 2021 [35]. Loihi-2 is fabricated in the Intel 4 process, previously referred to as 7 nm technology. Powered by this advanced technology, the area of Loihi-2 has been reduced to 31 mm<sup>2</sup> from the 60 mm<sup>2</sup> of the first-generation chips. Unlike the rigid neuron models of the first generation, Loihi-2 realizes fully programmable neuron models: the specific behavior of the neurons can be programmed with microcode instructions, which support basic bitwise and math operations for specifying custom neuron models. The Loihi-2 chip is purpose-built for neuromorphic computing and edge devices with parallel computation, achieving high computational and energy efficiency. The comparison between the two generations of Loihi chips is summarized in **Table 1**.

*Implementation of Associative Memory Learning in Mobile Robots Using Neuromorphic… DOI: http://dx.doi.org/10.5772/intechopen.110364*


#### **Table 1.**

*Introduction to Loihi and Loihi 2 chips.*

#### **2.2 Associative memory learning**

Animals are capable of memorizing different events if the events occur at the same time or with a small time lag; this capability is referred to as associative memory [1]. Associative memory learning was first studied by Ivan Pavlov in the 1890s while he was studying the salivation reflex in dogs [1]. In Pavlov's experiments, the dogs originally had a salivation reflex to the presence of food but not to the sound of a whistle. However, after these two signals were presented together several times, the dogs salivated even when they only heard the whistle with no food provided. This means the dogs came to memorize the sound of the whistle as a sign of food [1, 6, 36] through a learning/memorizing process. From a series of experiments, Pavlov concluded that dogs can associate two originally irrelevant signals through a training process, later referred to as associative memory learning. In general, two types of stimuli exist in associative memory learning: unconditional stimuli (US) and conditional stimuli (CS). An unconditional stimulus evokes a response with no training required. In contrast, conditional stimuli demand an associative learning process to acquire corresponding reactions. For instance, in Pavlov's experiments, the presence of food was the unconditional stimulus, and the sound of the whistle was the conditional stimulus. Beyond dogs, further studies have demonstrated that associative memory learning is a self-learning paradigm of a large variety of animals, such as rats, bats, and sea slugs [1].

Studies in neuroscience show that signal pathway modification and synaptic plasticity are closely related to associative memory learning [1, 22]. In a nervous system, the shapes of the spiking signals are almost identical, whether the signals come from the sensation of light or from hearing. Thus, neuroscientists hypothesize that brains distinguish these signals by the pathways along which they travel rather than by their shapes. This hypothesis is much more straightforward in invertebrates, which have simple nervous systems. **Figure 1** illustrates part of the nervous system of Aplysia, which has two signal pathways: from the siphon to the gill and from the tail to the gill.

With these two signal pathways, Aplysia can accomplish a simple version of associative memory learning by memorizing the touch on the tail and the stimulus from the siphon. When the tail of an Aplysia is touched, its gill shrinks, demonstrating an unconditional signal pathway. In contrast, the gill does not shrink when the siphon is touched, exhibiting a conditional signal pathway. By touching the tail and stimulating the siphon at the same time several times, the gill motor neuron becomes more responsive to a touch on the siphon alone. At the cellular level, the concurrent stimuli on the siphon and tail lead to an overlap of spiking signals when the stimuli are

**Figure 1.** *Illustration of associative memory learning of Aplysia.*

applied at the same time, as shown in **Figure 1**. As a result, the synaptic connection among the neurons from the siphon to the gill becomes stronger than in the original state; in other words, the signal pathway from the siphon to the gill changes from blocked to unimpeded. These experiments on Aplysia demonstrate two critical factors for associative memory learning: (1) signal pathway modification and (2) synaptic plasticity.
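These two factors can be combined in a toy simulation of the Aplysia experiment: a conditional (siphon) pathway whose weight starts below the gill motor neuron's firing threshold is strengthened by a Hebbian-style update whenever the conditional and unconditional stimuli coincide. All weights, the threshold, and the learning increment are illustrative assumptions, not measured values.

```python
def gill_response(siphon, tail, w_siphon, w_tail=1.5, threshold=1.0):
    """Return True if the gill motor neuron fires for the given binary stimuli."""
    return siphon * w_siphon + tail * w_tail >= threshold

w_siphon = 0.2  # conditional pathway starts too weak to drive the gill alone
assert not gill_response(siphon=1, tail=0, w_siphon=w_siphon)  # pathway blocked

# Pairing phase: touch tail and siphon together several times; coincident
# pre- and postsynaptic activity strengthens the siphon-to-gill synapse
for _ in range(5):
    if gill_response(siphon=1, tail=1, w_siphon=w_siphon):  # postsynaptic neuron fired
        w_siphon += 0.2  # Hebbian-style strengthening of the active synapse

# After pairing, the siphon stimulus alone drives the gill: the pathway is unblocked
assert gill_response(siphon=1, tail=0, w_siphon=w_siphon)
```

The unconditional (tail) pathway guarantees that the gill neuron fires during pairing, so each paired trial satisfies the "fire together" condition and the conditional pathway's weight grows until it crosses the threshold on its own.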

For more complicated animals, such as rats, sensation signals are processed not in individual neurons but in groups of neurons referred to as neural assemblies [29, 37–39]. For example, fear conditioning experiments in rats involve two types of stimuli: an electric shock on the foot and a neutral sound. These two types of signals are processed in different neural regions: the auditory thalamus and the somatosensory thalamus. The experimental goal is to make the rats associate the neutral sound with the undesired electric shock by applying the two stimuli at the same time; thus, it is one type of associative memory learning. The studies provide strong experimental evidence that signal pathway modification potentially occurs in the lateral nucleus, because the output signals from the auditory thalamus and the somatosensory thalamus converge there [1]. This suggests that associative memory learning in higher animals is accomplished via the association of two, or several, neural assemblies rather than individual neurons.
