**1. Introduction**

The explosive growth of data and the increasing scale and complexity of deep learning models have made traditional von Neumann architectures inefficient, in both speed and energy, for AI processing. As a result, novel computer architectures and hardware have attracted growing attention for their potential to break the "memory wall" of the von Neumann architecture. Neuromorphic computing, a paradigm inspired by the functioning of the human brain that now commonly refers to the hardware implementation of artificial neural networks, has been extensively explored and has demonstrated great potential to revolutionize AI processing. By leveraging emerging nano-devices such as resistive random-access memory (ReRAM), significant progress has been made in this area: artificial synapses [1–12], artificial neurons [13–19], and the corresponding neuromorphic circuits and hardware systems have all been successfully investigated [11, 20–29].

Two-terminal ReRAM devices can be organized into high-density crossbar arrays, which naturally enable highly efficient vector-matrix computation [30]. Nevertheless, challenges remain in applying the ReRAM crossbar to neural network implementation: inherent sneak-path leakage, signal noise, and the limited number of conductance states can all degrade computing accuracy. To address these challenges, various hardware-software co-design methodologies have been proposed [31–41]. For example, researchers have explored specialized circuits and algorithms to tolerate sneak-path currents and guarantee computing accuracy [11, 25–29]. Hardware-adaptive neural network pruning has also been investigated to improve computational efficiency while reducing hardware design and energy costs. In addition, several works have focused on improving the reliability and security of ReRAM-based neuromorphic computing systems.
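To make the crossbar computation and its non-idealities concrete, the following is a minimal numerical sketch, not a device model from the works cited above: each cross-point cell stores a conductance, a voltage vector applied to the rows produces column currents by Ohm's and Kirchhoff's laws, and the quantization step and noise level used here are illustrative assumptions standing in for the limited conductance states and signal noise discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_conductances(weights, g_min=1e-6, g_max=1e-4, levels=16):
    """Map abstract weights in [0, 1] onto a limited number of discrete
    conductance states; the rounding models the finite number of
    programmable levels of a ReRAM cell (device values are assumed)."""
    g_ideal = g_min + weights * (g_max - g_min)
    step = (g_max - g_min) / (levels - 1)
    return g_min + np.round((g_ideal - g_min) / step) * step

def crossbar_vmm(voltages, conductances, noise_sigma=0.0):
    """Analog vector-matrix multiplication: the current collected on
    column j is sum_i V[i] * G[i, j], with optional additive read
    noise on the output currents."""
    currents = voltages @ conductances
    if noise_sigma > 0:
        currents += rng.normal(0.0, noise_sigma, size=currents.shape)
    return currents

weights = rng.random((4, 3))         # 4x3 weight matrix in [0, 1]
G = program_conductances(weights)    # quantized conductance map
V = np.array([0.1, 0.2, 0.0, 0.1])   # row read voltages (volts)

ideal = V @ (1e-6 + weights * (1e-4 - 1e-6))   # un-quantized reference
noisy = crossbar_vmm(V, G, noise_sigma=1e-7)
print(np.abs(noisy - ideal).max())   # error from quantization + noise
```

With 16 conductance levels, the quantization error per cell is bounded by half a level step, so the output-current error grows with the applied voltages; this is one way the limited conductance states degrade computing accuracy in practice.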

ReRAM devices, along with the associated neuromorphic designs, have inspired the development of ReRAM-based in-memory processing accelerators. Early accelerators, including PRIME [42], ISAAC [43], and PipeLayer [44], were designed to accelerate the training and inference of convolutional neural networks (CNNs). Building on these foundational explorations, researchers have designed accelerators for various other neural networks and applications, such as GANs [45] and RNNs [46, 47]. These accelerators demonstrated improved speed and energy efficiency compared with traditional computing platforms such as CPUs and GPUs. As ReRAM technology continues to advance, ReRAM-based neuromorphic engines are being applied in ever broader domains [48–61].

In this chapter, we summarize recent research on ReRAM-based neuromorphic computing, covering the areas outlined above: hardware implementation, hardware-software co-design, and accelerator architecture.
