**1. Introduction**

The increasing adoption of Internet of Things (IoT) devices, combined with the diffusion of data-driven computing approaches such as artificial neural networks [1], is promoting innovation in multiple sectors. To sustain the diffusion of these technologies, a constant research effort is directed toward the development of ultralow-power computing devices and architectures that enable the execution of complex computations directly on low-power devices at the edge of the network, reducing the burden on the cloud computing infrastructure. In-memory computing (IMC) architectures based on new nanoelectronic devices are considered a key enabling technology. In these architectures, computations are executed directly inside or in the proximity of the memory array, removing the main source of inefficiency, i.e., the von Neumann bottleneck (VNB) caused by the time- and energy-hungry data transfer between the memory and the processing unit. Among novel nonvolatile memory devices, resistive random access memory (RRAM) technologies are among the most mature and are considered the frontrunners for enabling the widespread adoption of IMC accelerators, thanks to their low cost, back-end-of-line compatibility, high density, high switching speed, and relatively low programming voltages. Still, RRAM technologies present several nonideal effects (e.g., cycle-to-cycle (C2C) and device-to-device variations, and random telegraph noise (RTN)) that can negatively impact the reliability of RRAM-based circuits and limit the number of bits that can be reliably stored in a single device.

Thus, to address these nonideal effects, appropriate design methodologies need to be followed, and IMC accelerators in which RRAM devices are used as binary elements are preferred. Among these IMC accelerators, logic-in-memory (LIM) [2–5] and mixed-signal binarized neural network (BNN) inference accelerators [6–8] are promising solutions for resource-constrained edge devices. LIM accelerators enable the in-memory computation of logic operations. BNNs [9] are neural networks in which neurons' weights and activations are encoded using a single bit, and 2D RRAM arrays can be used to compute vector-matrix multiplication (VMM) operations (i.e., the most frequently executed computation in neural networks) in the analog domain, achieving high energy efficiency and performance.
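The binarized VMM computed by such an array can be sketched in a few lines. The snippet below is an illustrative software model (not the chapter's circuit): with weights and activations encoded as ±1, the signed dot product is mathematically equivalent to an XNOR between the {0, 1} bit encodings followed by a popcount, which is the formulation commonly mapped onto binary memory cells. Function names and the toy vectors are ours, chosen only for the example.

```python
import numpy as np

def bnn_vmm(activations, weights):
    """Signed dot product of binarized inputs with a binarized weight matrix.

    activations: (n,) array of +1/-1
    weights:     (n, m) array of +1/-1
    returns:     (m,) integer pre-activations
    """
    return activations @ weights

def bnn_vmm_xnor(a_bits, w_bits):
    """Same result from the XNOR/popcount formulation on {0, 1} bits."""
    n = a_bits.shape[0]
    xnor = (a_bits[:, None] == w_bits).astype(int)  # 1 where bits agree
    return 2 * xnor.sum(axis=0) - n                 # popcount -> signed sum

# Toy example: 4 inputs, 2 output neurons.
a = np.array([1, -1, 1, 1])
w = np.array([[1, -1],
              [-1, -1],
              [1, 1],
              [-1, 1]])
print(bnn_vmm(a, w))                                            # [2 2]
print(bnn_vmm_xnor((a > 0).astype(int), (w > 0).astype(int)))   # [2 2]
```

The equivalence of the two formulations is what lets an array of binary RRAM cells, each holding one weight bit, produce the signed pre-activation of a BNN neuron from accumulated bitwise agreements.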

In this chapter, analysis and design methodologies enabled by an RRAM physics-based compact model are discussed and applied to LIM and mixed-signal BNN inference accelerators. As a use case, the UNIMORE RRAM physics-based compact model [10, 11], calibrated on an RRAM technology from the literature [12], is used to extract performance vs. reliability trade-offs of different IMC accelerators: (i) a LIM accelerator based on material implication logic, (ii) a mixed-signal BNN accelerator, and (iii) a hybrid accelerator enabling the coexistence of both computing paradigms on the same array. Finally, the performance of the three accelerators on a BNN inference task is compared and benchmarked against a state-of-the-art solution.
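Before diving into the accelerators, the material implication (IMPLY) primitive underlying the LIM design in (i) is worth recalling. The sketch below is a purely logical model, not a description of the chapter's circuit: it shows that IMPLY (q' = NOT p OR q), together with an unconditional reset to logic 0 (FALSE), is functionally complete, here by composing a NAND. The helper names are ours.

```python
def imply(p: bool, q: bool) -> bool:
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

def nand(p: bool, q: bool) -> bool:
    """NAND built from two IMPLY steps and one cleared work device s:
    s <- q IMPLY s gives NOT q; then p IMPLY s gives NOT p OR NOT q."""
    s = False            # work device initialized to logic 0 (reset/FALSE)
    s = imply(q, s)      # s becomes NOT q
    return imply(p, s)   # NOT p OR NOT q  ==  p NAND q

# NAND is universal, so IMPLY + FALSE suffices for any Boolean function.
assert all(nand(p, q) == (not (p and q)) for p in (False, True)
                                         for q in (False, True))
```

In RRAM-based IMPLY logic the result is written back into the device holding q, which is why sequences of such steps can evaluate arbitrary logic directly inside the memory array.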
