Abstract: In modern computer architectures, the performance of many memory-bound workloads (e.g., machine learning, graph processing, databases) is limited by the data movement bottleneck that emerges when transferring large amounts of data between the main memory and the central processing unit (CPU). Processing-in-memory (PiM) is an emerging computing paradigm that aims to alleviate this data movement bottleneck by performing computation close to or within the memory units where the data resides. One prevalent workload whose performance is bound by the data movement bottleneck is the training and inference of artificial neural networks. In this work, we analyze the potential of modern general-purpose PiM architectures to accelerate neural networks. To this end, we selected the UPMEM PiM system, the first commercially available general-purpose PiM architecture. We compared an implementation of multilayer perceptrons (MLPs) on PiM with a sequential baseline running on an Intel Xeon CPU. By exploiting the size of the available PiM memory, the UPMEM implementation achieves up to $259\times$ better performance than the CPU for inference with large batch sizes. Additionally, two smaller MLPs were implemented using UPMEM's working SRAM (WRAM), a scratchpad memory, to evaluate their performance against a low-power Nvidia Jetson graphics processing unit (GPU), providing further insight into the efficiency of UPMEM's PiM for neural network inference. Results show that using WRAM achieves kernel execution times for MLP inference of under $3$ ms, within the same order of magnitude as the low-power GPU.
Abstract: Large Language Models (LLMs) have become essential in a variety of applications due to their advanced language understanding and generation capabilities. However, their computational and memory requirements pose significant challenges to traditional hardware architectures. Processing-in-Memory (PIM), which integrates computational units directly into memory chips, offers several advantages for LLM inference, including reduced data transfer bottlenecks and improved power efficiency. This paper introduces PIM-AI, a novel DDR5/LPDDR5 PIM architecture designed for LLM inference that requires no modifications to the memory controller or the DDR/LPDDR memory PHY. We developed a simulator to evaluate the performance of PIM-AI in various scenarios and demonstrate its significant advantages over conventional architectures. In cloud-based scenarios, PIM-AI reduces the 3-year total cost of ownership (TCO) per query-per-second by up to 6.94x compared to state-of-the-art GPUs, depending on the LLM model used. In mobile scenarios, PIM-AI achieves a 10- to 20-fold reduction in energy per token compared to state-of-the-art mobile SoCs, yielding 25\% to 45\% more queries per second and 6.9x to 13.4x less energy per query, extending battery life and enabling more inferences per charge. These results highlight PIM-AI's potential to revolutionize LLM deployment, making it more efficient, scalable, and sustainable.