Institute of Computing Technology, Chinese Academy of Sciences IR
Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory
Yan, Liang1,2; Lu, Xiaoyang3; Chen, Xiaoming1,2; Han, Yinhe1,2; Sun, Xian-He3
2025
Journal | IEEE COMPUTER ARCHITECTURE LETTERS
ISSN | 1556-6056
Volume | 24, Issue 1, pp. 121-124
Abstract | Integrating processing-in-memory (PIM) with GPUs accelerates large language model (LLM) inference, but existing GPU-PIM systems encounter several challenges. While GPUs excel at large general matrix-matrix multiplications (GEMM), they struggle with small-scale operations better suited to PIM, which currently cannot handle them independently. Additionally, the computational demands of activation operations exceed the capabilities of current PIM technologies, leading to excessive data movement between the GPU and memory. PIM's potential for general matrix-vector multiplications (GEMV) is also limited by insufficient support for fine-grained parallelism. To address these issues, we propose Pyramid, a novel GPU-PIM system that optimizes PIM for LLM inference by strategically allocating cross-level computational resources within PIM to meet diverse needs and by leveraging the strengths of both technologies. Evaluation results demonstrate that Pyramid outperforms existing systems such as NeuPIM, AiM, and AttAcc by 2.31x, 1.91x, and 1.72x, respectively.
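The GEMM/GEMV split the abstract describes can be illustrated with a minimal NumPy sketch (shapes and dimensions are hypothetical, not taken from the paper): batched prefill work is a matrix-matrix product that reuses weights across many tokens and suits a GPU, while single-token decode steps reduce to memory-bound matrix-vector products of the kind PIM targets.

```python
import numpy as np

d = 1024        # hidden dimension (assumed for illustration)
batch = 32      # tokens processed together in prefill (assumed)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d), dtype=np.float32)  # a weight matrix

# Prefill: many tokens at once -> GEMM (matrix-matrix), compute-dense,
# GPU-friendly because W is reused across all `batch` rows.
X = rng.standard_normal((batch, d), dtype=np.float32)
Y_gemm = X @ W                 # (batch, d) @ (d, d) -> (batch, d)

# Decode: one token per step -> GEMV (matrix-vector). Every step streams
# all of W from memory for a single vector, the memory-bound pattern
# that motivates offloading to PIM.
x = rng.standard_normal(d, dtype=np.float32)
y_gemv = W.T @ x               # (d, d) @ (d,) -> (d,)

# Weight reuse differs by a factor of `batch`: the GEMM performs
# batch times the FLOPs of the GEMV over the same weight traffic.
flops_gemm = 2 * batch * d * d
flops_gemv = 2 * d * d
```

This only sketches why the two phases favor different hardware; the cross-level resource allocation inside PIM is the paper's contribution and is not modeled here.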
Keywords | Graphics processing units; Decoding; Computational modeling; Parallel processing; Systolic arrays; Computer architecture; Table lookup; Random access memory; Interpolation; Transformers; Large language models; Processing-in-memory
DOI | 10.1109/LCA.2025.3559738
Indexed By | SCI
Language | English
Funding | National Natural Science Foundation of China [62488101]; National Natural Science Foundation of China [62495104]; National Natural Science Foundation of China [62025404]; Youth Innovation Promotion Association CAS
WOS Research Area | Computer Science
WOS Category | Computer Science, Hardware & Architecture
WOS Accession Number | WOS:001480433800002
Publisher | IEEE COMPUTER SOC
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/40639
Collection | Institute of Computing Technology, CAS: Journal Papers (English)
Corresponding Author | Chen, Xiaoming
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Intelligent Comp Syst, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China; 3. IIT, Dept Comp Sci, Chicago, IL 60616 USA
Recommended Citation (GB/T 7714) | Yan, Liang, Lu, Xiaoyang, Chen, Xiaoming, et al. Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory[J]. IEEE COMPUTER ARCHITECTURE LETTERS, 2025, 24(1): 121-124.
APA | Yan, Liang, Lu, Xiaoyang, Chen, Xiaoming, Han, Yinhe, & Sun, Xian-He. (2025). Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory. IEEE COMPUTER ARCHITECTURE LETTERS, 24(1), 121-124.
MLA | Yan, Liang, et al. "Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory." IEEE COMPUTER ARCHITECTURE LETTERS 24.1 (2025): 121-124.
Files in This Item | No files are associated with this item.
Unless otherwise stated, all content in this repository is protected by copyright, with all rights reserved.