Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory
Yan, Liang1,2; Lu, Xiaoyang3; Chen, Xiaoming1,2; Han, Yinhe1,2; Sun, Xian-He3
2025
Journal: IEEE COMPUTER ARCHITECTURE LETTERS
ISSN: 1556-6056
Volume: 24, Issue: 1, Pages: 121-124
Abstract: Integrating processing-in-memory (PIM) with GPUs accelerates large language model (LLM) inference, but existing GPU-PIM systems face several challenges. While GPUs excel at large general matrix-matrix multiplications (GEMM), they struggle with small-scale operations that are better suited to PIM, yet current PIM designs cannot handle those operations independently. Additionally, the computational demands of activation operations exceed the capabilities of current PIM technologies, leading to excessive data movement between the GPU and memory. PIM's potential for general matrix-vector multiplications (GEMV) is also limited by insufficient support for fine-grained parallelism. To address these issues, we propose Pyramid, a novel GPU-PIM system that optimizes PIM for LLM inference by strategically allocating cross-level computational resources within PIM to meet diverse needs and by leveraging the strengths of both technologies. Evaluation results demonstrate that Pyramid outperforms existing systems such as NeuPIM, AiM, and AttAcc by factors of 2.31x, 1.91x, and 1.72x, respectively.
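To make the GEMM/GEMV workload split mentioned in the abstract concrete, the following minimal Python/NumPy sketch (not taken from the paper; the shapes are illustrative assumptions) shows why the prefill phase of LLM inference produces large matrix-matrix products that favor the GPU, while per-token decoding degenerates into matrix-vector products, the memory-bound kernels that PIM designs target.

```python
# Illustrative sketch, not the paper's configuration: prefill is GEMM-shaped,
# decode is GEMV-shaped. Hypothetical hidden size and context length below.
import numpy as np

d_model, seq_len = 1024, 512
W = np.random.randn(d_model, d_model)     # one projection weight matrix

# Prefill: all prompt tokens processed at once -> large GEMM, GPU-friendly.
prompt_acts = np.random.randn(seq_len, d_model)
prefill_out = prompt_acts @ W             # (512, 1024) x (1024, 1024) GEMM

# Decode: one new token per step -> GEMV, low arithmetic intensity,
# memory-bandwidth-bound and thus a candidate for processing-in-memory.
token_act = np.random.randn(d_model)
decode_out = token_act @ W                # (1024,) x (1024, 1024) GEMV

print(prefill_out.shape, decode_out.shape)
```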
Keywords: Graphics processing units; Decoding; Computational modeling; Parallel processing; Systolic arrays; Computer architecture; Table lookup; Random access memory; Interpolation; Transformers; Large language models; Processing-in-memory
DOI: 10.1109/LCA.2025.3559738
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [62488101]; National Natural Science Foundation of China [62495104]; National Natural Science Foundation of China [62025404]; Youth Innovation Promotion Association CAS
WOS Research Area: Computer Science
WOS Category: Computer Science, Hardware & Architecture
WOS Accession Number: WOS:001480433800002
Publisher: IEEE COMPUTER SOC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/40639
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Chen, Xiaoming
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Intelligent Comp Syst, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China
3. IIT, Dept Comp Sci, Chicago, IL 60616 USA
Recommended Citation:
GB/T 7714: Yan, Liang, Lu, Xiaoyang, Chen, Xiaoming, et al. Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory[J]. IEEE COMPUTER ARCHITECTURE LETTERS, 2025, 24(1): 121-124.
APA: Yan, Liang, Lu, Xiaoyang, Chen, Xiaoming, Han, Yinhe, & Sun, Xian-He. (2025). Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory. IEEE COMPUTER ARCHITECTURE LETTERS, 24(1), 121-124.
MLA: Yan, Liang, et al. "Pyramid: Accelerating LLM Inference With Cross-Level Processing-in-Memory". IEEE COMPUTER ARCHITECTURE LETTERS 24.1 (2025): 121-124.
Files in This Item:
No files associated with this item.