BitNet: 1-bit Pre-training for Large Language Models
Wang, Hongyu1,2; Ma, Shuming3; Ma, Lingxiao3; Wang, Lei4; Wang, Wenhui3; Dong, Li3; Huang, Shaohan3; Wang, Huaijie5; Xue, Jilong3; Wang, Ruiping1,2; Wu, Yi5; Wei, Furu3
Year: 2025
Journal: JOURNAL OF MACHINE LEARNING RESEARCH
ISSN: 1532-4435
Volume: 26  Pages: 29
Abstract: The increasing size of large language models (LLMs) has posed challenges for deployment and raised concerns about environmental impact due to high energy consumption. Previous research typically applies quantization after pre-training. While these methods avoid the need for model retraining, they often cause notable accuracy loss at extremely low bit-widths. In this work, we explore the feasibility and scalability of 1-bit pre-training. We introduce BitNet b1 and BitNet b1.58, scalable and stable 1-bit Transformer architectures designed for LLMs. Specifically, we introduce BitLinear as a drop-in replacement for the nn.Linear layer in order to train 1-bit weights from scratch. Experimental results show that BitNet b1 achieves competitive performance compared to state-of-the-art 8-bit quantization methods and FP16 Transformer baselines. With ternary weights, BitNet b1.58 matches the half-precision Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More broadly, BitNet defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. It enables a new computation paradigm and opens the door to designing specific hardware optimized for 1-bit LLMs.
Keywords: Natural Language Processing; Large Language Models; 1-bit Pre-training; Efficiency; Model Architecture
Indexed in: SCI
Language: English
WOS Research Areas: Automation & Control Systems; Computer Science
WOS Categories: Automation & Control Systems; Computer Science, Artificial Intelligence
WOS Accession Number: WOS:001565772300001
Publisher: MICROTOME PUBL
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/41745
Collection: Institute of Computing Technology, Chinese Academy of Sciences, Journal Articles (English)
Corresponding Author: Wang, Hongyu
Affiliations:
1.Chinese Acad Sci, Inst Comp Technol, Key Lab AI Safety Chinese Acad Sci CAS, Beijing, Peoples R China
2.Univ Chinese Acad Sci, Beijing, Peoples R China
3.Microsoft Res, Silverdale, WA USA
4.Univ Chinese Acad Sci, Beijing, Peoples R China
5.Tsinghua Univ, Inst Interdisciplinary Informat Sci, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Wang, Hongyu, Ma, Shuming, Ma, Lingxiao, et al. BitNet: 1-bit Pre-training for Large Language Models[J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2025, 26: 29.
APA: Wang, Hongyu, Ma, Shuming, Ma, Lingxiao, Wang, Lei, Wang, Wenhui, ... & Wei, Furu. (2025). BitNet: 1-bit Pre-training for Large Language Models. JOURNAL OF MACHINE LEARNING RESEARCH, 26, 29.
MLA: Wang, Hongyu, et al. "BitNet: 1-bit Pre-training for Large Language Models." JOURNAL OF MACHINE LEARNING RESEARCH 26 (2025): 29.
Files in This Item:
No files associated with this item.
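
Note: As described in the abstract, BitLinear is introduced as a drop-in replacement for the nn.Linear layer so that low-bit weights can be trained from scratch. The snippet below is a minimal PyTorch sketch of that idea for the ternary (BitNet b1.58-style) case, assuming absmean weight scaling and a straight-through estimator. The class name BitLinearSketch is hypothetical, and this is not the authors' released implementation: the paper's layer additionally normalizes and quantizes activations, which is omitted here.

# Minimal sketch of a BitLinear-style layer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BitLinearSketch(nn.Linear):
    """Drop-in replacement for nn.Linear with ternary weight quantization."""

    def ternarize(self, w: torch.Tensor) -> torch.Tensor:
        # Absmean scaling, then round to {-1, 0, +1} and rescale.
        scale = w.abs().mean().clamp(min=1e-5)
        w_q = (w / scale).round().clamp(-1, 1) * scale
        # Straight-through estimator: forward uses w_q, backward treats the
        # quantizer as identity so the latent FP weights keep receiving gradients.
        return w + (w_q - w).detach()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Activation quantization and normalization from the paper are omitted.
        return F.linear(x, self.ternarize(self.weight), self.bias)


if __name__ == "__main__":
    layer = BitLinearSketch(16, 32)
    x = torch.randn(4, 16)
    y = layer(x)
    y.sum().backward()  # gradients flow to the latent full-precision weights
    print(y.shape, layer.weight.grad.shape)

Replacing every nn.Linear in a Transformer with such a layer is what allows inference-time matrix multiplications to operate on {-1, 0, +1} weights, which is the source of the latency, memory, and energy savings claimed in the abstract.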