Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training
Liu, Chang1,2,3; Zhang, Xishan1,2; Zhang, Rui1,2; Li, Ling3,4; Zhou, Shiyi2; Huang, Di1,2,3; Li, Zhen2; Du, Zidong1,2; Liu, Shaoli2; Chen, Tianshi2
2022
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
ISSN: 1057-7149
Volume: 31, Pages: 7006-7019
Abstract: Quantization is a promising technique to reduce the computation and storage costs of DNNs. Low-bit ($\leq 8$ bits) precision training remains an open problem due to the difficulty of gradient quantization. In this paper, we find two long-standing misunderstandings of the bias of gradient quantization noise. First, the large bias of gradient quantization noise, instead of the variance, is the key factor of training accuracy loss. Second, the widely used stochastic rounding cannot solve the training crash problem caused by the gradient quantization bias in practice. Moreover, we find that the asymmetric distribution of gradients causes a large bias of gradient quantization noise. Based on our findings, we propose a novel adaptive piecewise quantization method to effectively limit the bias of gradient quantization noise. Accordingly, we propose a new data format, Piecewise Fixed Point (PWF), to represent data after quantization. We apply our method to different applications including image classification, machine translation, optical character recognition, and text classification. We achieve approximately $1.9\sim 3.5\times$ speedup compared with full precision training with an accuracy loss of less than 0.5%. To the best of our knowledge, this is the first work to quantize gradients of all layers to 8 bits in both large-scale CNN and RNN training with negligible accuracy loss.
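The abstract's core claims lend themselves to a quick numerical check. The sketch below is a toy illustration, not the paper's PWF implementation: the log-normal gradient model, the `quantize_two_piece` helper, and the 95th-percentile split are all assumptions made here for demonstration. It quantizes a skewed, long-tailed tensor to 8 bits and measures the mean quantization error: nearest rounding with a single max-abs scale collapses the dense near-zero region to zero and leaves a large bias, stochastic rounding drives the expected bias toward zero, and a simple two-segment quantizer in the spirit of piecewise quantization shrinks the bias by giving the dense region its own scale.

```python
import numpy as np

def quantize_uniform(x, n_bits=8, stochastic=False, rng=None):
    # Single max-abs scale over the whole tensor, as in plain fixed-point
    # quantization. With stochastic=True, each value is rounded up or down
    # at random with probability equal to its fractional part, which makes
    # the per-element rounding error zero-mean in expectation.
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1)
    y = x / scale
    if stochastic:
        lo = np.floor(y)
        y = lo + ((rng or np.random.default_rng()).random(x.shape) < (y - lo))
    else:
        y = np.round(y)
    return y * scale

def quantize_two_piece(x, n_bits=8, split_pct=95):
    # Hypothetical two-segment quantizer: values below the split_pct
    # percentile of |x| and the remaining tail each get their own scale,
    # so the dense region near zero is not starved of quantization levels
    # by a few large outliers.
    t = np.percentile(np.abs(x), split_pct)
    small = np.abs(x) <= t
    out = np.empty_like(x)
    out[small] = quantize_uniform(x[small], n_bits)
    out[~small] = quantize_uniform(x[~small], n_bits)
    return out

# Model gradients as a sharply peaked, long-tailed (asymmetric) distribution.
rng = np.random.default_rng(0)
g = rng.lognormal(mean=-6.0, sigma=2.0, size=200_000)

for name, q in [
    ("nearest, one scale", quantize_uniform(g, 8)),
    ("stochastic, one scale", quantize_uniform(g, 8, stochastic=True, rng=rng)),
    ("nearest, two pieces", quantize_two_piece(g, 8)),
]:
    e = q - g
    print(f"{name:>22}: bias = {e.mean():+.2e}, std = {e.std():.2e}")
```

Note that this toy only demonstrates the expectation argument for stochastic rounding; the paper's finding is that in practice stochastic rounding still does not prevent training crashes caused by gradient quantization bias.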
Keywords: Neural network acceleration; low precision training; quantization
DOI: 10.1109/TIP.2022.3216776
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2017YFA0700902]; National Key Research and Development Program of China [2017YFA0700903]; NSF of China [61925208]; NSF of China [61906179]; NSF of China [62102399]; NSF of China [61732020]; NSF of China [U19B2019]; Strategic Priority Research Program of Chinese Academy of Science [XDB32050200]; Beijing Academy of Artificial Intelligence (BAAI); Beijing Nova Program of Science and Technology [Z191100001119093]; CAS Project for Young Scientists in Basic Research [YSBR-029]; Youth Innovation Promotion Association
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:000888975000003
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation Statistics
Times Cited (WOS): 5
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/20283
Collection: Journal Articles of the Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Li, Ling
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, SKL Comp Architecture, Beijing 100190, Peoples R China
2. Cambricon Technol, Beijing 100191, Peoples R China
3. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
4. Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Liu, Chang, Zhang, Xishan, Zhang, Rui, et al. Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31: 7006-7019.
APA: Liu, Chang., Zhang, Xishan., Zhang, Rui., Li, Ling., Zhou, Shiyi., ... & Chen, Tianshi. (2022). Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training. IEEE TRANSACTIONS ON IMAGE PROCESSING, 31, 7006-7019.
MLA: Liu, Chang, et al. "Rethinking the Importance of Quantization Bias, Toward Full Low-Bit Training". IEEE TRANSACTIONS ON IMAGE PROCESSING 31 (2022): 7006-7019.
Files in This Item:
There are no files associated with this item.