Institute of Computing Technology, Chinese Academy of Sciences Institutional Repository
A Systematic View of Model Leakage Risks in Deep Neural Network Systems
Hu, Xing1; Liang, Ling2; Chen, Xiaobing1; Deng, Lei3; Ji, Yu4,5; Ding, Yufei6; Du, Zidong1; Guo, Qi1; Sherwood, Tim6; Xie, Yuan2
2022-12-01
Published In | IEEE TRANSACTIONS ON COMPUTERS
ISSN | 0018-9340 |
Volume | 71
Issue | 12
Pages | 3254-3267
Abstract | As deep neural networks (DNNs) continue to find applications in ever more domains, the exact nature of the neural network architecture becomes an increasingly sensitive subject, due to either intellectual property protection or risks of adversarial attacks. While prior work has explored aspects of the risk associated with model leakage, exactly which parts of the model are most sensitive and how one infers the full architecture of the DNN when nothing is known about the structure a priori are problems that have been left unexplored. In this paper, we address this gap, first by presenting a schema for reasoning about model leakage holistically, and then by proposing and quantitatively evaluating DeepSniffer, a novel learning-based model extraction framework that uses no prior knowledge of the victim model. DeepSniffer is robust to architectural and system noises introduced by the complex memory hierarchy and diverse run-time system optimizations. Taking GPU platforms as a showcase, DeepSniffer performs model extraction by learning both the architecture-level execution features of kernels and the inter-layer temporal association information introduced by the common practice of DNN design. We demonstrate that DeepSniffer works experimentally in the context of an off-the-shelf Nvidia GPU platform running a variety of DNN models and that the extracted models significantly improve attempts at crafting adversarial inputs. The DeepSniffer project has been released at https://github.com/xinghu7788/DeepSniffer. (An illustrative code sketch of this style of trace-to-architecture prediction follows the record below.)
Keywords | Domain-specific architecture; deep learning security; model security
DOI | 10.1109/TC.2022.3148235 |
Indexed By | SCI
Language | English
WOS Research Areas | Computer Science; Engineering
WOS类目 | Computer Science, Hardware & Architecture ; Engineering, Electrical & Electronic |
WOS Accession Number | WOS:000886309300016
Publisher | IEEE COMPUTER SOC
Citation Statistics |
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/20301
Collection | Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author | Hu, Xing
Affiliations | 1. Chinese Acad Sci, State Key Lab Comp Architecture, Inst Comp Technol, Beijing 100190, Peoples R China; 2. Univ Calif Santa Barbara, Dept Elect & Comp Engn, Oakland, CA 94607 USA; 3. Tsinghua Univ, Dept Precis Instrument, Ctr Brain Inspired Comp Res, Beijing 100084, Peoples R China; 4. Tsinghua Univ, Beijing 100084, Peoples R China; 5. Univ Calif Santa Barbara, Santa Barbara, CA 93106 USA; 6. Univ Calif Santa Barbara, Dept Comp Sci, Santa Barbara, CA 93106 USA
Recommended Citation (GB/T 7714) | Hu, Xing, Liang, Ling, Chen, Xiaobing, et al. A Systematic View of Model Leakage Risks in Deep Neural Network Systems[J]. IEEE TRANSACTIONS ON COMPUTERS, 2022, 71(12): 3254-3267.
APA | Hu, Xing.,Liang, Ling.,Chen, Xiaobing.,Deng, Lei.,Ji, Yu.,...&Xie, Yuan.(2022).A Systematic View of Model Leakage Risks in Deep Neural Network Systems.IEEE TRANSACTIONS ON COMPUTERS,71(12),3254-3267. |
MLA | Hu, Xing, et al. "A Systematic View of Model Leakage Risks in Deep Neural Network Systems." IEEE TRANSACTIONS ON COMPUTERS 71.12 (2022): 3254-3267.
Files in This Item | There are no files associated with this item.
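Illustrative sketch | The following is a minimal, hypothetical sketch of the kind of learning-based trace-to-architecture prediction the abstract describes, not the authors' released implementation (see the GitHub link above for that). It maps a sequence of per-kernel side-channel features (latency and memory read/write volume are assumed feature choices here) to a per-kernel layer-type prediction with a small recurrent network; all names, feature choices, and hyperparameters are assumptions for illustration only.

# Hypothetical sketch: sequence model that labels each observed GPU kernel
# with a DNN layer type, given coarse per-kernel features. Assumed names and
# hyperparameters; not DeepSniffer's actual code.
import torch
import torch.nn as nn

LAYER_TYPES = ["conv", "relu", "pool", "fc", "add"]  # hypothetical label set

class KernelSeqClassifier(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64,
                 n_classes: int = len(LAYER_TYPES)):
        super().__init__()
        # A bidirectional LSTM captures inter-kernel temporal context,
        # mirroring the "inter-layer temporal association" idea in the abstract.
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features) -> per-kernel class logits
        out, _ = self.lstm(x)
        return self.head(out)

if __name__ == "__main__":
    model = KernelSeqClassifier()
    # One synthetic trace of 10 kernels, 3 features each
    # (e.g., latency, read volume, write volume).
    trace = torch.randn(1, 10, 3)
    logits = model(trace)                    # shape: (1, 10, len(LAYER_TYPES))
    pred = logits.argmax(dim=-1).squeeze(0)  # predicted layer type per kernel
    print([LAYER_TYPES[i] for i in pred.tolist()])

In practice such a classifier would be trained on traces of known models and then applied to a victim trace; the abstract's point is that the recovered layer sequence substantially aids crafting adversarial inputs.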