Downstream-Pretext Domain Knowledge Traceback for Active Learning
Zhang, Beichen1; Li, Liang2; Zha, Zheng-Jun3; Luo, Jiebo4; Huang, Qingming1
Year: 2024
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
ISSN: 1520-9210
Volume: 26, Pages: 10585-10596
Abstract: Active learning (AL) is designed to construct a high-quality labeled dataset by iteratively selecting the most informative samples. Such sampling relies heavily on the data representation, and pre-training has recently become popular for learning robust features. However, because pre-training uses low-level pretext tasks without annotations, directly using pre-trained representations in AL is inadequate for determining the sampling score. To address this problem, we propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions between downstream knowledge and pre-training guidance to select diverse and instructive samples near the decision boundary. DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator. The diversity indicator constructs two feature spaces, one from the pre-trained pretext model and one from the downstream knowledge learned from annotation; it then locates the downstream-space neighbors of each unlabeled sample in the pretext space to explore the interaction of samples. With this mechanism, DOKT unifies the data relations of low-level and high-level representations to estimate traceback diversity. In the uncertainty estimator, domain mixing applies perceptual perturbation to unlabeled samples that share similar visual patches in the pretext space, and the divergence of the perturbed samples is measured to estimate the domain uncertainty. Based on these two modules, DOKT selects the most diverse and informative samples. Experiments on ten datasets show that our model outperforms state-of-the-art methods and generalizes well to various application scenarios such as semantic segmentation and image captioning.
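The abstract describes the two scoring modules only in prose. As a minimal sketch, assuming NumPy feature matrices, a hypothetical predict_fn classifier, and simple pixel-level mixing (none of which come from this record; the names, shapes, k, and alpha are illustrative assumptions), the two scores could look roughly like this:

# Hypothetical sketch, not the authors' code: illustrates the traceback-
# diversity and domain-uncertainty ideas from the abstract.
import numpy as np

def traceback_diversity(unl_ds, unl_pt, lab_ds, lab_pt, k=5):
    """Find each unlabeled sample's k nearest labeled neighbors in the
    downstream feature space, then trace those neighbors back into the
    pretext feature space and use the average distance there as a score."""
    scores = np.empty(len(unl_ds))
    for i in range(len(unl_ds)):
        # Neighbors by downstream (annotation-driven, high-level) features.
        d_ds = np.linalg.norm(lab_ds - unl_ds[i], axis=1)
        nn = np.argsort(d_ds)[:k]
        # Distance of those same neighbors in the pretext (low-level) space:
        # a large value means the sample is far from labeled data in both
        # representations, i.e. diverse.
        scores[i] = np.linalg.norm(lab_pt[nn] - unl_pt[i], axis=1).mean()
    return scores

def domain_uncertainty(images, unl_pt, predict_fn, k=5, alpha=0.3):
    """Perturb each unlabeled image by mixing it with visually similar
    samples (its neighbors in the pretext space) and score the resulting
    prediction divergence; unstable predictions imply high uncertainty."""
    scores = np.empty(len(images))
    for i in range(len(images)):
        d_pt = np.linalg.norm(unl_pt - unl_pt[i], axis=1)
        nn = np.argsort(d_pt)[1:k + 1]      # skip the sample itself
        p0 = predict_fn(images[i])          # class-probability vector
        kl = []
        for j in nn:
            # Simple pixel-level mixing; the paper's perceptual perturbation
            # scheme is not specified here, so this is an assumption.
            mixed = (1 - alpha) * images[i] + alpha * images[j]
            p = predict_fn(mixed)
            kl.append(np.sum(p0 * np.log((p0 + 1e-8) / (p + 1e-8))))
        scores[i] = np.mean(kl)
    return scores

# An AL round would then rank unlabeled samples by a combination of the two
# scores and send the top-B candidates to annotation.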
Keywords: Task analysis; Uncertainty; Annotations; Data models; Training; Visualization; Transformers; Active learning; pretext training; domain knowledge; self-supervised learning
DOI: 10.1109/TMM.2024.3391897
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [62322211, 61931008, 62236008, 62336008, U21B2038, 62225207]; Key R&D Plan Project of Zhejiang Province [2024C01023]
WOS Research Areas: Computer Science; Telecommunications
WOS Categories: Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS ID: WOS:001358607300007
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/41111
Collection: Journal Papers (English), Institute of Computing Technology, Chinese Academy of Sciences
Corresponding Author: Li, Liang
Affiliations:
1. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China
2. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China
3. Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
4. Univ Rochester, Dept Comp Sci, Rochester, NY 14627 USA
Recommended Citation:
GB/T 7714: Zhang, Beichen, Li, Liang, Zha, Zheng-Jun, et al. Downstream-Pretext Domain Knowledge Traceback for Active Learning[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 10585-10596.
APA: Zhang, Beichen, Li, Liang, Zha, Zheng-Jun, Luo, Jiebo, & Huang, Qingming. (2024). Downstream-Pretext Domain Knowledge Traceback for Active Learning. IEEE TRANSACTIONS ON MULTIMEDIA, 26, 10585-10596.
MLA: Zhang, Beichen, et al. "Downstream-Pretext Domain Knowledge Traceback for Active Learning". IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 10585-10596.
Files in This Item:
No files are associated with this item.