Institute of Computing Technology, Chinese Academy of Sciences Institutional Repository
Title | Downstream-Pretext Domain Knowledge Traceback for Active Learning
Authors | Zhang, Beichen (1); Li, Liang (2); Zha, Zheng-Jun (3); Luo, Jiebo (4); Huang, Qingming (1)
Year | 2024
Journal | IEEE TRANSACTIONS ON MULTIMEDIA
ISSN | 1520-9210 |
Volume | 26
Pages | 10585-10596
Abstract | Active learning (AL) is designed to construct a high-quality labeled dataset by iteratively selecting the most informative samples. Such sampling heavily relies on data representation, and pre-training has recently become popular for robust feature learning. However, as pre-training utilizes low-level pretext tasks that lack annotation, directly using pre-trained representation in AL is inadequate for determining the sampling score. To address this problem, we propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance for selecting diverse and instructive samples near the decision boundary. DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator. The diversity indicator constructs two feature spaces based on the pre-training pretext model and the downstream knowledge from annotation, by which it locates the neighbors of unlabeled data from the downstream space in the pretext space to explore the interaction of samples. With this mechanism, DOKT unifies the data relations of low-level and high-level representations to estimate traceback diversity. Next, in the uncertainty estimator, domain mixing is designed to apply perceptual perturbation to unlabeled samples with similar visual patches in the pretext space. Then the divergence of the perturbed samples is measured to estimate the domain uncertainty. As a result, DOKT selects the most diverse and important samples based on these two modules. The experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods and generalizes well to various application scenarios such as semantic segmentation and image captioning.
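The abstract describes the two scoring modules only conceptually, so the sketch below is a reconstruction from the text alone, not the authors' code. Every name here is an illustrative assumption: `traceback_diversity`, `domain_uncertainty`, the neighbor count `k`, the mixing ratios, and the stand-in linear probe `W` are all hypothetical. Diversity is read as the pretext-space spread of a sample's downstream-space neighbors; uncertainty as the prediction divergence after mixing a sample with its pretext-space neighbors.

```python
# Conceptual sketch of the DOKT scoring scheme as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def traceback_diversity(down_feats, pre_feats, unlabeled, labeled, k=5):
    """Locate each unlabeled sample's nearest labeled neighbors in the
    downstream space, then measure how far those same neighbors sit from
    it in the pretext space: a large traceback distance suggests the
    low- and high-level views disagree, i.e. the sample adds diversity."""
    scores = np.empty(len(unlabeled))
    for i, u in enumerate(unlabeled):
        d_down = np.linalg.norm(down_feats[labeled] - down_feats[u], axis=1)
        nn = np.asarray(labeled)[np.argsort(d_down)[:k]]  # downstream-space neighbors
        scores[i] = np.linalg.norm(pre_feats[nn] - pre_feats[u], axis=1).mean()
    return scores

def domain_uncertainty(pre_feats, unlabeled, head_W, k=5, alphas=(0.2, 0.4, 0.6)):
    """Perturb each unlabeled sample by mixing it with visually similar
    samples (pretext-space neighbors), then score the divergence of the
    perturbed predictions (here: mean KL to the unperturbed prediction)."""
    scores = np.empty(len(unlabeled))
    for i, u in enumerate(unlabeled):
        d = np.linalg.norm(pre_feats - pre_feats[u], axis=1)
        nn = np.argsort(d)[1:k + 1]          # similar samples, excluding u itself
        p0 = softmax(pre_feats[u] @ head_W)  # unperturbed prediction
        kls = []
        for a in alphas:
            mixed = (1 - a) * pre_feats[u] + a * pre_feats[nn]
            p = softmax(mixed @ head_W)
            kls.append((p0 * np.log(p0 / p)).sum(axis=1).mean())
        scores[i] = np.mean(kls)
    return scores

# Toy usage: rank 90 unlabeled samples against 10 labeled ones.
down = rng.normal(size=(100, 32)); pre = rng.normal(size=(100, 64))
labeled, unlabeled = list(range(10)), list(range(10, 100))
W = rng.normal(size=(64, 5))  # stand-in linear probe, not the paper's classifier
score = traceback_diversity(down, pre, unlabeled, labeled) \
      + domain_uncertainty(pre, unlabeled, W)  # scales differ; a real ranker would normalize
query = [unlabeled[j] for j in np.argsort(-score)[:8]]  # samples to annotate next
```

Note that the two scores live on different scales, so the plain sum above is only for illustration; how the paper actually combines the diversity and uncertainty terms is not specified in the abstract.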
Keywords | Task analysis; Uncertainty; Annotations; Data models; Training; Visualization; Transformers; Active learning; pretext training; domain knowledge; self-supervised learning
DOI | 10.1109/TMM.2024.3391897 |
Indexed By | SCI
Language | English
Funding Project | National Natural Science Foundation of China [62322211]; National Natural Science Foundation of China [61931008]; National Natural Science Foundation of China [62236008]; National Natural Science Foundation of China [62336008]; National Natural Science Foundation of China [U21B2038]; National Natural Science Foundation of China [62225207]; Key R&D Plan Project of Zhejiang Province [2024C01023]
WOS Research Area | Computer Science; Telecommunications
WOS Categories | Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS Accession Number | WOS:001358607300007
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/41111
Collection | Institute of Computing Technology, CAS: Journal Articles (English)
Corresponding Author | Li, Liang
Affiliations | 1. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 101408, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China; 3. Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China; 4. Univ Rochester, Dept Comp Sci, Rochester, NY 14627 USA
Recommended Citation (GB/T 7714) | Zhang, Beichen, Li, Liang, Zha, Zheng-Jun, et al. Downstream-Pretext Domain Knowledge Traceback for Active Learning[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 10585-10596.
APA | Zhang, Beichen, Li, Liang, Zha, Zheng-Jun, Luo, Jiebo, & Huang, Qingming. (2024). Downstream-Pretext Domain Knowledge Traceback for Active Learning. IEEE TRANSACTIONS ON MULTIMEDIA, 26, 10585-10596.
MLA | Zhang, Beichen, et al. "Downstream-Pretext Domain Knowledge Traceback for Active Learning." IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 10585-10596.
Files in This Item | No files associated with this item.