Institute of Computing Technology, Chinese Academy of Sciences IR
Contrastive Learning of Person-Independent Representations for Facial Action Unit Detection
Li, Yong (1); Shan, Shiguang (2,3,4)
2023
Journal | IEEE TRANSACTIONS ON IMAGE PROCESSING
ISSN | 1057-7149
Volume | 32
Pages | 3212-3225
Abstract | Facial action unit (AU) detection, which aims to classify the AUs present in a facial image, has long suffered from insufficient AU annotations. In this paper, we aim to mitigate this data-scarcity issue by learning AU representations from a large number of unlabelled facial videos in a contrastive learning paradigm. We formulate the self-supervised AU representation learning signals in two folds: 1) the AU representation should be frame-wisely discriminative within a short video clip; 2) facial frames sampled from different identities but showing analogous facial AUs should have consistent AU representations. To achieve these goals, we propose to contrastively learn the AU representation within a video clip and devise a cross-identity reconstruction mechanism to learn person-independent representations. Specifically, we adopt a margin-based temporal contrastive learning paradigm to perceive the temporal AU coherence and evolution characteristics within a clip that consists of consecutive input facial frames. Moreover, the cross-identity reconstruction mechanism pushes faces from different identities that show analogous AUs close together in the latent embedding space. Experimental results on three public AU datasets demonstrate that the learned AU representation is discriminative for AU detection. Our method outperforms other contrastive learning methods and significantly closes the performance gap between self-supervised and supervised AU detection approaches.
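The margin-based temporal contrastive objective described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a triplet-style formulation in which, within a clip, a temporally adjacent frame acts as the positive and a more distant frame as the negative, and the names `temporal_margin_loss`, `margin`, and `far_offset` are hypothetical.

```python
import torch
import torch.nn.functional as F

def temporal_margin_loss(clip_embeddings: torch.Tensor,
                         margin: float = 0.5,
                         far_offset: int = 4) -> torch.Tensor:
    """Illustrative margin-based temporal contrastive loss (assumed form).

    clip_embeddings: (T, D) frame-wise AU embeddings of one clip, ordered in
    time. For each anchor frame t, the next frame t+1 is treated as the
    temporally coherent (positive) sample and the frame `far_offset` steps
    later as the temporally evolved (negative) sample.
    """
    z = F.normalize(clip_embeddings, dim=-1)
    T = z.size(0)
    losses = []
    for t in range(T - far_offset):
        d_pos = 1.0 - (z[t] * z[t + 1]).sum()           # cosine distance to the near frame
        d_neg = 1.0 - (z[t] * z[t + far_offset]).sum()  # cosine distance to the far frame
        # the near frame should be closer than the far frame by at least `margin`
        losses.append(F.relu(margin + d_pos - d_neg))
    return torch.stack(losses).mean()

# usage: embeddings from a (hypothetical) encoder over an 8-frame clip
clip = torch.randn(8, 128)
loss = temporal_margin_loss(clip)
```

The cross-identity reconstruction component of the paper is not sketched here; only the temporal margin term is shown, under the stated assumptions.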
Keywords | Videos; Training; Image reconstruction; Feature extraction; Faces; Task analysis; Facial action unit detection; contrastive learning; self-supervised learning; person-independent action unit detection
DOI | 10.1109/TIP.2023.3279978
Indexed By | SCI
Language | English
Funding Projects | National Key Research and Development Program of China [2018AAA0102402]; National Natural Science Foundation of China [62102180]; Natural Science Foundation of Jiangsu Province [BK20210329]; Shuangchuang Program of Jiangsu Province [JSSCBS20210210]
WOS Research Area | Computer Science; Engineering
WOS Subject | Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS Accession Number | WOS:001004183400002
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/21214
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers (English)
Corresponding Author | Shan, Shiguang
Affiliations | 1. Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Key Lab Intelligent Percept & Syst High Dimens Inf, Minist Educ, Nanjing 210094, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China; 3. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100049, Peoples R China; 4. Peng Cheng Lab, Shenzhen 518055, Peoples R China
Recommended Citation (GB/T 7714) | Li, Yong, Shan, Shiguang. Contrastive Learning of Person-Independent Representations for Facial Action Unit Detection[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32: 3212-3225.
APA | Li, Yong, & Shan, Shiguang. (2023). Contrastive Learning of Person-Independent Representations for Facial Action Unit Detection. IEEE TRANSACTIONS ON IMAGE PROCESSING, 32, 3212-3225.
MLA | Li, Yong, et al. "Contrastive Learning of Person-Independent Representations for Facial Action Unit Detection". IEEE TRANSACTIONS ON IMAGE PROCESSING 32 (2023): 3212-3225.
Files in This Item | No files associated with this item.