From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos
Chen, Yin1; Li, Jia1; Shan, Shiguang2,3; Wang, Meng1; Hong, Richang1
2025-04-01
Journal: IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
ISSN: 1949-3045
Volume: 16, Issue: 2, Pages: 624-638
Abstract: Dynamic facial expression recognition (DFER) in the wild is still hindered by data limitations, e.g., insufficient quantity and diversity of pose, occlusion, and illumination, as well as the inherent ambiguity of facial expressions. In contrast, static facial expression recognition (SFER) currently achieves much higher performance and can benefit from more abundant high-quality training data. Moreover, the appearance features and dynamic dependencies of DFER remain largely unexplored. Recognizing the potential of leveraging SFER knowledge for DFER, we introduce a novel Static-to-Dynamic model (S2D) that exploits existing SFER knowledge and the dynamic information implicitly encoded in extracted facial landmark-aware features, thereby significantly improving DFER performance. First, we build and train an image model for SFER that incorporates only a standard Vision Transformer (ViT) and Multi-View Complementary Prompters (MCPs). Then, we obtain our video model (i.e., S2D) for DFER by inserting Temporal-Modeling Adapters (TMAs) into the image model. The MCPs enhance facial expression features with landmark-aware features inferred by an off-the-shelf facial landmark detector, and the TMAs capture and model the relationships among dynamic changes in facial expressions, effectively extending the pre-trained image model to videos. Notably, the MCPs and TMAs add only a small fraction of trainable parameters (less than +10%) to the original image model. Moreover, we present a novel self-distillation loss based on Emotion-Anchors (i.e., reference samples for each emotion category) to reduce the detrimental influence of ambiguous emotion labels, further enhancing our S2D. Experiments on popular SFER and DFER datasets show that we achieve a new state of the art.
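The adapter-based extension described in the abstract (a pre-trained image backbone gains temporal modeling through small inserted trainable modules) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the module name `TemporalModelingAdapter`, the bottleneck width, and the attention layout are all hypothetical.

```python
# Hypothetical sketch of a temporal adapter: a small bottleneck module that
# attends over the frame axis of per-frame ViT token features, added as a
# residual so the frozen image features are only refined, not replaced.
import torch
import torch.nn as nn


class TemporalModelingAdapter(nn.Module):
    """Bottleneck adapter that models dependencies across the time axis."""

    def __init__(self, dim: int, bottleneck: int = 64, num_heads: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # project to a cheap bottleneck
        self.attn = nn.MultiheadAttention(bottleneck, num_heads, batch_first=True)
        self.up = nn.Linear(bottleneck, dim)     # project back to backbone width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim) -- attend over frames for each token
        b, t, n, d = x.shape
        h = self.down(x)                                  # (b, t, n, bottleneck)
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, -1)   # (b*n, t, bottleneck)
        h, _ = self.attn(h, h, h)                         # temporal self-attention
        h = self.up(h).reshape(b, n, t, -1).permute(0, 2, 1, 3)
        return x + h  # residual: adapter output refines the frozen features


# Usage: adapt a batch of per-frame ViT features (shapes are illustrative).
dim = 768
adapter = TemporalModelingAdapter(dim)
feats = torch.randn(2, 8, 197, dim)  # 2 clips, 8 frames, 197 ViT tokens each
out = adapter(feats)
```

Because only the adapter's down/attention/up weights are trainable, the added parameter count stays small relative to the backbone, which is consistent with the abstract's claim of a modest trainable-parameter overhead.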
Keywords: Adaptation models; Videos; Computational modeling; Feature extraction; Transformers; Task analysis; Face recognition; Dynamic facial expression recognition; emotion ambiguity; model adaptation; transfer learning
DOI: 10.1109/TAFFC.2024.3453443
Indexed By: SCI
Language: English
Funding: National Key Research and Development Program of China [2019YFA0706203]; National Natural Science Foundation of China [62202139]; University Synergy Innovation Program of Anhui Province [GXXT-2022-038]
WOS Research Area: Computer Science
WOS Categories: Computer Science, Artificial Intelligence; Computer Science, Cybernetics
WOS Record Number: WOS:001499580000033
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/42358
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Li, Jia
Affiliations:
1.Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230601, Peoples R China
2.Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing 100190, Peoples R China
3.Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Recommended Citation:
GB/T 7714
Chen, Yin, Li, Jia, Shan, Shiguang, et al. From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos[J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2025, 16(2): 624-638.
APA Chen, Yin, Li, Jia, Shan, Shiguang, Wang, Meng, & Hong, Richang. (2025). From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 16(2), 624-638.
MLA Chen, Yin, et al. "From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos". IEEE TRANSACTIONS ON AFFECTIVE COMPUTING 16.2 (2025): 624-638.