Institute of Computing Technology, Chinese Academy of Sciences — Institutional Repository
| Title | From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos |
| Authors | Chen, Yin (1); Li, Jia (1); Shan, Shiguang (2,3); Wang, Meng (1); Hong, Richang (1) |
| Date Issued | 2025-04-01 |
| Journal | IEEE TRANSACTIONS ON AFFECTIVE COMPUTING |
| ISSN | 1949-3045 |
| Volume | 16 |
| Issue | 2 |
| Pages | 624-638 |
| Abstract | Dynamic facial expression recognition (DFER) in the wild is still hindered by data limitations, e.g., insufficient quantity and diversity of pose, occlusion and illumination, as well as the inherent ambiguity of facial expressions. In contrast, static facial expression recognition (SFER) currently achieves much higher performance and can benefit from more abundant high-quality training data. Moreover, the appearance features and dynamic dependencies of DFER remain largely unexplored. Recognizing the potential of leveraging SFER knowledge for DFER, we introduce a novel Static-to-Dynamic model (S2D) that exploits existing SFER knowledge and dynamic information implicitly encoded in extracted facial landmark-aware features, thereby significantly improving DFER performance. First, we build and train an image model for SFER, which incorporates only a standard Vision Transformer (ViT) and Multi-View Complementary Prompters (MCPs). Then, we obtain our video model (i.e., S2D) for DFER by inserting Temporal-Modeling Adapters (TMAs) into the image model. MCPs enhance facial expression features with landmark-aware features inferred by an off-the-shelf facial landmark detector, and TMAs capture and model the relationships among dynamic changes in facial expressions, effectively extending the pre-trained image model to videos. Notably, MCPs and TMAs add only a small fraction of trainable parameters (less than 10%) to the original image model. Moreover, we present a novel Emotion-Anchors (i.e., reference samples for each emotion category) based Self-Distillation Loss to reduce the detrimental influence of ambiguous emotion labels, further enhancing our S2D. Experiments conducted on popular SFER and DFER datasets show that we have achieved a new state of the art. |
| Keywords | Adaptation models; Videos; Computational modeling; Feature extraction; Transformers; Task analysis; Face recognition; Dynamic facial expression recognition; emotion ambiguity; model adaptation; transfer learning |
| DOI | 10.1109/TAFFC.2024.3453443 |
| Indexed By | SCI |
| Language | English |
| Funding Project | National Key Research and Development Program of China [2019YFA0706203]; National Natural Science Foundation of China [62202139]; University Synergy Innovation Program of Anhui Province [GXXT-2022-038] |
| WOS Research Area | Computer Science |
| WOS Subject | Computer Science, Artificial Intelligence; Computer Science, Cybernetics |
| WOS ID | WOS:001499580000033 |
| Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
| Document Type | Journal article |
| Identifier | http://119.78.100.204/handle/2XEOYT63/42358 |
| Collection | Institute of Computing Technology, CAS: Journal Papers (English) |
| Corresponding Author | Li, Jia |
| Affiliations | 1. Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230601, Peoples R China; 2. Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing 100190, Peoples R China; 3. Univ Chinese Acad Sci, Beijing 100049, Peoples R China |
| Recommended Citation (GB/T 7714) | Chen, Yin, Li, Jia, Shan, Shiguang, et al. From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos[J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2025, 16(2): 624-638. |
| APA | Chen, Yin, Li, Jia, Shan, Shiguang, Wang, Meng, & Hong, Richang. (2025). From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 16(2), 624-638. |
| MLA | Chen, Yin, et al. "From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos." IEEE TRANSACTIONS ON AFFECTIVE COMPUTING 16.2 (2025): 624-638. |
| Files in This Item | There are no files associated with this item. |