Institute of Computing Technology, Chinese Academy of Sciences — Institutional Repository
STAM: A SpatioTemporal Attention Based Memory for Video Prediction
Chang, Zheng1,2,4; Zhang, Xinfeng3; Wang, Shanshe4; Ma, Siwei4; Gao, Wen4
2023
Journal | IEEE TRANSACTIONS ON MULTIMEDIA
ISSN | 1520-9210
Volume | 25
Pages | 2354-2367
Abstract | Video prediction has always been a very challenging problem in video representation learning due to the complexity of spatial structure and temporal variation. Existing methods mainly predict videos by employing language-based memory structures from traditional Long Short-Term Memories (LSTMs) or Gated Recurrent Units (GRUs), which may not be powerful enough to model the long-term dependencies in videos, whose spatiotemporal dynamics are far more complex than those of sentences. In this paper, we propose a SpatioTemporal Attention based Memory (STAM), which can efficiently improve long-term spatiotemporal memorizing capacity by incorporating the global spatiotemporal information in videos. In the temporal domain, the proposed STAM observes temporal states from a wider temporal receptive field to capture accurate global motion information. In the spatial domain, it jointly utilizes both the high-level semantic spatial state and the low-level texture spatial states to model a more reliable global spatial representation for videos. In particular, the global spatiotemporal information is extracted with the help of an Efficient SpatioTemporal Attention Gate (ESTAG), which adaptively applies different levels of attention scores to different spatiotemporal states according to their importance. Moreover, the proposed STAM is built with 3D convolutional layers due to their advantages in modeling spatiotemporal dynamics for videos. Experimental results show that the proposed STAM achieves state-of-the-art performance on widely used datasets by leveraging the proposed spatiotemporal representations for videos.
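The core idea described above, weighting a set of past spatiotemporal states by learned attention scores and aggregating them into a global representation, can be illustrated with a minimal sketch. This is not the paper's ESTAG (which operates on 3D convolutional feature maps); it is a generic dot-product attention over flattened temporal states, with the function name `attention_gate` and all shapes chosen here for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_gate(query, states):
    """Aggregate past states, weighted by similarity to the current query.

    query:  (d,)   current hidden state
    states: (t, d) stack of past temporal states (wider receptive field)
    Returns a (d,) globally aggregated state.
    """
    # Scaled dot-product scores: more relevant states get higher weight.
    scores = states @ query / np.sqrt(query.shape[0])   # (t,)
    weights = softmax(scores)                           # attention over time
    return weights @ states                             # weighted sum, (d,)

rng = np.random.default_rng(0)
states = rng.standard_normal((5, 8))   # five past states, dimension 8
query = states[-1]                     # attend from the most recent state
fused = attention_gate(query, states)
print(fused.shape)                     # (8,)
```

The same pattern extends to the spatial domain by treating high-level semantic and low-level texture feature maps as additional states in the attended set.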
Keywords | Global spatiotemporal information; spatiotemporal receptive field; 3D convolutional neural network; spatiotemporal attention; sequence learning; video prediction
DOI | 10.1109/TMM.2022.3146721
Indexed by | SCI
Language | English
Funding | National Natural Science Foundation of China [62025101]; National Natural Science Foundation of China [62072008]; National Natural Science Foundation of China [62071449]; National Natural Science Foundation of China [U20A20184]; Fundamental Research Funds for the Central Universities; High-performance Computing Platform of Peking University
WOS Research Areas | Computer Science; Telecommunications
WOS Categories | Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS Accession Number | WOS:001007432100058
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type | Journal article
Identifier | http://119.78.100.204/handle/2XEOYT63/21266
Collection | Institute of Computing Technology, CAS — Journal Papers (English)
Corresponding Author | Ma, Siwei
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China; 3. Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100871, Peoples R China; 4. Peking Univ, Natl Engn Lab Video Technol, Beijing 100871, Peoples R China
Recommended Citation (GB/T 7714) | Chang, Zheng, Zhang, Xinfeng, Wang, Shanshe, et al. STAM: A SpatioTemporal Attention Based Memory for Video Prediction[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 2354-2367.
APA | Chang, Zheng, Zhang, Xinfeng, Wang, Shanshe, Ma, Siwei, & Gao, Wen. (2023). STAM: A SpatioTemporal Attention Based Memory for Video Prediction. IEEE TRANSACTIONS ON MULTIMEDIA, 25, 2354-2367.
MLA | Chang, Zheng, et al. "STAM: A SpatioTemporal Attention Based Memory for Video Prediction". IEEE TRANSACTIONS ON MULTIMEDIA 25 (2023): 2354-2367.
Files in This Item | No files are associated with this item.