STAM: A SpatioTemporal Attention Based Memory for Video Prediction
Chang, Zheng1,2,4; Zhang, Xinfeng3; Wang, Shanshe4; Ma, Siwei4; Gao, Wen4
2023
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
ISSN: 1520-9210
Volume: 25, Pages: 2354-2367
Abstract: Video prediction has always been a very challenging problem in video representation learning due to the complexity in spatial structure and temporal variation. However, existing methods mainly predict videos by employing language-based memory structures from the traditional Long Short-Term Memories (LSTMs) or Gated Recurrent Units (GRUs), which may not be powerful enough to model the long-term dependencies in videos, consisting of much more complex spatiotemporal dynamics than sentences. In this paper, we propose a SpatioTemporal Attention based Memory (STAM), which can efficiently improve the long-term spatiotemporal memorizing capacity by incorporating the global spatiotemporal information in videos. In the temporal domain, the proposed STAM aims to observe temporal states from a wider temporal receptive field to capture accurate global motion information. In the spatial domain, the proposed STAM aims to jointly utilize both the high-level semantic spatial state and the low-level texture spatial states to model a more reliable global spatial representation for videos. In particular, the global spatiotemporal information is extracted with the help of an Efficient SpatioTemporal Attention Gate (ESTAG), which can adaptively apply different levels of attention scores to different spatiotemporal states according to their importance. Moreover, the proposed STAM is built with 3D convolutional layers due to their advantages in modeling spatiotemporal dynamics for videos. Experimental results show that the proposed STAM can achieve state-of-the-art performance on widely used datasets by leveraging the proposed spatiotemporal representations for videos.
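The abstract's core mechanism, scoring a window of past temporal states by importance and aggregating them into a global state, can be illustrated with a generic scaled dot-product attention sketch. This is only an illustration of the general idea, not the paper's actual ESTAG; the function names and shapes here are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_gate(query, states):
    """Aggregate a window of past temporal states by attention.

    query:  (d,)   current hidden state, used as the attention query
    states: (T, d) T past temporal states (a wider temporal receptive field)
    Returns the attention-weighted global state, shape (d,).
    """
    # One score per past state; more relevant states get larger weights.
    scores = softmax(states @ query / np.sqrt(query.shape[0]))  # (T,)
    return scores @ states  # weighted sum over the T states

rng = np.random.default_rng(0)
d, T = 8, 5
query = rng.standard_normal(d)
states = rng.standard_normal((T, d))
global_state = attention_gate(query, states)
print(global_state.shape)  # (8,)
```

In the paper's setting the states would be 3D-convolutional feature maps rather than flat vectors, and spatial states from different layers would be attended over in the same adaptive fashion.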
Keywords: Global spatiotemporal information; spatiotemporal receptive field; 3D convolutional neural network; spatiotemporal attention; sequence learning; video prediction
DOI: 10.1109/TMM.2022.3146721
Indexed In: SCI
Language: English
Funding: National Natural Science Foundation of China [62025101]; National Natural Science Foundation of China [62072008]; National Natural Science Foundation of China [62071449]; National Natural Science Foundation of China [U20A20184]; Fundamental Research Funds for the Central Universities; High-performance Computing Platform of Peking University
WOS Research Areas: Computer Science; Telecommunications
WOS Categories: Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS Accession Number: WOS:001007432100058
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation Statistics
Times Cited (WOS): 7
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/21266
Collection: Institute of Computing Technology, CAS - Journal Papers (English)
Corresponding Author: Ma, Siwei
Affiliations:
1.Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
2.Univ Chinese Acad Sci, Beijing 100190, Peoples R China
3.Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100871, Peoples R China
4.Peking Univ, Natl Engn Lab Video Technol, Beijing 100871, Peoples R China
Recommended Citation:
GB/T 7714
GB/T 7714: Chang, Zheng, Zhang, Xinfeng, Wang, Shanshe, et al. STAM: A SpatioTemporal Attention Based Memory for Video Prediction[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 2354-2367.
APA: Chang, Zheng, Zhang, Xinfeng, Wang, Shanshe, Ma, Siwei, & Gao, Wen. (2023). STAM: A SpatioTemporal Attention Based Memory for Video Prediction. IEEE TRANSACTIONS ON MULTIMEDIA, 25, 2354-2367.
MLA: Chang, Zheng, et al. "STAM: A SpatioTemporal Attention Based Memory for Video Prediction". IEEE TRANSACTIONS ON MULTIMEDIA 25 (2023): 2354-2367.
Files in This Item:
No files are associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.