SPMGAE: Self-purified masked graph autoencoders release robust expression power
Song, Shuhan1,2; Li, Ping1,2; Dun, Ming1; Zhang, Yuan1; Cao, Huawei1,3; Ye, Xiaochun1
Year: 2025
Journal: NEUROCOMPUTING
ISSN: 0925-2312
Volume: 611, Pages: 14
Abstract: To tackle the scarcity of labeled graph data, graph self-supervised learning (SSL) has branched into two paradigms: generative methods and contrastive methods. Inspired by MAE in computer vision (CV) and BERT in natural language processing (NLP), masked graph autoencoders (MGAEs) are gaining popularity within the generative family. However, prevailing MGAEs are mostly designed under the assumption that the data have high homophily scores and are free of adversarial distortion. Efforts to improve performance on homophilic graph datasets overlook a critical issue: both internal heterophily and artificial attack noise are quite common in the real world. Consequently, when the data are highly heterophilic or subjected to attacks, these models have almost no defensive capability, and under self-supervised conditions it is much harder to detect internal heterophily and resist artificial attacks. In this paper, we propose a Self-Purified Masked Graph Autoencoder (SPMGAE) to make up for the robustness shortcomings of prevailing MGAEs. SPMGAE first uses a self-purified module to prune the raw graph data and separate out perturbation information; the purified graph provides a robust structure for the entire pre-training process. The encoding module then reuses the perturbation information for auxiliary training to enhance robustness, while the decoding module reconstructs the effective graph data at a finer granularity. Extensive experiments on homophilic and heterophilic datasets attacked by various attack methods demonstrate that SPMGAE has considerable robust expressive power; on small datasets with large perturbations in particular, the improvement in defensive performance reaches 10%-25%.
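The abstract describes a three-stage pipeline: purify the raw graph to split off suspected perturbation edges, pre-train a masked autoencoder on the purified structure, and reuse the separated perturbation information as an auxiliary training signal. The sketch below illustrates that flow in plain PyTorch under stated assumptions; the cosine-similarity purification rule, the mask ratio, the dense GCN layers, and the auxiliary loss weight are illustrative stand-ins, not the authors' actual SPMGAE design.

```python
# Hypothetical sketch of a purify -> mask -> reconstruct pipeline, NOT the SPMGAE implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def purify(adj: torch.Tensor, x: torch.Tensor, threshold: float = 0.1):
    """Split a dense adjacency into a 'purified' graph and a 'perturbation' graph.

    Assumption: edges whose endpoint features have low cosine similarity are
    treated as suspected noise (heterophilic or adversarial) and separated out.
    """
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # N x N similarity
    keep = (sim >= threshold).float()
    purified = adj * keep              # edges judged reliable
    perturbation = adj * (1.0 - keep)  # edges set aside as perturbation information
    return purified, perturbation


class DenseGCNLayer(nn.Module):
    """Single graph convolution on a dense adjacency with symmetric normalization."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
        return a_norm @ self.lin(x)


class MaskedGraphAutoencoder(nn.Module):
    """Minimal MGAE: mask node features, encode, reconstruct the masked ones."""
    def __init__(self, in_dim, hid_dim, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, in_dim))
        self.encoder = DenseGCNLayer(in_dim, hid_dim)
        self.decoder = DenseGCNLayer(hid_dim, in_dim)

    def forward(self, x, adj):
        n = x.size(0)
        masked_idx = torch.randperm(n, device=x.device)[: int(self.mask_ratio * n)]
        x_in = x.clone()
        x_in[masked_idx] = self.mask_token            # replace masked nodes with a learnable token
        z = F.relu(self.encoder(x_in, adj))
        x_rec = self.decoder(z, adj)
        loss = F.mse_loss(x_rec[masked_idx], x[masked_idx])  # loss only on masked nodes
        return loss, z


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 50, 16
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()
    adj.fill_diagonal_(0)

    purified, perturbation = purify(adj, x)
    model = MaskedGraphAutoencoder(d, 32)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    loss_main, _ = model(x, purified)
    # Down-weighted auxiliary pass on the separated edges, standing in for the
    # abstract's "reuse perturbation information for auxiliary training".
    loss_aux, _ = model(x, perturbation)
    loss = loss_main + 0.1 * loss_aux
    loss.backward()
    opt.step()
    print(f"loss: {loss.item():.4f}")
```

Splitting the structural loss into a main term on the purified graph and a down-weighted auxiliary term on the removed edges mirrors, at toy scale, the abstract's idea of reusing perturbation information rather than discarding it.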
Keywords: Graph neural networks; Masked graph autoencoders; Robustness; Graph adversarial attacks
DOI: 10.1016/j.neucom.2024.128631
Indexed By: SCI
Language: English
Funding: National Key Research and Development Program [2023YFB4502305]; Beijing Natural Science Foundation [4232036]
WOS Research Area: Computer Science
WOS Category: Computer Science, Artificial Intelligence
WOS Accession Number: WOS:001327273600001
Publisher: ELSEVIER
Document Type: Journal article
Item Identifier: http://119.78.100.204/handle/2XEOYT63/39572
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Papers (English)
Corresponding Author: Cao, Huawei
Author Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Beijing, Peoples R China
3. Zhongguancun Lab, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Song, Shuhan, Li, Ping, Dun, Ming, et al. SPMGAE: Self-purified masked graph autoencoders release robust expression power[J]. NEUROCOMPUTING, 2025, 611: 14.
APA: Song, Shuhan, Li, Ping, Dun, Ming, Zhang, Yuan, Cao, Huawei, & Ye, Xiaochun. (2025). SPMGAE: Self-purified masked graph autoencoders release robust expression power. NEUROCOMPUTING, 611, 14.
MLA: Song, Shuhan, et al. "SPMGAE: Self-purified masked graph autoencoders release robust expression power". NEUROCOMPUTING 611 (2025): 14.
Files in This Item: No files associated with this item.