Utilizing GCN-Based Deep Learning for Road Extraction from Remote Sensing Images
Jiang, Yu1,2; Zhao, Jiasen3; Luo, Wei3; Guo, Bincheng1,2; An, Zhulin1,2; Xu, Yongjun1,2
2025-06-23
Journal: SENSORS
Volume: 25  Issue: 13  Pages: 27
Abstract: Road extraction technology serves as a crucial foundation for urban intelligent renewal and green, sustainable development. Its outcomes can optimize transportation network planning, reduce resource waste, and enhance urban resilience. Deep learning-based approaches have demonstrated outstanding performance in road extraction, particularly in complex scenarios. However, extracting roads from remote sensing data remains challenging, and accuracy is limited by several factors: (1) roads often share visual features with background objects such as rooftops and parking lots, blurring inter-class distinctions; and (2) roads in complex environments, such as those occluded by shadows or trees, are difficult to detect. To address these issues, this paper proposes an improved model based on Graph Convolutional Networks (GCNs), named FR-SGCN (Hierarchical Depth-wise Separable Graph Convolutional Network Incorporating Graph Reasoning and Attention Mechanisms). The model is designed to enhance the precision and robustness of road extraction, thereby supporting precise planning of green infrastructure. First, high-dimensional features are extracted using ResNeXt, whose grouped convolution structure balances parameter efficiency and feature representation capability, significantly enhancing feature expressiveness. These high-dimensional features are then partitioned, and enhanced channel and spatial features are obtained via attention mechanisms, effectively mitigating background interference and intra-class ambiguity. Subsequently, a hybrid adjacency matrix construction method based on gradient operators and graph reasoning is proposed; it integrates similarity and gradient information and employs graph convolution to capture global contextual relationships among features. To validate the effectiveness of FR-SGCN, we conducted comparative experiments against 12 methods on both a self-built dataset and a public dataset. The proposed model achieved the highest F1 score on both. Visualization results show that the model effectively extracts occluded roads and reduces the risk of redundant construction caused by data errors during urban renewal, providing reliable technical support for smart cities and sustainable development.
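To make the pipeline described in the abstract concrete, the sketch below illustrates two of its named ingredients: an adjacency matrix that mixes feature similarity with gradient information from a Sobel-style operator, and a single graph-convolution step over patch nodes. This is a minimal illustration, not the authors' implementation: the function names (sobel_gradient_magnitude, build_hybrid_adjacency), the mixing weight alpha, and the 4x4 patch grid are all assumptions introduced here; in FR-SGCN the node features would come from the ResNeXt backbone and attention modules, and the exact fusion rule may differ.

```python
# Minimal sketch (NOT the paper's code) of a hybrid similarity+gradient
# adjacency matrix and one graph-convolution step, per the abstract.
import torch
import torch.nn.functional as F

def sobel_gradient_magnitude(gray: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude of a (B, 1, H, W) grayscale map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    gx = F.conv2d(gray, kx.view(1, 1, 3, 3), padding=1)
    gy = F.conv2d(gray, ky.view(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def build_hybrid_adjacency(node_feats, node_grads, alpha=0.5):
    """Mix cosine feature similarity with gradient affinity (assumed rule).

    node_feats: (N, C) pooled features per patch node.
    node_grads: (N,)  mean gradient magnitude per patch node.
    Returns a row-normalized (N, N) adjacency.
    """
    sim = F.cosine_similarity(node_feats.unsqueeze(1),
                              node_feats.unsqueeze(0), dim=-1)
    # Gradient affinity: patches with similar edge density link more strongly.
    gdiff = (node_grads.unsqueeze(1) - node_grads.unsqueeze(0)).abs()
    adj = alpha * sim + (1 - alpha) * torch.exp(-gdiff)
    adj = adj.clamp(min=0)  # keep edge weights non-negative
    return adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-8)

class GraphConvLayer(torch.nn.Module):
    """One propagation step: X' = ReLU(A X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = torch.nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        return F.relu(adj @ self.weight(x))

if __name__ == "__main__":
    # Toy run: a 4x4 grid of patch nodes from a 128x128 image.
    img = torch.rand(1, 1, 128, 128)
    grad = sobel_gradient_magnitude(img)
    node_grads = F.adaptive_avg_pool2d(grad, (4, 4)).flatten()  # 16 nodes
    node_feats = torch.randn(16, 64)  # stand-in for backbone features
    adj = build_hybrid_adjacency(node_feats, node_grads)
    out = GraphConvLayer(64, 64)(node_feats, adj)
    print(out.shape)  # torch.Size([16, 64])
```

Here row normalization of the adjacency stands in for the degree normalization used in standard GCN formulations; it keeps each node's aggregated neighborhood message on a comparable scale before the learned projection.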
Keywords: depthwise separable convolution; graph convolution; road extraction; smart cities; gradient operator; graph reasoning
DOI: 10.3390/s25133915
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [62476264]; [62406312]
WOS Research Areas: Chemistry; Engineering; Instruments & Instrumentation
WOS Categories: Chemistry, Analytical; Engineering, Electrical & Electronic; Instruments & Instrumentation
WOS Accession Number: WOS:001527601600001
Publisher: MDPI
Document Type: Journal Article
Identifier: http://119.78.100.204/handle/2XEOYT63/42059
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Articles (English)
Corresponding Author: Zhao, Jiasen
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
3. North China Inst Aerosp Engn, Langfang 065000, Peoples R China
Recommended Citation:
GB/T 7714: Jiang, Yu, Zhao, Jiasen, Luo, Wei, et al. Utilizing GCN-Based Deep Learning for Road Extraction from Remote Sensing Images[J]. SENSORS, 2025, 25(13): 27.
APA: Jiang, Yu, Zhao, Jiasen, Luo, Wei, Guo, Bincheng, An, Zhulin, & Xu, Yongjun. (2025). Utilizing GCN-Based Deep Learning for Road Extraction from Remote Sensing Images. SENSORS, 25(13), 27.
MLA: Jiang, Yu, et al. "Utilizing GCN-Based Deep Learning for Road Extraction from Remote Sensing Images". SENSORS 25.13 (2025): 27.