Divide-and-Attention Network for HE-Stained Pathological Image Classification
Authors: Yan, Rui (1,2); Yang, Zhidong (1); Li, Jintao (1); Zheng, Chunhou (3); Zhang, Fa (1)
Publication Date: 2022-07-01
Journal: BIOLOGY-BASEL
Volume: 11; Issue: 7; Pages: 17
Abstract:
Simple Summary: We propose a Divide-and-Attention Network that learns representative pathological image features with respect to different tissue structures and adaptively focuses on the most important ones. In addition, we introduce deep canonical correlation analysis constraints into the feature-fusion process of the different branches, so as to maximize their correlation and ensure that the fused branches emphasize specific tissue structures. Extensive experiments on three different pathological image datasets show that the proposed method achieves competitive results.

Because pathological images have distinct characteristics that differ from natural images, directly applying a general convolutional neural network cannot achieve good classification performance, especially for fine-grained problems such as pathological image grading. Inspired by the clinical experience that decomposing a pathological image into different components is beneficial for diagnosis, in this paper we propose a Divide-and-Attention Network (DANet) for Hematoxylin-and-Eosin (HE)-stained pathological image classification. DANet uses a deep-learning method to decompose a pathological image into nuclei and non-nuclei parts. With these decomposed images, DANet first performs feature learning independently in each branch, and then focuses on the most important feature representation through a branch-selection attention module. In this way, DANet can learn representative features with respect to different tissue structures and adaptively focus on the most important ones, thereby improving classification performance. In addition, we introduce deep canonical correlation analysis (DCCA) constraints into the feature-fusion process of the different branches. The DCCA constraints act as branch-fusion attention, maximizing the correlation between branches and ensuring that the fused branches emphasize specific tissue structures. Experimental results on three datasets demonstrate the superiority of DANet, with an average classification accuracy of 92.5% on breast cancer classification, 95.33% on colorectal cancer grading, and 91.6% on breast cancer grading.
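As a rough illustration of the two mechanisms the abstract describes, the sketch below shows (a) a softmax-weighted branch-selection attention over per-branch feature vectors and (b) a standard DCCA correlation objective (Andrew et al., 2013). This is a minimal PyTorch sketch under assumed shapes; the names BranchSelectionAttention and dcca_correlation_loss are illustrative, not the authors' released implementation.

```python
# Minimal PyTorch sketch (illustrative names and shapes; not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchSelectionAttention(nn.Module):
    """Softmax-weighted fusion over per-branch feature vectors."""
    def __init__(self, dim: int, num_branches: int = 3, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, num_branches),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_branches, dim), e.g. original / nuclei / non-nuclei branches.
        summary = feats.sum(dim=1)                         # (batch, dim)
        weights = F.softmax(self.fc(summary), dim=-1)      # (batch, num_branches)
        return (weights.unsqueeze(-1) * feats).sum(dim=1)  # (batch, dim)

def dcca_correlation_loss(h1: torch.Tensor, h2: torch.Tensor,
                          eps: float = 1e-4) -> torch.Tensor:
    """Negative total canonical correlation between two (batch, dim) feature batches."""
    n = h1.size(0)
    h1 = h1 - h1.mean(dim=0, keepdim=True)
    h2 = h2 - h2.mean(dim=0, keepdim=True)
    # Regularized covariance estimates.
    s11 = h1.t() @ h1 / (n - 1) + eps * torch.eye(h1.size(1), device=h1.device)
    s22 = h2.t() @ h2 / (n - 1) + eps * torch.eye(h2.size(1), device=h2.device)
    s12 = h1.t() @ h2 / (n - 1)

    def inv_sqrt(s: torch.Tensor) -> torch.Tensor:
        d, v = torch.linalg.eigh(s)
        return v @ torch.diag(d.clamp_min(eps).rsqrt()) @ v.t()

    # Total correlation = sum of singular values of S11^{-1/2} S12 S22^{-1/2}.
    t = inv_sqrt(s11) @ s12 @ inv_sqrt(s22)
    return -torch.linalg.svdvals(t).sum()
```

Minimizing dcca_correlation_loss on the nuclei- and non-nuclei-branch features alongside the classification loss would push the fused branches toward correlated, structure-specific representations, which is the role the abstract assigns to the DCCA constraint.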
Keywords: pathological image classification; attention mechanism; convolutional neural network; knowledge embedding
DOI: 10.3390/biology11070982
Indexed in: SCI
Language: English
Funding: Strategic Priority Research Program of the Chinese Academy of Sciences [XDA16021400]; National Key Research and Development Program of China [2021YFF0704300]; NSFC [61932018]; NSFC [62072441]; NSFC [62072280]
WOS Research Area: Life Sciences & Biomedicine - Other Topics
WOS Category: Biology
WOS Accession Number: WOS:000831586600001
Publisher: MDPI
Citations: 3 [WOS]
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/19485
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Authors: Zheng, Chunhou; Zhang, Fa
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, High Performance Comp Res Ctr, Beijing 100045, Peoples R China
2. Univ Chinese Acad Sci, Beijing 101408, Peoples R China
3. Anhui Univ, Sch Artificial Intelligence, Hefei 230093, Peoples R China
Recommended Citation:
GB/T 7714: Yan, Rui, Yang, Zhidong, Li, Jintao, et al. Divide-and-Attention Network for HE-Stained Pathological Image Classification[J]. BIOLOGY-BASEL, 2022, 11(7): 17.
APA: Yan, Rui, Yang, Zhidong, Li, Jintao, Zheng, Chunhou, & Zhang, Fa. (2022). Divide-and-Attention Network for HE-Stained Pathological Image Classification. BIOLOGY-BASEL, 11(7), 17.
MLA: Yan, Rui, et al. "Divide-and-Attention Network for HE-Stained Pathological Image Classification". BIOLOGY-BASEL 11.7 (2022): 17.
Files in This Item:
No files are associated with this item.