Multi-Modal Deep Representation Learning Accurately Identifies and Interprets Drug-Target Interactions
Hu, Jiayue1; Liu, Yuhang2; Zeng, Xiangxiang3; Zou, Quan4; Su, Ran5; Wei, Leyi2
2025-07-01
Journal: IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
ISSN: 2168-2194
Volume: 29, Issue: 7, Pages: 5350-5360
Abstract: Deep learning offers efficient solutions for drug-target interaction prediction, but current methods often fail to capture the full complexity of multi-modal data (i.e., sequences, graphs, and three-dimensional structures), limiting both performance and generalization. Here, we present UnitedDTA, a novel explainable deep learning framework capable of integrating multi-modal biomolecular data to improve binding affinity prediction, especially for novel (unseen) drugs and targets. UnitedDTA automatically learns unified, discriminative representations from multi-modal data via contrastive learning and cross-attention mechanisms for cross-modality alignment and integration. Comparative results on multiple benchmark datasets show that UnitedDTA significantly outperforms state-of-the-art drug-target affinity prediction methods and exhibits better generalization in predicting unseen drug-target pairs. More importantly, unlike most "black-box" deep learning methods, our model offers better interpretability, enabling us to directly infer the important substructures of drug-target complexes that influence binding activity and thus providing insights into binding preferences. Moreover, by extending UnitedDTA to other downstream tasks (e.g., molecular property prediction), we show that the proposed multi-modal representation learning captures latent molecular representations closely associated with molecular properties, demonstrating broad application potential for advancing the drug discovery process.
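The abstract highlights two mechanisms: contrastive learning to align embeddings of the same molecule across modalities, and cross-attention to fuse drug and target representations for affinity prediction. The sketch below is a minimal, hedged illustration of these two ideas in PyTorch; it is not the authors' released code, and all module names, dimensions, and the regression head are assumptions made for this example.

```python
# Illustrative sketch (assumed design, not the UnitedDTA implementation):
# 1) a symmetric InfoNCE-style loss aligning two modality embeddings of the same molecule;
# 2) bidirectional cross-attention fusing drug and target token representations.
import torch
import torch.nn as nn
import torch.nn.functional as F


def contrastive_alignment_loss(z_a, z_b, temperature=0.07):
    """Matched pairs (row i of z_a with row i of z_b) are pulled together,
    mismatched pairs in the batch are pushed apart."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)  # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


class CrossModalFusion(nn.Module):
    """Drug tokens attend to target tokens and vice versa; pooled outputs are
    concatenated and mapped to a single binding-affinity score."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.drug_to_target = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.target_to_drug = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.regressor = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, drug_tokens, target_tokens):
        # drug_tokens: (B, Ld, dim), target_tokens: (B, Lt, dim)
        d_ctx, _ = self.drug_to_target(drug_tokens, target_tokens, target_tokens)
        t_ctx, _ = self.target_to_drug(target_tokens, drug_tokens, drug_tokens)
        fused = torch.cat([d_ctx.mean(dim=1), t_ctx.mean(dim=1)], dim=-1)
        return self.regressor(fused).squeeze(-1)             # predicted affinity per pair


if __name__ == "__main__":
    B, Ld, Lt, dim = 8, 32, 64, 256
    drug_seq_emb = torch.randn(B, dim)    # e.g., pooled SMILES-sequence embedding (random stand-in)
    drug_graph_emb = torch.randn(B, dim)  # e.g., pooled molecular-graph embedding (random stand-in)
    align_loss = contrastive_alignment_loss(drug_seq_emb, drug_graph_emb)

    fusion = CrossModalFusion(dim=dim)
    affinity = fusion(torch.randn(B, Ld, dim), torch.randn(B, Lt, dim))
    print(align_loss.item(), affinity.shape)  # scalar loss, torch.Size([8])
```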
Keywords: Proteins; Drugs; Three-dimensional displays; Training; Feature extraction; Data models; Data mining; Bioinformatics; Deep learning; Contrastive learning; Drug-target interaction; Multi-modal learning; Molecular representation
DOI: 10.1109/JBHI.2025.3553217
Indexed by: SCI
Language: English
Funding: National Natural Science Foundation of China [62322112]; Science and Technology Development Fund [0177/2023/RIA3]
WOS Research Areas: Computer Science; Mathematical & Computational Biology; Medical Informatics
WOS Categories: Computer Science, Information Systems; Computer Science, Interdisciplinary Applications; Mathematical & Computational Biology; Medical Informatics
WOS Accession Number: WOS:001523482700013
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/42025
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Papers (English)
Corresponding Authors: Su, Ran; Wei, Leyi
Affiliations:
1. Univ Chinese Acad Sci, Inst Comp Technol, Beijing 101408, Peoples R China
2. Macao Polytech Univ, Fac Appl Sci, Macau 999078, Peoples R China
3. Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
4. Univ Elect Sci & Technol China, Inst Fundamental & Frontier Sci, Chengdu 610054, Peoples R China
5. Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
Recommended Citation:
GB/T 7714: Hu, Jiayue, Liu, Yuhang, Zeng, Xiangxiang, et al. Multi-Modal Deep Representation Learning Accurately Identifies and Interprets Drug-Target Interactions[J]. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29(7): 5350-5360.
APA: Hu, Jiayue, Liu, Yuhang, Zeng, Xiangxiang, Zou, Quan, Su, Ran, & Wei, Leyi. (2025). Multi-Modal Deep Representation Learning Accurately Identifies and Interprets Drug-Target Interactions. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 29(7), 5350-5360.
MLA: Hu, Jiayue, et al. "Multi-Modal Deep Representation Learning Accurately Identifies and Interprets Drug-Target Interactions." IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS 29.7 (2025): 5350-5360.
Files in This Item:
No files are associated with this item.