Environmental-structure-perception-based adaptive pose fusion method for LiDAR-visual-inertial odometry
Zhao, Zixu1,2,3,4; Liu, Chang1,3,4; Yu, Wenyao1,2,3,4; Shi, Jinglin1; Zhang, Dalin1
2024-05-01
Journal: INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
ISSN: 1729-8814
Volume: 21, Issue: 3, Pages: 20
Abstract: Light Detection and Ranging (LiDAR)-visual-inertial odometry can provide accurate poses for localizing unmanned vehicles operating in unknown environments without Global Positioning System (GPS) coverage. Because the quality of the poses estimated by different sensors fluctuates greatly across environments with different structures, existing pose fusion models cannot guarantee stable pose estimation in such environments, which poses a great challenge for the pose fusion of LiDAR-visual-inertial odometry. This article proposes a novel environmental-structure-perception-based adaptive pose fusion method, which optimizes the parameters of the pose fusion model of LiDAR-visual-inertial odometry online by analyzing the complexity of the environmental structure. First, a quantitative perception method for environmental structure is proposed: a visual bag-of-words vector and a point cloud feature histogram are constructed to compute quantitative indicators of the structural complexity of the visual image and the LiDAR point cloud of the surroundings, which are used to predict and evaluate the quality of the poses produced by the LiDAR/visual measurement models. Then, based on the complexity of the environmental structure, two pose fusion strategies are proposed for the two mainstream pose fusion models (the Kalman filter and factor graph optimization), which adaptively fuse the poses estimated by LiDAR and vision online. Two state-of-the-art LiDAR-visual-inertial odometry systems are selected to deploy the proposed method, and extensive experiments are carried out on both open-source and self-gathered data sets.
The experimental results show that the proposed method can effectively perceive changes in environmental structure and execute adaptive pose fusion, improving the pose estimation accuracy of LiDAR-visual-inertial odometry in environments with changing structures.
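The abstract does not give the paper's exact formulas, but the core idea — score scene structure from feature histograms and weight each sensor's contribution to a Kalman-filter pose fusion accordingly — can be sketched as follows. The entropy-based complexity score and the complexity-to-noise mapping here are illustrative assumptions, not the authors' published model:

```python
import numpy as np

def structure_complexity(histogram):
    """Illustrative complexity score: normalized entropy of a feature
    histogram (visual bag-of-words bins or point cloud feature bins).
    A flat histogram (rich structure) scores near 1; a peaked one
    (degenerate scene) scores lower. Assumed metric, not the paper's."""
    p = np.asarray(histogram, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return entropy / np.log(len(histogram))  # in [0, 1]

def fuse_poses_kalman(x_pred, P_pred, z_lidar, z_visual,
                      c_lidar, c_visual, base_noise=1.0):
    """Two sequential Kalman updates on a simplified vector pose state
    (H = I). Each sensor's measurement covariance is inflated as its
    scene-complexity score drops, so a sensor facing a structure-poor
    scene is trusted less. The inverse scaling is an assumption."""
    dim = len(x_pred)
    R_lidar = base_noise / max(c_lidar, 1e-3) * np.eye(dim)
    R_visual = base_noise / max(c_visual, 1e-3) * np.eye(dim)
    x, P = x_pred, P_pred
    for z, R in ((z_lidar, R_lidar), (z_visual, R_visual)):
        K = P @ np.linalg.inv(P + R)          # Kalman gain
        x = x + K @ (z - x)                    # state update
        P = (np.eye(dim) - K) @ P              # covariance update
    return x, P
```

With a high LiDAR complexity score and a low visual one, the fused pose lands closer to the LiDAR measurement, which is the qualitative behavior the method aims for; the factor-graph variant would instead scale the information matrices of the corresponding factors.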
Keywords: Adaptive pose fusion; LiDAR-visual-inertial odometry; environmental structure perception; pose estimation of unmanned vehicle; sensor fusion
DOI: 10.1177/17298806241248955
Indexed by: SCI
Language: English
Funding: National Key R&D Program of China [2022YFC3320800]; Zhejiang Provincial Key R&D Plan of China [2021C01040]
WOS Research Area: Robotics
WOS Category: Robotics
WOS Record: WOS:001216075600001
Publisher: SAGE PUBLICATIONS INC
Document Type: Journal article
Identifier: http://119.78.100.204/handle/2XEOYT63/38985
Collection: Journal Papers of the Institute of Computing Technology, Chinese Academy of Sciences (English)
Corresponding Author: Zhao, Zixu
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Wireless Commun Technol Res Ctr, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing, Peoples R China
4. Beijing Key Lab Mobile Comp & Pervas Device, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Zhao, Zixu, Liu, Chang, Yu, Wenyao, et al. Environmental-structure-perception-based adaptive pose fusion method for LiDAR-visual-inertial odometry[J]. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2024, 21(3): 20.
APA: Zhao, Zixu, Liu, Chang, Yu, Wenyao, Shi, Jinglin, & Zhang, Dalin. (2024). Environmental-structure-perception-based adaptive pose fusion method for LiDAR-visual-inertial odometry. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 21(3), 20.
MLA: Zhao, Zixu, et al. "Environmental-structure-perception-based adaptive pose fusion method for LiDAR-visual-inertial odometry". INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS 21.3 (2024): 20.
Files in This Item: no files are associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.