To apply visual-inertial odometry (VIO) effectively in underwater scenes, this paper proposes an underwater VIO system suited to autonomous underwater vehicles (AUVs) performing close-range tasks. To cope with the scarcity of corner points and the abundance of repetitive features in underwater environments, the visual front end adopts a direct-method data association scheme that uses the gradient magnitude of pixels as the criterion for feature extraction; to ensure that enough valid feature points are extracted, the number of extracted points and the extraction threshold are adjusted dynamically. The visual states are then estimated by minimizing a photometric error. Experiments on the AQUALOC dataset show that the proposed direct method achieves higher localization accuracy than ORB-SLAM3 in relatively harsh underwater environments and can build a comparatively dense map.
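For context, the direct method named above associates data through pixel intensities rather than matched descriptors: each selected point contributes a photometric residual between the frame that hosts it and the frames that observe it. A DSO-style formulation (a simplified sketch following [9]; the paper's exact residual, robust weighting, and photometric calibration terms are not stated in this abstract) minimizes roughly

```latex
% Simplified DSO-style photometric error (sketch; see [9] for the full model).
% T_{ji}: relative pose from host frame i to frame j; \pi: camera projection;
% d_p: inverse depth of point p; \|\cdot\|_{\gamma}: Huber norm.
E_{\mathrm{photo}} =
  \sum_{i} \sum_{\mathbf{p} \in \mathcal{P}_i} \sum_{j \in \mathrm{obs}(\mathbf{p})}
  \bigl\| I_j\bigl( \pi( \mathbf{T}_{ji}\, \pi^{-1}(\mathbf{p}, d_{\mathbf{p}}) ) \bigr)
        - I_i(\mathbf{p}) \bigr\|_{\gamma}
```

The abstract also describes selecting points by gradient magnitude while dynamically adjusting the extraction threshold and point budget. The sketch below illustrates that idea in Python; every name in it (select_gradient_features, target_count, init_thresh, adjust) is hypothetical and not taken from the paper.

```python
import numpy as np

def select_gradient_features(img, target_count=800, init_thresh=20.0,
                             max_iters=5, adjust=0.7):
    """Select pixels whose gradient magnitude (modulus) exceeds a threshold,
    relaxing the threshold until roughly target_count candidates survive.
    Hypothetical sketch; not the paper's implementation."""
    img = img.astype(np.float32)
    # Central-difference image gradients.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = 0.5 * (img[:, 2:] - img[:, :-2])
    gy[1:-1, :] = 0.5 * (img[2:, :] - img[:-2, :])
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)  # the selection criterion

    thresh = init_thresh
    for _ in range(max_iters):
        ys, xs = np.nonzero(grad_mag > thresh)
        if len(xs) >= target_count:
            break
        thresh *= adjust  # too few candidates: relax the threshold

    # Keep the target_count strongest responses.
    order = np.argsort(grad_mag[ys, xs])[::-1][:target_count]
    return np.stack([xs[order], ys[order]], axis=1), thresh

# Example: extract points from a synthetic 480x640 grayscale frame.
points, final_thresh = select_gradient_features(
    np.random.rand(480, 640).astype(np.float32) * 255.0)
```

Relaxing the threshold when too few pixels pass keeps the point budget filled in low-texture underwater frames, which is the stated motivation for the dynamic adjustment.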
2022, 44(5): 65-69. Received: 2021-04-28
DOI: 10.3404/j.issn.1672-7649.2022.05.013
CLC number: TP242
Funding: Strategic Priority Research Program of the Chinese Academy of Sciences, Class A (XDA22040103)
Author biography: ZHAO Hongquan (1991-), male, master's degree candidate; research interest: underwater localization
References:
[1] HUANG Y, LI Y, YU J C, et al. Current status and development trend of AUV intelligence[J]. Robot, 2020, 42(2): 215-231.
[2] HUANG G Q. Visual-inertial navigation: a concise review[C]// IEEE International Conference on Robotics and Automation (ICRA). Montreal, QC, Canada: IEEE, 2019: 9572-9582.
[3] DURRANT-WHYTE H, BAILEY T. Simultaneous localization and mapping: part I[J]. IEEE Robotics & Automation Magazine, 2006, 13(2): 99-108.
[4] LIU H M, ZHANG G F, BAO H J. A survey of monocular simultaneous localization and mapping[J]. Journal of Computer-Aided Design & Computer Graphics, 2016, 28(6): 855-868.
[5] QIN T, LI P, SHEN S. VINS-Mono: a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020.
[6] CAMPOS C, ELVIRA R, RODRÍGUEZ J, et al. ORB-SLAM3: an accurate open-source library for visual, visual-inertial and multi-map SLAM[EB/OL]. [2020-07-23]. http://arxiv.org/abs/2007.11898.
[7] CADENA C, CARLONE L, CARRILLO H, et al. Past, present, and future of simultaneous localization and mapping: toward the robust-perception age[J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
[8] VON STUMBERG L, USENKO V, CREMERS D. Direct sparse visual-inertial odometry using dynamic marginalization[C]// IEEE International Conference on Robotics and Automation. Piscataway, USA: IEEE, 2018: 2510-2517.
[9] ENGEL J, KOLTUN V, CREMERS D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(3): 611-625.
[10] FERRERA M, CREUZE V, MORAS J, et al. AQUALOC: an underwater dataset for visual-inertial-pressure localization[J]. The International Journal of Robotics Research, 2019, 38(14): 1549-1559.
[11] STURM J, ENGELHARD N, ENDRES F, et al. A benchmark for the evaluation of RGB-D SLAM systems[C]// IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway, USA: IEEE, 2012: 573-580.