With the rapid development of ship autonomy in the shipping industry, the "Guidelines for Autonomous Cargo Ships" set out technical requirements, including environment perception, for autonomous berthing scenarios. This paper proposes a deep-learning-based 3D perception algorithm that deeply fuses multi-sensor point cloud and image data to improve the accuracy of target detection and classification, and applies it to locating a specified berthing target. A ship berthing simulation system built on a virtual physics engine is developed to reproduce berthing scenes and to overcome the difficulty of data collection. Finally, combined with mature guidance and control algorithms, a complete autonomous berthing system is constructed and a berthing simulation experiment is carried out, verifying the effectiveness of the proposed algorithm and demonstrating its value for autonomous ship berthing applications.
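The abstract does not spell out how the point cloud and image data are fused. As a rough illustration only, the minimal sketch below shows one common camera-LiDAR fusion step: projecting LiDAR points into the image plane with known calibration and attaching per-pixel image features to each 3D point. The function names (project_lidar_to_image, fuse_point_and_image_features), the pinhole projection, and the nearest-neighbour feature sampling are illustrative assumptions, not the authors' actual network.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, T_cam_from_lidar):
    """Project LiDAR points (N, 3) into pixel coordinates with a pinhole model.

    K                : (3, 3) camera intrinsic matrix (assumed known from calibration)
    T_cam_from_lidar : (4, 4) extrinsic transform from the LiDAR frame to the camera frame
    Returns (N, 2) pixel coordinates and a mask of points lying in front of the camera.
    """
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])       # homogeneous coordinates (N, 4)
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]             # points in the camera frame
    in_front = cam[:, 2] > 0.1                             # keep points ahead of the lens
    uv = (K @ cam.T).T                                     # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]                            # normalize by depth
    return uv, in_front

def fuse_point_and_image_features(points_xyz, image_feat, K, T_cam_from_lidar):
    """Attach the image feature at each point's projected pixel to its 3D coordinates.

    image_feat : (H, W, C) feature map, e.g. the output of a CNN image backbone.
    Returns an (M, 3 + C) array of fused per-point features for points that fall
    inside the image; a 3D detection head would consume these downstream.
    """
    h, w, _ = image_feat.shape
    uv, in_front = project_lidar_to_image(points_xyz, K, T_cam_from_lidar)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sampled = image_feat[v[valid], u[valid]]               # nearest-neighbour sampling
    return np.hstack([points_xyz[valid], sampled])
```

In published fusion detectors such as AVOD [6] or continuous-fusion variants [9], this per-point feature gathering is performed inside the network with learned feature maps rather than with raw pixels; the sketch only conveys the geometric alignment step.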
2021, 43(3): 160–164    Received: 2019-12-23
DOI: 10.3404/j.issn.1672-7649.2021.03.031
CLC number: TP242.2
About the author: XIE Hanqing (谢涵清, b. 1994), male, master's degree candidate; research interests: unmanned surface vehicle environment perception and autonomous berthing
References:
[1] GD20-2018. Guidelines for autonomous cargo ships[S].
[2] ZHANG L, XU J S, QIN C. Target detection and tracking system for unmanned surface vehicles[J]. Ship Engineering, 2018, 40(8): 59–63.
[3] SUI D, CHEN W J. A marine visual image optimization method[J]. Ship Science and Technology, 2015(8): 161–164.
[4] ZHAO G, XIAO X, YUAN J, et al. Fusion of 3D-LIDAR and camera data for scene parsing[J]. Journal of Visual Communication and Image Representation, 2014, 25(1): 165–183.
[5] YUAN M X, ZHANG L M, ZHU Y S, et al. Ship target detection based on deep learning method[J]. Ship Science and Technology, 2019, 41(1): 111–115.
[6] KU J, MOZIFIAN M, LEE J, et al. Joint 3D proposal generation and object detection from view aggregation[C]// 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid: IEEE, 2018: 1–8.
[7] YANG B, LUO W, URTASUN R. PIXOR: real-time 3D object detection from point clouds[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 7652–7660.
[8] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, 2016: 770–778.
[9] WANG S, SUO S, MA W C, et al. Deep parametric continuous convolutional neural networks[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 2589–2597.
[10] FOSSEN T I. Handbook of marine craft hydrodynamics and motion control[M]. New Jersey: John Wiley & Sons, 2011.
[11] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]// 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence: IEEE, 2012: 3354–3361.
[12] NGUYEN V S, DO V C, IM N K. Development of automatic ship berthing system using artificial neural network and distance measurement system[J]. International Journal of Fuzzy Logic and Intelligent Systems, 2018, 18(1): 41–49.
[13] HOU C X, XU J S, YANG R W. Local path planning and path following control for inland water USV[J]. Ship Engineering, 2017, 39(6): 41–45.