Camera Ego-Positioning Using Sensor Fusion and Complementary Method
Journal
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Journal Volume
12663 LNCS
Pages
291-304
Date Issued
2021
Author(s)
Abstract
Visual simultaneous localization and mapping (SLAM) is a common solution for camera ego-positioning. However, SLAM sometimes loses tracking, for instance due to fast camera motion or featureless or repetitive environments. To account for the limitations of visual SLAM, we fuse the visual positioning results with inertial measurement unit (IMU) data using a filter-based, loosely-coupled sensor fusion method, and further combine feature-based SLAM with direct SLAM via the proposed complementary fusion to retain the advantages of both approaches; i.e., we keep the accurate positioning of feature-based SLAM while compensating for its difficulty with featureless scenes using direct SLAM. Experimental results show that the proposed complementary method improves the positioning accuracy of conventional vision-only SLAM and leads to more robust positioning results. © 2021, Springer Nature Switzerland AG.
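Illustrative sketch (not from the paper): the loosely-coupled, filter-based fusion described in the abstract can be pictured as a Kalman filter whose prediction step is driven by IMU accelerations and whose update step consumes visual-SLAM position fixes. The class name, state layout, noise values, and sensor rates below are assumptions for illustration only, not the authors' implementation.

import numpy as np

# State: [position, velocity]. Loosely-coupled fusion: IMU acceleration
# drives the prediction step; visual-SLAM position estimates arrive as
# measurements in the update step. All noise parameters are assumed.
class LooselyCoupledKF:
    def __init__(self, dt, accel_noise=0.5, slam_noise=0.05):
        self.x = np.zeros(2)                  # [position, velocity]
        self.P = np.eye(2)                    # state covariance
        self.F = np.array([[1.0, dt],         # constant-velocity transition
                           [0.0, 1.0]])
        self.B = np.array([0.5 * dt**2, dt])  # acceleration input model
        self.Q = accel_noise**2 * np.outer(self.B, self.B)  # process noise
        self.H = np.array([[1.0, 0.0]])       # SLAM measures position only
        self.R = np.array([[slam_noise**2]])  # measurement noise

    def predict(self, accel):
        """Propagate the state with one IMU acceleration sample."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, slam_pos):
        """Correct the state with a visual-SLAM position fix."""
        y = slam_pos - self.H @ self.x             # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Usage: IMU at 100 Hz, SLAM fixes at 10 Hz. A SLAM tracking loss
# (here, after t = 5 s) is handled by simply skipping the update step,
# so the filter coasts on IMU prediction alone.
kf = LooselyCoupledKF(dt=0.01)
rng = np.random.default_rng(0)
for k in range(1000):
    true_accel = 0.2
    kf.predict(true_accel + rng.normal(0, 0.5))
    if k % 10 == 0 and k < 500:  # SLAM tracking lost after t = 5 s
        true_pos = 0.5 * true_accel * (k * 0.01)**2
        kf.update(np.array([true_pos + rng.normal(0, 0.05)]))
print("estimated position:", kf.x[0])

The skipped-update behavior mirrors the motivation in the abstract: when visual SLAM loses tracking, the inertial prediction keeps the pose estimate available until visual fixes resume.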
Subjects
Camera ego-positioning
Sensor fusion
Cameras
Motion tracking
Pattern recognition
Complementary methods
Inertial measurement unit
Loosely coupled
Positioning accuracy
Robust positioning
Visual positioning
Visual simultaneous localization and mapping
Sensor data fusion
Type
conference paper