Research & Projects

VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator

VINS-Mono is a real-time SLAM framework for monocular visual-inertial systems. It uses an optimization-based sliding-window formulation to provide high-accuracy visual-inertial odometry. It features efficient IMU pre-integration with bias correction, automatic estimator initialization, online extrinsic calibration, failure detection and recovery, loop detection, and global pose graph optimization. VINS-Mono is primarily designed for state estimation and feedback control of autonomous drones, but it can also provide accurate localization for AR applications. The code runs on Linux and is fully integrated with ROS.

open source code: https://github.com/HKUST-Aerial-Robotics/VINS-Mono
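The IMU pre-integration with bias correction mentioned above can be illustrated with a deliberately simplified sketch. This is not the VINS-Mono implementation (which works on full SE(3) states with Eigen/Ceres); it is a 1-D, accelerometer-only toy showing the key idea: integrate raw measurements once between keyframes, keep Jacobians with respect to the bias, and correct the pre-integrated terms to first order when the optimizer updates the bias, instead of re-integrating. All function names here are hypothetical.

```python
import numpy as np

def preintegrate_1d(accel_meas, dt, bias_est):
    """Pre-integrate 1-D accelerometer samples between two keyframes.

    Returns the velocity/position increments (delta_v, delta_p) and their
    Jacobians w.r.t. the accelerometer bias, so the increments can be
    corrected cheaply when the bias estimate changes.
    (Toy sketch, not the VINS-Mono code.)
    """
    delta_v = 0.0   # pre-integrated velocity increment
    delta_p = 0.0   # pre-integrated position increment
    J_v = 0.0       # d(delta_v) / d(bias)
    J_p = 0.0       # d(delta_p) / d(bias)
    for a in accel_meas:
        a_corr = a - bias_est            # bias-corrected measurement
        delta_p += delta_v * dt + 0.5 * a_corr * dt**2
        delta_v += a_corr * dt
        J_p += J_v * dt - 0.5 * dt**2    # chain rule, same recursion shape
        J_v += -dt
    return delta_v, delta_p, J_v, J_p

def correct_for_new_bias(delta_v, delta_p, J_v, J_p, bias_old, bias_new):
    """First-order update of the pre-integrated terms after the optimizer
    changes the bias estimate -- no re-integration of raw IMU data."""
    db = bias_new - bias_old
    return delta_v + J_v * db, delta_p + J_p * db
```

In this linear 1-D case the first-order correction is exact; on the rotation group the real estimator's correction is approximate, which is why VINS-Mono re-integrates only when the bias change grows large.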

VINS-Mobile: Monocular Visual-Inertial State Estimator on Mobile Phones

VINS-Mobile is a real-time monocular visual-inertial state estimator for compatible iOS devices that provides localization services for augmented reality (AR) applications. It has also been tested for state estimation and feedback control of autonomous drones. VINS-Mobile uses a sliding-window optimization-based formulation to provide high-accuracy visual-inertial odometry with automatic initialization and failure recovery. Accumulated odometry errors are corrected in real time using global pose graph SLAM. An AR demonstration is provided to showcase its capability.

open source code: https://github.com/HKUST-Aerial-Robotics/VINS-Mobile


VINS-Fusion: An Optimization-based Multi-Sensor State Estimator

VINS-Fusion is an optimization-based multi-sensor state estimator that achieves accurate self-localization for autonomous applications (drones, cars, and AR/VR). It extends VINS-Mono to support multiple visual-inertial sensor configurations (mono camera + IMU, stereo cameras + IMU, and even stereo cameras only). We also show a toy example of fusing VINS with GPS.

open source code: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion
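One core sub-problem in fusing VINS with GPS is aligning the drift-prone local odometry frame to the global GPS frame. Below is a minimal sketch of that alignment step: a closed-form 2-D rigid fit (yaw + translation) of a local trajectory to GPS fixes via the Kabsch/Procrustes solution. This is only an illustration of the idea; the function name is hypothetical, and VINS-Fusion's actual global fusion solves a pose-graph optimization rather than a one-shot fit.

```python
import numpy as np

def align_local_to_gps(local_xy, gps_xy):
    """Least-squares 2-D rigid alignment (rotation + translation) of a
    local VIO trajectory to GPS positions, so that gps ~= R @ local + t.
    Hypothetical helper sketching one step of VINS + GPS fusion."""
    local_xy = np.asarray(local_xy, dtype=float)
    gps_xy = np.asarray(gps_xy, dtype=float)
    mu_l = local_xy.mean(axis=0)
    mu_g = gps_xy.mean(axis=0)
    L = local_xy - mu_l                 # centered local points
    G = gps_xy - mu_g                   # centered GPS points
    H = L.T @ G                         # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                      # optimal rotation (Kabsch)
    if np.linalg.det(R) < 0:            # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_l                 # translation from the centroids
    return R, t
```

In a full fusion pipeline this alignment would be re-estimated (or optimized jointly with the poses) as new GPS fixes arrive, pulling the locally drifting VIO trajectory back toward the global frame.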


Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving

This project presents a stereo vision-based approach that jointly tracks the camera ego-motion and 3D semantic objects in dynamic autonomous driving scenarios.