Inertial-aided vision-based localization and mapping in a riverine environment with reflection measurements

Junho Yang, Ashwin Dani, Soon-Jo Chung, Seth Hutchinson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents an inertial-aided vision-based localization and mapping algorithm for an unmanned aerial vehicle (UAV) that can operate in a GPS-denied riverine environment. We take vision measurements from features surrounding the river and from their corresponding points reflected in the water. We apply a robot-centric mapping framework so that the uncertainty of the features is referenced to the UAV body frame, and we estimate the 3D position of point features while estimating the location of the UAV. We demonstrate localization and mapping results with sensors on our quadcopter UAV platform at the Boneyard Creek on the University of Illinois at Urbana-Champaign campus. The UAV is equipped with a lightweight monocular camera, an inertial measurement unit (IMU) that contains a three-axis magnetometer, an ultrasound altimeter, and an onboard computer. To our knowledge, this is the first reported result of performing localization and mapping by exploiting multiple views with reflections of features in a riverine environment.
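
The central geometric observation behind the reflection measurements is that a planar water surface acts as a mirror: a feature's reflection is equivalent to a direct observation from a virtual camera mirrored about the water plane, so a single image of a feature and its reflection forms a stereo pair whose baseline is twice the camera altitude (here supplied by the ultrasound altimeter). Below is a minimal Python sketch of this reflection-based triangulation, assuming a flat water surface at z = 0, camera rays already rotated into the world frame, and an exactly known camera position; the function name and numbers are illustrative, not taken from the paper.

```python
import numpy as np

def triangulate_with_reflection(cam_pos, ray_direct, ray_reflect):
    """Recover a feature's 3D position from the ray to the feature and the
    ray to its reflection in a planar water surface at z = 0 (z-axis up).

    The reflected observation is treated as a direct observation from a
    virtual camera mirrored about the water plane, giving a stereo pair
    with a baseline of twice the camera altitude.
    """
    M = np.diag([1.0, 1.0, -1.0])              # reflection about z = 0
    origins = [np.asarray(cam_pos), M @ np.asarray(cam_pos)]
    dirs = [np.asarray(ray_direct), M @ np.asarray(ray_reflect)]

    # Midpoint (least-squares) intersection of the two rays:
    # minimize sum_i || (I - d_i d_i^T)(p - o_i) ||^2 over p.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)         # projector orthogonal to ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Hypothetical example: camera 2 m above the water, feature at (5, 1, 3).
cam = np.array([0.0, 0.0, 2.0])
p_true = np.array([5.0, 1.0, 3.0])
mirror = np.diag([1.0, 1.0, -1.0])
ray_to_feature = p_true - cam                  # direct line of sight
ray_to_reflection = mirror @ p_true - cam      # line of sight to mirror image
print(triangulate_with_reflection(cam, ray_to_feature, ray_to_reflection))
# -> approximately [5. 1. 3.]
```

The paper's estimator fuses such measurements with IMU data in a robot-centric filter rather than triangulating each feature in isolation; the sketch only illustrates the reflection geometry that makes single-camera depth recovery possible.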

Original language: English (US)
Title of host publication: AIAA Guidance, Navigation, and Control (GNC) Conference
State: Published - 2013
Event: AIAA Guidance, Navigation, and Control (GNC) Conference - Boston, MA, United States
Duration: Aug 19 2013 - Aug 22 2013

Publication series

Name: AIAA Guidance, Navigation, and Control (GNC) Conference

Other

Other: AIAA Guidance, Navigation, and Control (GNC) Conference
Country/Territory: United States
City: Boston, MA
Period: 8/19/13 - 8/22/13

ASJC Scopus subject areas

  • Aerospace Engineering
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
