TY - GEN
T1 - Exploiting Sparse Semantic HD Maps for Self-Driving Vehicle Localization
AU - Ma, Wei-Chiu
AU - Tartavull, Ignacio
AU - Barsan, Ioan Andrei
AU - Wang, Shenlong
AU - Bai, Min
AU - Mattyus, Gellert
AU - Homayounfar, Namdar
AU - Lakshmikanth, Shrinidhi Kowshika
AU - Pokrovsky, Andrei
AU - Urtasun, Raquel
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - In this paper we propose a novel semantic localization algorithm that exploits multiple sensors and has precision on the order of a few centimeters. Our approach does not require detailed knowledge about the appearance of the world, and our maps require orders of magnitude less storage than maps utilized by traditional geometry- and LiDAR intensity-based localizers. This is important as self-driving cars need to operate in large environments. Towards this goal, we formulate the problem in a Bayesian filtering framework, and exploit lanes, traffic signs, as well as vehicle dynamics to localize robustly with respect to a sparse semantic map. We validate the effectiveness of our method on a new highway dataset consisting of 312km of roads. Our experiments show that the proposed approach is able to achieve 0.05m lateral accuracy and 1.12m longitudinal accuracy on average while taking up only 0.3% of the storage required by previous LiDAR intensity-based approaches.
AB - In this paper we propose a novel semantic localization algorithm that exploits multiple sensors and has precision on the order of a few centimeters. Our approach does not require detailed knowledge about the appearance of the world, and our maps require orders of magnitude less storage than maps utilized by traditional geometry- and LiDAR intensity-based localizers. This is important as self-driving cars need to operate in large environments. Towards this goal, we formulate the problem in a Bayesian filtering framework, and exploit lanes, traffic signs, as well as vehicle dynamics to localize robustly with respect to a sparse semantic map. We validate the effectiveness of our method on a new highway dataset consisting of 312km of roads. Our experiments show that the proposed approach is able to achieve 0.05m lateral accuracy and 1.12m longitudinal accuracy on average while taking up only 0.3% of the storage required by previous LiDAR intensity-based approaches.
UR - http://www.scopus.com/inward/record.url?scp=85081163731&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081163731&partnerID=8YFLogxK
U2 - 10.1109/IROS40897.2019.8968122
DO - 10.1109/IROS40897.2019.8968122
M3 - Conference contribution
AN - SCOPUS:85081163731
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 5304
EP - 5311
BT - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
Y2 - 3 November 2019 through 8 November 2019
ER -