Lane Detection Based Camera to Map Alignment Using Open-Source Map Data

Authors: Marcos Nieto Doncel, Gorka Vélez Isasmendi, Benedict Flade, Julian Eggert

Date: 05.11.2018


Abstract

For accurate vehicle self-localization, many approaches rely on the match between sophisticated 3D map data and sensor information obtained from laser scanners or camera images. However, when depending on highly accurate map data, every small change in the environment has to be detected and the corresponding map section needs to be updated. As an alternative, we propose an approach that provides map-relative lane-level localization without requiring extensive sensor equipment, either for generating the maps or for aligning map to sensor data. It uses freely available crowdsourced map data, which is enhanced and stored in a graph-based relational local dynamic map (R-LDM). Based on a rough position estimate provided by Global Navigation Satellite Systems (GNSS) such as GPS or Galileo, we align visual information with map data that is dynamically queried from the R-LDM. This is done by comparing virtual 3D views (so-called candidates), created from projected map data, with lane geometry data extracted from the image of a front-facing camera. More specifically, we extract explicit lane marking information from the real-world view using a lane-detection algorithm that fits lane markings to a curvilinear model. The position correction relative to the initial guess is determined by a best-match search for the virtual view that best fits the processed real-world view. Evaluations performed on data recorded in The Netherlands show that our algorithm presents a promising approach to allow lane-level localization using state-of-the-art equipment and freely available map data.
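The best-match search described above can be illustrated with a minimal sketch: candidate poses around a rough GNSS fix are scored by how well the map-projected lane geometry overlays the lane markings detected in the camera image. All names, the one-dimensional lateral search, and the squared-distance scoring metric are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def score_candidate(map_lane_xy, detected_lane_xy):
    """Lower is better: mean squared distance between corresponding
    lane-geometry points (a stand-in for a real image-space metric)."""
    return float(np.mean(np.sum((map_lane_xy - detected_lane_xy) ** 2, axis=1)))

def best_match_offset(map_lane_xy, detected_lane_xy, lateral_offsets):
    """Shift the map-projected lane laterally by each candidate offset
    and keep the offset whose projection best fits the detections."""
    scores = [
        score_candidate(map_lane_xy + np.array([dx, 0.0]), detected_lane_xy)
        for dx in lateral_offsets
    ]
    best = int(np.argmin(scores))
    return lateral_offsets[best], scores[best]

# Synthetic example: the initial GNSS fix is 0.8 m off laterally.
y = np.linspace(0.0, 30.0, 50)
detected_lane = np.stack([np.zeros_like(y), y], axis=1)   # markings from the camera
map_lane = detected_lane - np.array([0.8, 0.0])           # map projected at the wrong pose
candidates = np.arange(-1.5, 1.51, 0.1)                   # lateral search grid [m]

dx, err = best_match_offset(map_lane, detected_lane, candidates)
print(round(dx, 2))  # → 0.8
```

In the paper the candidates are full virtual 3D views rendered from the R-LDM, so the search would cover more than a lateral offset and the score would compare curvilinear lane models rather than point lists; the grid-search-and-argmin structure is the part this sketch conveys.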

BibTeX

@Article{flade2018lane,
title = {Lane Detection Based Camera to Map Alignment Using Open-Source Map Data},
pages = {890-897},
keywords = {
automobiles;cameras;computer vision;geometry;image sensors;object detection;road safety;road vehicles;sensor fusion;traffic engineering computing;lane geometry data;lane-detection algorithm;map alignment;open-source map data;sensor information;map-relativ
},
abstract = {

For accurate vehicle self-localization, many approaches rely on the match between sophisticated 3D map data and sensor information obtained from laser scanners or camera images. However, when depending on highly accurate map data, every small change in the environment has to be detected and the corresponding map section needs to be updated. As an alternative, we propose an approach that provides map-relative lane-level localization without requiring extensive sensor equipment, either for generating the maps or for aligning map to sensor data. It uses freely available crowdsourced map data, which is enhanced and stored in a graph-based relational local dynamic map (R-LDM). Based on a rough position estimate provided by Global Navigation Satellite Systems (GNSS) such as GPS or Galileo, we align visual information with map data that is dynamically queried from the R-LDM. This is done by comparing virtual 3D views (so-called candidates), created from projected map data, with lane geometry data extracted from the image of a front-facing camera. More specifically, we extract explicit lane marking information from the real-world view using a lane-detection algorithm that fits lane markings to a curvilinear model. The position correction relative to the initial guess is determined by a best-match search for the virtual view that best fits the processed real-world view. Evaluations performed on data recorded in The Netherlands show that our algorithm presents a promising approach to allow lane-level localization using state-of-the-art equipment and freely available map data.


},
isbn = {978-1-7281-0323-5},
doi = {10.1109/ITSC.2018.8569304},
date = {2018-11-05},
}
Vicomtech

Parque Científico y Tecnológico de Gipuzkoa,
Paseo Mikeletegi 57,
20009 Donostia / San Sebastián (Spain)

+(34) 943 309 230
