Vision-Enhanced Low-Cost Localization in Crowdsourced Maps

Authors: Benedict Flade, Axel Koppert, Gorka Vélez Isasmendi, Anweshan Das, David Betaille, Gijs Dubbelman, Oihana Otaegui Madurga, Julian Eggert

Date: 12.06.2020

IEEE Intelligent Transportation Systems Magazine


Abstract

The lane-level localization of vehicles with low-cost sensors is a challenging task. In situations in which Global Navigation Satellite Systems (GNSSs) suffer from weak observation geometry or from the influence of reflected signals, the fusion of heterogeneous information presents a suitable approach for improving the localization accuracy. We propose a solution based on a monocular front-facing camera, a low-cost inertial measurement unit (IMU), and a single-frequency GNSS receiver. The sensor data fusion is implemented as a tightly coupled Kalman filter that corrects the IMU-based trajectory with GNSS observations while employing European Geostationary Navigation Overlay Service (EGNOS) correction data. Further, we consider vision-based complementary data that serve as an additional source of information. In contrast to other approaches, the camera is not used to infer the motion of the vehicle but rather to directly correct the localization results using map information. More specifically, the so-called camera-to-map alignment is done by comparing virtual 3D views (candidates) created from projected map data with lane geometry features that are extracted from the camera image. One strength of the proposed solution is its compatibility with state-of-the-art map data, which are publicly available from different sources. We validate the approach on real-world data recorded in The Netherlands and show that it presents a promising and cost-efficient means to support future advanced driver assistance systems.
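To make the camera-to-map alignment concrete, below is a minimal Python sketch of the underlying idea, not the authors' implementation: 3D lane geometry taken from the map is projected into the image plane for a set of candidate poses, and each resulting virtual view is scored against a binary lane-marking mask extracted from the camera frame. The pinhole projection, the overlap score, and all function names are illustrative assumptions.

import numpy as np

def project_points(K, R, t, points_world):
    # Transform map points from the world frame into the camera frame,
    # discard points behind the camera, and apply a pinhole projection.
    p_cam = (R @ points_world.T + t.reshape(3, 1)).T
    p_cam = p_cam[p_cam[:, 2] > 0.1]
    p_img = (K @ p_cam.T).T
    return p_img[:, :2] / p_img[:, 2:3]

def candidate_score(lane_mask, pixels):
    # Fraction of projected map points that land on detected lane pixels.
    h, w = lane_mask.shape
    px = np.rint(pixels).astype(int)
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    px = px[inside]
    if px.shape[0] == 0:
        return 0.0
    return float(lane_mask[px[:, 1], px[:, 0]].mean())

def align_to_map(K, candidate_poses, map_lane_points, lane_mask):
    # Render a virtual view of the map lanes for each candidate pose (R, t)
    # and keep the pose whose projection best overlaps the observed lanes.
    scores = [candidate_score(lane_mask, project_points(K, R, t, map_lane_points))
              for (R, t) in candidate_poses]
    best = int(np.argmax(scores))
    return candidate_poses[best], scores[best]

In such a scheme, the candidate poses would plausibly be small perturbations around the Kalman filter estimate, with the best-aligned pose fed back as a correction; those details are assumptions here, not statements about the paper.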

BibTeX

@Article{Flade2020VisionEnhanced,
  title = {Vision-Enhanced Low-Cost Localization in Crowdsourced Maps},
  author = {Benedict Flade and Axel Koppert and Gorka Vélez Isasmendi and Anweshan Das and David Betaille and Gijs Dubbelman and Oihana Otaegui Madurga and Julian Eggert},
  journal = {IEEE Intelligent Transportation Systems Magazine},
  volume = {12},
  pages = {70--80},
  keywords = {Global navigation satellite system, Cameras, Sensors, Three-dimensional displays, Geometry, Receivers, Meters, Crowdsourcing},
  abstract = {The lane-level localization of vehicles with low-cost sensors is a challenging task. In situations in which Global Navigation Satellite Systems (GNSSs) suffer from weak observation geometry or from the influence of reflected signals, the fusion of heterogeneous information presents a suitable approach for improving the localization accuracy. We propose a solution based on a monocular front-facing camera, a low-cost inertial measurement unit (IMU), and a single-frequency GNSS receiver. The sensor data fusion is implemented as a tightly coupled Kalman filter that corrects the IMU-based trajectory with GNSS observations while employing European Geostationary Navigation Overlay Service (EGNOS) correction data. Further, we consider vision-based complementary data that serve as an additional source of information. In contrast to other approaches, the camera is not used to infer the motion of the vehicle but rather to directly correct the localization results using map information. More specifically, the so-called camera-to-map alignment is done by comparing virtual 3D views (candidates) created from projected map data with lane geometry features that are extracted from the camera image. One strength of the proposed solution is its compatibility with state-of-the-art map data, which are publicly available from different sources. We validate the approach on real-world data recorded in The Netherlands and show that it presents a promising and cost-efficient means to support future advanced driver assistance systems.},
  doi = {10.1109/MITS.2020.2994055},
  date = {2020-06-12},
}