Multimodal Deep Learning for Advanced Driving Systems

Date: 12.07.2018


Abstract

However, these approaches are limited to covering certain functionalities.
The potential of multimodal sensor fusion has been scarcely exploited, although research vehicles are commonly equipped with various sensor types. How to combine their data to achieve a complex scene analysis and thereby improve driving robustness is still an open question. While several surveys on intelligent vehicles or deep learning exist, to date there is no survey on multimodal deep learning for advanced driving. This paper attempts to narrow this gap by providing the first review that analyzes existing literature and two indispensable elements: sensors and datasets. We also provide our insights on future challenges and work to be done.

BibTeX

@Article{
title = {Multimodal Deep Learning for Advanced Driving Systems},
pages = {95-105},
keywords = {Autonomous Driving, ADAS, Deep Learning, Sensor Fusion},
abstract = {However, these approaches are limited to covering certain functionalities. The potential of multimodal sensor fusion has been scarcely exploited, although research vehicles are commonly equipped with various sensor types. How to combine their data to achieve a complex scene analysis and thereby improve driving robustness is still an open question. While several surveys on intelligent vehicles or deep learning exist, to date there is no survey on multimodal deep learning for advanced driving. This paper attempts to narrow this gap by providing the first review that analyzes existing literature and two indispensable elements: sensors and datasets. We also provide our insights on future challenges and work to be done.},
isbn = {978-3-319-94543-9},
doi = {10.1007/978-3-319-94544-6_10},
date = {2018-07-12},
}
Vicomtech

Parque Científico y Tecnológico de Gipuzkoa,
Paseo Mikeletegi 57,
20009 Donostia / San Sebastián (Spain)

+(34) 943 309 230

close overlay