3D object detection from LiDAR data using distance dependent feature extraction

Authors: Guus Engels, Nerea Aranjuelo Ansa, Ignacio Arganda, Marcos Nieto Doncel, Oihana Otaegui Madurga

Date: 02.05.2020


Abstract

This paper presents a new approach to 3D object detection that leverages properties of the LiDAR sensor. State-of-the-art detectors use network architectures based on assumptions that hold for natural images, but LiDAR data is fundamentally different: the features that describe an object change as the object moves farther from the sensor. Most detectors nevertheless extract features with a single shared filter kernel, which ignores this range-dependent nature of LiDAR features.
To demonstrate this, the training data is split into two ranges. The first range consists of objects whose centers lie less than 25 meters from the LiDAR; the second contains all objects farther than 25 meters away. Combining the results of these detectors, each trained on a subset of the full dataset, outperforms the same network trained on the full dataset for both ranges and all difficulties on the KITTI benchmark. Additional experiments compare the effect of using different input features when compressing the point cloud to an image. The different input feature configurations yield similar results, which indicates that the network focuses more on the shape and structure of objects than on the exact values in the image. This work shows how 3D object detectors can be adjusted to account for the fact that features change over distance in point cloud data: training separate networks for close-range and long-range objects improves performance for all difficulties, by 0.4% to 3.3%.
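The two ideas in the abstract can be sketched briefly: partitioning 3D box labels by distance from the sensor at the paper's 25 m threshold, and compressing a KITTI-style point cloud (N x 4: x, y, z, reflectance) into a bird's-eye-view image. This is a minimal illustration, not the authors' implementation; all function names, grid ranges, and the choice of maximum height as the image channel are assumptions (the paper also considers other input features such as intensity or point density).

```python
import numpy as np

RANGE_SPLIT_M = 25.0  # threshold used in the paper


def split_labels_by_range(box_centers, threshold=RANGE_SPLIT_M):
    """Partition 3D box centers into close-range and long-range subsets
    by Euclidean distance from the LiDAR origin (illustrative helper)."""
    centers = np.asarray(box_centers, dtype=float)
    dist = np.linalg.norm(centers, axis=1)
    return centers[dist < threshold], centers[dist >= threshold]


def bev_height_map(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), res=0.1):
    """Compress a point cloud to a single-channel bird's-eye-view image
    holding the maximum point height per grid cell. Cells with no points
    stay at 0; grid extents and resolution are illustrative choices."""
    pts = np.asarray(points, dtype=float)
    mask = ((pts[:, 0] >= x_range[0]) & (pts[:, 0] < x_range[1]) &
            (pts[:, 1] >= y_range[0]) & (pts[:, 1] < y_range[1]))
    pts = pts[mask]
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    img = np.zeros((h, w), dtype=np.float32)
    ix = ((pts[:, 0] - x_range[0]) / res).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / res).astype(int)
    np.maximum.at(img, (ix, iy), pts[:, 2])  # keep max height per cell
    return img


# Example: one object at ~10 m and one at ~40 m from the sensor
close, far = split_labels_by_range([[6.0, 8.0, 0.0], [40.0, 3.0, 0.5]])
```

In the paper's setup, each subset would then be used to train its own detector, and the two detectors' predictions are combined at inference time.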

BibTeX

@article{
title = {3D object detection from LiDAR data using distance dependent feature extraction},
pages = {289-300},
keywords = {LiDAR, 3D object detection, feature extraction, point cloud},
abstract = {This paper presents a new approach to 3D object detection that leverages properties of the LiDAR sensor. State-of-the-art detectors use network architectures based on assumptions that hold for natural images, but LiDAR data is fundamentally different: the features that describe an object change as the object moves farther from the sensor. Most detectors nevertheless extract features with a single shared filter kernel, which ignores this range-dependent nature of LiDAR features. To demonstrate this, the training data is split into two ranges. The first range consists of objects whose centers lie less than 25 meters from the LiDAR; the second contains all objects farther than 25 meters away. Combining the results of these detectors, each trained on a subset of the full dataset, outperforms the same network trained on the full dataset for both ranges and all difficulties on the KITTI benchmark. Additional experiments compare the effect of using different input features when compressing the point cloud to an image. The different input feature configurations yield similar results, which indicates that the network focuses more on the shape and structure of objects than on the exact values in the image. This work shows how 3D object detectors can be adjusted to account for the fact that features change over distance in point cloud data: training separate networks for close-range and long-range objects improves performance for all difficulties, by 0.4% to 3.3%.},
isbn = {978-989758419-0},
date = {2020-05-02},
}
Vicomtech

Gipuzkoa Science and Technology Park,
Mikeletegi Pasealekua 57,
20009 Donostia / San Sebastián (Spain)

+(34) 943 309 230

Zorrotzaurreko Erribera 2, Deusto,
48014 Bilbao (Spain)
