Vicomtech presents its latest ADAS developments at the ITSWC 2018 in Copenhagen, 17-21 September
As in previous editions, Vicomtech will participate in the annual ITS World Congress, the most important international ITS event, which this year will be held in Copenhagen from 17th to 21st September.
For this special event, the centre has designed a 27 m² booth to show the world its most advanced technological demos: the Driver Monitoring System, the Pedestrian Detection System, the Cloud-LSVA project results for analysing the scenes captured by on-board sensors in cars, and its camera and LiDAR data alignment system. The technologies behind these demos are based on Viulib, a set of precompiled libraries developed by Vicomtech that simplifies the building of complex computer vision, machine learning and artificial intelligence solutions.
In addition, Vicomtech will gather the Advisory Board of the VI-DAS European project at a private event within the ITSWC to discuss the results achieved in the project with partners and other stakeholders.
The scientific and technological contribution of Vicomtech to the ITSWC will also be represented by Dr. Marcos Nieto, who will give a scientific talk in the SIS77 session on “Automated vehicle data sharing enabled by feature extraction and anonymization”, scheduled for 20th September at 15:30.
The demos that visitors will have the opportunity to test in real time on site are the following:
Driver Monitoring System (DMS)
Road accidents remain a major public safety concern, and the main cause of accidents is human error. Intelligent driver systems that monitor the driver’s state and behaviour provide crucial information for understanding the driving situation and identifying potential hazards. Vicomtech develops different vision-based Driver Monitoring Systems (DMS), such as Driver Fatigue Warning and Gaze Estimation systems. These DMS are also key on the road towards semi-autonomous vehicles (L3) when used in combination with other systems that monitor the environment of the vehicle. Vicomtech works on this next-gen 720° (inside plus outside) connected Advanced Driver Assistance System within projects such as VI-DAS.
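As an illustration of the kind of signal a vision-based fatigue-warning system can compute, the sketch below implements PERCLOS, a standard drowsiness metric: the fraction of time within a sliding window during which the eyes are more than 80% closed. The threshold and alerting level are illustrative assumptions, not Vicomtech's actual parameters.

```python
# Illustrative sketch of PERCLOS, a standard drowsiness metric used in
# vision-based driver fatigue monitoring. The 0.2 openness threshold and
# the 0.15 alert level below are assumed values for illustration only.

def perclos(eye_openness, closed_threshold=0.2):
    """eye_openness: per-frame values in [0, 1], where 1 = fully open.
    Returns the fraction of frames in which the eyes are >80% closed."""
    if not eye_openness:
        return 0.0
    closed = sum(1 for o in eye_openness if o < closed_threshold)
    return closed / len(eye_openness)

# Example: a 10-frame window containing 3 nearly-closed frames.
window = [0.9, 0.8, 0.1, 0.05, 0.1, 0.9, 0.85, 0.9, 0.8, 0.9]
score = perclos(window)   # 3 closed frames / 10 frames = 0.3
alert = score > 0.15      # fire a fatigue warning (assumed alert level)
print(score, alert)       # 0.3 True
```

In a real system the per-frame openness values would come from an eye-landmark detector, and the window would slide over time.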
Pedestrian Detection System

Pedestrian Detection is fundamental within Advanced Driver Assistance Systems (ADAS) that aim to enhance road safety. Pedestrian Detection systems must be reliable and robust across the vast variety of human appearances and poses, environment contexts and driving scenarios. Additionally, such systems must run in real time to maximize the reaction time of the driver or autonomous vehicle. Vicomtech works on developing innovative vision-based solutions in this field, such as the ones integrated in the AUTOPILOT project. The solutions are based on deep-learning technologies, with special focus on optimization for deploying the systems on on-vehicle platforms with limited computational resources.
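Deep-learning detectors typically emit several overlapping candidate boxes per pedestrian, which a post-processing step must reduce to one detection each. A common way to do this is greedy non-maximum suppression (NMS), sketched below in plain Python; the boxes, scores and IoU threshold are made up for illustration and do not reflect Vicomtech's actual pipeline.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    then drop any remaining box that overlaps it above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of the same pedestrian plus a distinct one.
boxes = [(10, 10, 50, 100), (12, 12, 52, 102), (200, 20, 240, 110)]
scores = [0.9, 0.75, 0.8]
print(nms(boxes, scores))  # [0, 2]: the duplicate box 1 is suppressed
```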
Semi-automatic Annotation for ADAS
The development and testing of Advanced Driver Assistance Systems (ADAS) require large quantities of annotated data captured by on-board sensors in vehicles. Such annotated data is essential for training the core algorithms, and it also constitutes the golden reference used to assess the performance of the systems. However, manually annotating huge volumes of data (video, LiDAR, etc.) for training autonomous vehicles is a Herculean, extremely resource-consuming task.
With cloud-enabled video analysis technology, along with tools to fuse video with other data sources, these problems can be overcome. Vicomtech works in projects such as Cloud-LSVA (Cloud Large Scale Video Analysis) to provide automatic and semi-automatic annotation tools that cope with the petabyte-scale volumes of data to be analysed.
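A simple example of what "semi-automatic" annotation can mean in practice: an annotator labels a bounding box on a few keyframes, and the tool fills in the frames in between. The sketch below uses plain linear interpolation between keyframe boxes; it is a toy illustration of the idea, not the Cloud-LSVA toolchain.

```python
def interpolate_box(box_a, box_b, t):
    """Linearly interpolate two (x, y, w, h) boxes, with t in [0, 1]."""
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

def propagate(keyframes, frame):
    """keyframes: {frame_index: (x, y, w, h)} annotated by hand.
    Returns an interpolated box for a frame between two keyframes."""
    before = max(i for i in keyframes if i <= frame)
    after = min(i for i in keyframes if i >= frame)
    if before == after:              # the frame is itself a keyframe
        return keyframes[before]
    t = (frame - before) / (after - before)
    return interpolate_box(keyframes[before], keyframes[after], t)

# Hand-annotate frames 0 and 10; the tool fills in frames 1..9.
keyframes = {0: (100, 50, 40, 80), 10: (120, 50, 40, 80)}
print(propagate(keyframes, 5))  # (110.0, 50.0, 40.0, 80.0)
```

Real annotation tools refine such propagated boxes with trackers or detectors, but even plain interpolation already cuts the number of frames an annotator must touch by an order of magnitude.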
Camera and LiDAR data alignment
Advanced Driver Assistance Systems (ADAS) rely on data captured by sensors equipped in vehicles. These sensors differ in nature, range and precision (cameras, RADAR, LiDAR…). For object detection, for instance, cameras provide rich texture-based and color-based information, which LiDAR generally lacks. On the other hand, LiDAR works in low visibility, such as at night or in moderate fog or rain. Furthermore, for estimating an object's position relative to the sensor, LiDAR provides far more accurate spatial coordinates than a camera. Since camera and LiDAR each have their advantages and disadvantages, the ideal fusion algorithm should fully exploit the strengths of both and compensate for their weaknesses. This illustrates the large improvements that can be obtained by fusing data from different sensors instead of processing each separately. Vicomtech works on developing solutions for aligning 2D and 3D data in order to enhance the performance of ADAS.
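Aligning 2D and 3D data usually boils down to projecting each LiDAR point into the image using the sensors' calibration: an extrinsic rigid transform [R | t] from the LiDAR frame to the camera frame, followed by the camera's intrinsic matrix K (the standard pinhole model). The calibration values below are invented for illustration and stand in for a real extrinsic/intrinsic calibration.

```python
import numpy as np

# Standard pinhole projection of a LiDAR point into the image plane.
# All calibration values here are illustrative assumptions.
K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # LiDAR-to-camera rotation (identity for simplicity)
t = np.array([0.0, 0.0, 0.2])          # LiDAR-to-camera translation, metres

def project(point_lidar):
    """Map a 3D LiDAR point (metres) to pixel coordinates (u, v)."""
    p_cam = R @ point_lidar + t        # 1. transform into the camera frame
    u, v, w = K @ p_cam                # 2. homogeneous image coordinates
    return u / w, v / w                # 3. perspective division

# A point 10.2 m in front of the camera lands near the image centre.
u, v = project(np.array([1.0, 0.5, 10.0]))
print(round(u, 1), round(v, 1))        # 398.4 279.2
```

Once LiDAR points carry pixel coordinates, each camera detection can be given an accurate depth, which is exactly the kind of 2D-3D alignment the demo addresses.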
The 2018 edition of the ITSWC offers the ideal setting to test Vicomtech's demos in real time and to learn more about the active role the centre plays in developing and applying the ITS technology of the future.