How Can Deep Neural Networks Be Generated Efficiently for Devices with Limited Resources?

Date: 12.07.2018


Abstract

Despite the increasing hardware capabilities of embedded devices, running a Deep Neural Network (DNN) in such systems remains a challenge. As the trend in DNNs is toward more complex architectures, the computation time on low-resource devices increases dramatically because of their limited memory. Moreover, the physical memory needed to store the network parameters grows with the model's complexity, making it difficult to deploy a feasible model on the target hardware. Although a compressed model helps reduce RAM consumption, a large number of consecutive deep layers still increases the computation time. Despite the extensive literature on DNN optimization, there is a lack of documentation on the practical and efficient deployment of these networks. In this paper, we propose an efficient model-generation process based on analyzing the parameters and their impact, and we address the design of a simple and comprehensive pipeline for optimal model deployment.
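
The compression the abstract refers to typically combines weight pruning with low-bit-width quantization of the stored parameters. As a minimal, hypothetical sketch (not the pipeline proposed in the paper), the snippet below uses PyTorch's pruning and dynamic-quantization utilities on a toy fully connected network whose layer sizes are chosen purely for illustration:

# Illustrative compression sketch (assumed toy model, not the authors' pipeline):
# magnitude pruning followed by 8-bit dynamic quantization to shrink the parameter
# footprint before deployment on a resource-limited device.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small stand-in network; the real architecture is not specified here.
model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Zero out the 50% smallest-magnitude weights in each Linear layer (unstructured pruning).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Dynamic post-training quantization: store Linear weights as int8 instead of float32.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized)

Pruning alone reduces the number of effective parameters, while quantization shrinks the bytes per parameter; as the abstract notes, neither step removes the latency cost of traversing many consecutive layers, which is why the deployment pipeline also has to consider the architecture itself.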

BibTeX

@Article {
title = {How Can Deep Neural Networks Be Generated Efficiently for Devices with Limited Resources?},
pages = {24-33},
keywords = {Deep Compression, Deep learning, Computation efficiency},
isbn = {978-3-319-94543-9},
doi = {10.1007/978-3-319-94544-6_3},
date = {2018-07-12},
}
