Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf

Authors: Unai Elordi Hidalgo, Luis Unzueta Irurtia, Jon Goenetxea Imaz, Sergio Sanchez Carballido, Oihana Otaegui Madurga, Ignacio Arganda-Carreras

Date: 01.01.2021

IEEE Software


Abstract

We provide a novel methodology for decomposing the current MLPerf benchmark into the serverless function execution model. We have tested our approach on Amazon Lambda to benchmark the processing capabilities of the OpenCV and OpenVINO inference engines.
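
The article itself contains no code, but as a rough illustration of the execution model described in the abstract, the sketch below shows how a single MLPerf-style inference query could be packaged as an AWS Lambda handler that times one OpenCV DNN forward pass. The model file names, input size, and response format are assumptions made for this example only and are not the setup used by the authors.

```python
import json
import time

import cv2
import numpy as np

# Hypothetical model artifacts bundled with the function package;
# the paths and model format are placeholders, not those from the paper.
MODEL_PROTO = "model/deploy.prototxt"
MODEL_WEIGHTS = "model/weights.caffemodel"

# Load the network once per container so that warm invocations
# measure pure inference latency rather than model-loading time.
net = cv2.dnn.readNetFromCaffe(MODEL_PROTO, MODEL_WEIGHTS)


def handler(event, context):
    """Run one inference query and return its latency in milliseconds."""
    # For illustration, generate a synthetic input instead of fetching
    # a real sample from object storage.
    image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)
    blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255, size=(224, 224))

    net.setInput(blob)
    start = time.perf_counter()
    output = net.forward()
    latency_ms = (time.perf_counter() - start) * 1000.0

    return {
        "statusCode": 200,
        "body": json.dumps({
            "latency_ms": latency_ms,
            "output_shape": list(output.shape),
        }),
    }
```

Invoking such a function repeatedly and aggregating the returned latencies would approximate an MLPerf-style measurement over serverless workers; the actual decomposition and metrics used in the paper may differ.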

BIB_text

@Article {
  title = {Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf},
  author = {Unai Elordi Hidalgo and Luis Unzueta Irurtia and Jon Goenetxea Imaz and Sergio Sanchez Carballido and Oihana Otaegui Madurga and Ignacio Arganda-Carreras},
  journal = {IEEE Software},
  volume = {38},
  pages = {81-87},
  keywords = {Benchmark testing, FaaS, Task analysis, Engines, Computer architecture, Throughput, Computational modeling},
  abstract = {We provide a novel decomposition methodology from the current MLPerf benchmark to the serverless function execution model. We have tested our approach in Amazon Lambda to benchmark the processing capabilities of OpenCV and OpenVINO inference engines.},
  doi = {10.1109/MS.2020.3030199},
  date = {2021-01-01},
}