Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf

Authors: Unai Elordi Hidalgo, Luis Unzueta Irurtia, Jon Goenetxea Imaz, Sergio Sanchez Carballido, Oihana Otaegui Madurga, Ignacio Arganda-Carreras

Date: 01.01.2021

IEEE Software


Abstract

We provide a novel decomposition methodology from the current MLPerf benchmark to the serverless function execution model. We have tested our approach in Amazon Lambda to benchmark the processing capabilities of OpenCV and OpenVINO inference engines.
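As an illustration of the kind of measurement the abstract describes, the sketch below shows a minimal AWS Lambda-style handler that times a single inference per invocation, in the spirit of MLPerf's single-stream latency scenario. This is not the authors' code: `run_inference` is a hypothetical stand-in for an actual OpenCV or OpenVINO inference call.

```python
# Illustrative sketch (not the paper's implementation): a Lambda-style
# handler that reports per-invocation inference latency, as a serverless
# benchmark probe in the MLPerf single-stream style would.
import json
import time


def run_inference(payload):
    # Hypothetical placeholder for a real inference engine call
    # (e.g., an OpenCV DNN or OpenVINO forward pass).
    time.sleep(0.001)
    return {"label": "example", "score": 0.9}


def handler(event, context=None):
    # Time exactly one inference and return the latency with the result,
    # so the benchmark driver can aggregate latencies across invocations.
    start = time.perf_counter()
    result = run_inference(event.get("input"))
    latency_ms = (time.perf_counter() - start) * 1000.0
    return {
        "statusCode": 200,
        "body": json.dumps({"result": result, "latency_ms": latency_ms}),
    }


if __name__ == "__main__":
    response = handler({"input": "sample"})
    print(response["statusCode"])
```

In a real deployment the driver would invoke the function repeatedly and derive latency percentiles and throughput from the returned measurements; the packaging of the model and engine into the function image is the part the paper's decomposition methodology addresses.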

BIB_text

@article{elordi2021benchmarking,
  author = {Unai Elordi Hidalgo and Luis Unzueta Irurtia and Jon Goenetxea Imaz and Sergio Sanchez Carballido and Oihana Otaegui Madurga and Ignacio Arganda-Carreras},
  title = {Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf},
  journal = {IEEE Software},
  volume = {38},
  pages = {81-87},
  keywords = {Benchmark testing, FAA, Task analysis, Engines, Computer architecture, Throughput, Computational modeling},
  abstract = {We provide a novel decomposition methodology from the current MLPerf benchmark to the serverless function execution model. We have tested our approach in Amazon Lambda to benchmark the processing capabilities of OpenCV and OpenVINO inference engines.},
  doi = {10.1109/MS.2020.3030199},
  date = {2021-01-01},
}
