Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf

Authors: Unai Elordi Hidalgo, Luis Unzueta Irurtia, Jon Goenetxea Imaz, Sergio Sanchez Carballido, Oihana Otaegui Madurga, Ignacio Arganda-Carreras

Date: 01.01.2021

IEEE Software


Abstract

We provide a novel methodology for decomposing the current MLPerf benchmark into the serverless function execution model. We have tested our approach on AWS Lambda to benchmark the processing capabilities of the OpenCV and OpenVINO inference engines.
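The core idea of the abstract (splitting a benchmark's query stream across independent serverless function invocations and measuring per-query latency inside each one) can be sketched as a minimal AWS Lambda-style handler. This is an illustrative sketch only: the placeholder workload, payload shape, and field names are assumptions, and the paper's actual MLPerf decomposition and OpenCV/OpenVINO inference calls are not shown.

```python
import json
import time


def handler(event, context=None):
    """Lambda-style handler sketch: run a placeholder inference for
    each query in the event payload and report per-query latencies,
    as a benchmark driver would collect them per invocation."""
    latencies_ms = []
    for _query in event["queries"]:
        start = time.perf_counter()
        # Placeholder for a real inference call, e.g. an OpenCV DNN
        # or OpenVINO forward pass on the decoded input image.
        _ = sum(i * i for i in range(10_000))  # dummy CPU workload
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    return {
        "statusCode": 200,
        "body": json.dumps({
            "num_queries": len(event["queries"]),
            "latencies_ms": latencies_ms,
        }),
    }


if __name__ == "__main__":
    # Simulate a driver splitting 8 benchmark queries into two
    # serverless invocations of 4 queries each.
    for batch in ([f"img{i}" for i in range(4)],
                  [f"img{i}" for i in range(4, 8)]):
        resp = handler({"queries": batch})
        body = json.loads(resp["body"])
        print(body["num_queries"], "queries timed")
```

In a real deployment the driver would aggregate the returned latencies across invocations to derive the benchmark's throughput and tail-latency metrics, which is where cold starts and per-function resource limits show up.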

BibTeX

@article{elordi2021benchmarking,
  title    = {Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf},
  journal  = {IEEE Software},
  volume   = {38},
  pages    = {81-87},
  keywords = {Benchmark testing, FaaS, Task analysis, Engines, Computer architecture, Throughput, Computational modeling},
  abstract = {We provide a novel decomposition methodology from the current MLPerf benchmark to the serverless function execution model. We have tested our approach in Amazon Lambda to benchmark the processing capabilities of OpenCV and OpenVINO inference engines.},
  doi      = {10.1109/MS.2020.3030199},
  date     = {2021-01-01},
}