Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf

Date: 01.01.2021

IEEE Software


Abstract

We present a novel methodology for decomposing the current MLPerf benchmark onto the serverless function execution model. We tested our approach on Amazon Lambda to benchmark the processing capabilities of the OpenCV and OpenVINO inference engines.
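The abstract describes mapping MLPerf's query-based measurement onto serverless function invocations. As a rough illustration of the kind of measurement loop involved (not the paper's actual code: `fake_inference`, the handler signature, and the query counts below are all hypothetical stand-ins for a real OpenCV/OpenVINO call), an MLPerf-style single-stream loop might be sketched as:

```python
import time
import statistics

def fake_inference(payload):
    # Hypothetical stand-in for a real inference engine call
    # (e.g. OpenCV DNN or OpenVINO); sleeps ~1 ms to simulate work.
    time.sleep(0.001)
    return {"label": "example"}

def single_stream_benchmark(handler, n_queries=50):
    """MLPerf-style single-stream loop: issue queries back to back,
    record per-query latency, and report tail latency and throughput."""
    latencies = []
    for _ in range(n_queries):
        t0 = time.perf_counter()
        handler({"image": b""})  # one query per invocation
        latencies.append(time.perf_counter() - t0)
    return {
        # statistics.quantiles(n=10) yields deciles; the last is ~p90
        "p90_latency_s": statistics.quantiles(latencies, n=10)[-1],
        "throughput_qps": n_queries / sum(latencies),
    }
```

In a serverless setting, each `handler` call would correspond to one function invocation, so per-query latency also captures platform overheads such as cold starts.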

BibTeX

@Article{MS2021MLPerfServerless,
title = {Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf},
journal = {IEEE Software},
volume = {38},
pages = {81-87},
keywords = {Benchmark testing, FaaS, Task analysis, Engines, Computer architecture, Throughput, Computational modeling},
abstract = {We present a novel methodology for decomposing the current MLPerf benchmark onto the serverless function execution model. We tested our approach on Amazon Lambda to benchmark the processing capabilities of the OpenCV and OpenVINO inference engines.},
doi = {10.1109/MS.2020.3030199},
date = {2021-01-01},
}
Vicomtech

Gipuzkoako Zientzia eta Teknologia Parkea,
Mikeletegi Pasealekua 57,
20009 Donostia / San Sebastián (Spain)

+(34) 943 309 230

Ensanche eraikina,
Zabalgune Plaza 11,
48009 Bilbao (Spain)
