Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf
Authors: Jon Goenetxea Imaz, Sergio Sanchez Carballido, Ignacio Arganda-Carreras
Date: 01.01.2021
IEEE Software
Abstract
We provide a novel decomposition methodology from the current MLPerf benchmark to the serverless function execution model. We have tested our approach in Amazon Lambda to benchmark the processing capabilities of OpenCV and OpenVINO inference engines.
BIB_text
@article{goenetxea2021benchmarking,
  title = {Benchmarking Deep Neural Network Inference Performance on Serverless Environments With MLPerf},
  author = {Goenetxea Imaz, Jon and Sanchez Carballido, Sergio and Arganda-Carreras, Ignacio},
  journal = {IEEE Software},
  volume = {38},
  pages = {81--87},
  year = {2021},
  date = {2021-01-01},
  doi = {10.1109/MS.2020.3030199},
  keywords = {Benchmark testing, FaaS, Task analysis, Engines, Computer architecture, Throughput, Computational modeling},
  abstract = {We provide a novel decomposition methodology from the current MLPerf benchmark to the serverless function execution model. We have tested our approach in Amazon Lambda to benchmark the processing capabilities of OpenCV and OpenVINO inference engines.}
}