TinyML benchmark: Executing fully connected neural networks on commodity microcontrollers
Date
2021-06-20
Author
Sudharsan, Bharath
Salerno, Simone
Nguyen, Duc-Duy
Yahya, Muhammad
Wahid, Abdul
Yadav, Piyush
Breslin, John G.
Recommended Citation
Sudharsan, Bharath, Salerno, Simone, Nguyen, Duc-Duy, Yahya, Muhammad, Wahid, Abdul, Yadav, Piyush, & Breslin, John G. (2021). TinyML benchmark: Executing fully connected neural networks on commodity microcontrollers. Paper presented at the IEEE 7th World Forum on Internet of Things (WF-IoT 2021), New Orleans, Louisiana, USA, 20-24 June. DOI: 10.13025/rmkq-1966
Abstract
Recent advancements in the field of ultra-low-power machine learning (TinyML) promise to unlock an entirely new class of edge applications. However, continued progress is restrained by the lack of benchmarking of Machine Learning (ML) models on TinyML hardware, which is fundamental to this field reaching maturity. In this paper, we designed 3 types of fully connected Neural Networks (NNs), trained each NN using 10 datasets (producing 30 NNs), and present the benchmark by reporting the onboard model performance on 7 popular MCU boards (similar boards are used to design TinyML hardware). We open-sourced the complete benchmark results and made them freely available online to enable TinyML community researchers and developers to systematically compare, evaluate, and improve various aspects.
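
For context, the following is a minimal sketch of the kind of fully connected NN benchmarked in the paper, built with TensorFlow/Keras and converted to a TensorFlow Lite flatbuffer for MCU deployment. The layer sizes, the toy dataset, and the choice of TF Lite as the conversion path are illustrative assumptions, not the authors' exact setup.

# Illustrative sketch only; layer sizes, data, and TF Lite usage are assumptions.
import numpy as np
import tensorflow as tf

# Hypothetical toy data standing in for one of the 10 benchmark datasets:
# 1000 samples, 16 features, 4 classes.
x_train = np.random.rand(1000, 16).astype(np.float32)
y_train = np.random.randint(0, 4, size=(1000,))

# A small fully connected (dense-only) network sized for a commodity MCU.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

# Convert to a .tflite flatbuffer that an on-device TinyML runtime can execute.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training optimization
tflite_model = converter.convert()
with open("fc_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Model size: {len(tflite_model)} bytes")

The resulting flatbuffer size is one of the quantities that matters most when targeting MCU boards, since flash and SRAM budgets on such devices are typically in the hundreds of kilobytes.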