Toward distributed, global, deep learning using IoT devices
Date
2021-07-20
Authors
Sudharsan, Bharath
Patel, Pankesh
Breslin, John
Ali, Muhammad Intizar
Mitra, Karan
Dustdar, Schahram
Rana, Omer
Jayaraman, Prem Prakash
Ranjan, Rajiv
Recommended Citation
Sudharsan, Bharath, Patel, Pankesh, Breslin, John, Ali, Muhammad Intizar, Mitra, Karan, Dustdar, Schahram, Rana, Omer, Jayaraman, Prem Prakash, Ranjan, Rajiv. (2021). Toward Distributed, Global, Deep Learning Using IoT Devices. IEEE Internet Computing, 25(3), 6-12. doi:10.1109/MIC.2021.3053711
Abstract
Deep learning (DL) using large-scale, high-quality IoT datasets can be computationally expensive. Utilizing such datasets to produce a problem-solving model within a reasonable time frame requires a scalable distributed training platform/system. We present a novel approach that trains a single DL model on the hardware of thousands of mid-sized IoT devices across the world, rather than on a GPU cluster available within a data center. We analyze the scalability and model convergence of the subsequently generated model and identify three bottlenecks: high computational load, time-consuming dataset-loading I/O, and the slow exchange of model gradients. To highlight research challenges for globally distributed DL training and classification, we consider a case study from the video data processing domain. We also outline the need for a two-step deep compression method that increases the speed and scalability of the DL training process. Our initial experimental validation shows that the proposed method improves the tolerance of the distributed training process to varying internet bandwidth, latency, and Quality of Service metrics.
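To make the gradient-exchange bottleneck concrete, the sketch below illustrates one plausible reading of a two-step compression of gradients before they are sent over the internet: top-k sparsification followed by 8-bit quantization. This is not the authors' published method; the function names, the k_ratio parameter, and the choice of compression steps are illustrative assumptions only.

```python
import numpy as np

def compress_gradient(grad: np.ndarray, k_ratio: float = 0.01):
    """Illustrative two-step gradient compression (assumed, not the paper's method).
    Step 1: top-k sparsification -- keep only the largest-magnitude
    k_ratio fraction of gradient entries (indices + values)."""
    flat = grad.ravel()
    k = max(1, int(flat.size * k_ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k entries

    # Step 2: linear 8-bit quantization of the surviving values.
    vals = flat[idx]
    scale = float(np.max(np.abs(vals))) or 1.0
    q = np.round(vals / scale * 127).astype(np.int8)
    return idx.astype(np.int32), q, np.float32(scale), grad.shape

def decompress_gradient(idx, q, scale, shape):
    """Reconstruct a dense (lossy) gradient from the sparse, quantized payload."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = q.astype(np.float32) / 127 * scale
    return flat.reshape(shape)

# Example: a 1M-parameter float32 gradient (4 MB dense) shrinks to ~50 KB,
# which matters when thousands of IoT devices exchange gradients over
# links with varying bandwidth and latency.
g = np.random.randn(1_000_000).astype(np.float32)
idx, q, scale, shape = compress_gradient(g, k_ratio=0.01)
payload_bytes = idx.nbytes + q.nbytes + 4
print(f"compressed payload: {payload_bytes / 1024:.1f} KB")
g_hat = decompress_gradient(idx, q, scale, shape)
```

Under these assumptions, each device transmits only the indices and quantized values of its most significant gradient entries, trading some reconstruction error for a roughly 80x smaller payload per exchange.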