Multi-objective autotuning of MobileNets across the full software/hardware stack / Lokhmotov, A.; Vella, F.; Chunosov, N.; Fursin, G. - Electronic. - (2018), p. 1. (Paper presented at the 1st ACM ReQuEST Workshop/Tournament on Reproducible Software/Hardware Co-Design of Pareto-Efficient Deep Learning, ReQuEST 2018, held in Williamsburg, VA, USA, March 24-28, 2018) [10.1145/3229762.3229767].
Multi-objective autotuning of MobileNets across the full software/hardware stack
Vella, F.
2018-01-01
Abstract
We present a customizable Collective Knowledge workflow to study the execution time vs. accuracy trade-offs for the MobileNets CNN family. We use this workflow to evaluate MobileNets on Arm Cortex CPUs using TensorFlow and on Arm Mali GPUs using several versions of the Arm Compute Library. Our optimizations for the Arm Bifrost GPU architecture reduce the execution time by 2-3 times while remaining on the Pareto-optimal frontier. We also highlight the challenge of maintaining accuracy when deploying CNN models across diverse platforms. We make all the workflow components (models, programs, scripts, etc.) publicly available to encourage further exploration by the community.
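The abstract frames the study as a multi-objective exploration whose best configurations lie on a Pareto-optimal frontier in the (execution time, accuracy) space. The sketch below is not part of the authors' Collective Knowledge workflow; it only illustrates the underlying selection step: given hypothetical (execution time, top-1 accuracy) measurements for several MobileNets variants, keep the points that no other configuration beats in both objectives. All model names, timings, and accuracy values in the example are illustrative placeholders.

# Minimal sketch (not the authors' actual CK workflow): filter a set of
# hypothetical (execution_time_ms, top1_accuracy) measurements down to the
# Pareto-optimal points, i.e. those for which no other point is at least as
# fast AND at least as accurate, with strict improvement in one objective.

from typing import List, Tuple

# Hypothetical example data: (label, execution_time_ms, top1_accuracy).
measurements: List[Tuple[str, float, float]] = [
    ("mobilenet-1.0-224", 120.0, 0.709),
    ("mobilenet-0.75-192", 80.0, 0.672),
    ("mobilenet-0.5-160", 45.0, 0.591),
    ("mobilenet-0.25-128", 20.0, 0.415),
    ("mobilenet-0.5-224", 90.0, 0.633),  # dominated: slower and less accurate than 0.75-192
]

def pareto_frontier(points: List[Tuple[str, float, float]]) -> List[Tuple[str, float, float]]:
    """Return the points not dominated in (lower time, higher accuracy)."""
    frontier = []
    for name, time_ms, acc in points:
        dominated = any(
            (t2 <= time_ms and a2 >= acc) and (t2 < time_ms or a2 > acc)
            for _, t2, a2 in points
        )
        if not dominated:
            frontier.append((name, time_ms, acc))
    # Sort by execution time so the trade-off curve reads from fastest to most accurate.
    return sorted(frontier, key=lambda p: p[1])

if __name__ == "__main__":
    for name, time_ms, acc in pareto_frontier(measurements):
        print(f"{name}: {time_ms:.1f} ms, top-1 {acc:.3f}")

In this toy run the dominated "mobilenet-0.5-224" point is discarded, while the remaining configurations form the frontier along which one trades execution time for accuracy, which is the kind of trade-off the paper's autotuning workflow explores across the software/hardware stack.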