A collaborative CPU-GPU approach for deep learning on mobile devices
Keywords: deep learning; energy efficient; GPGPU; heterogeneous system; mobile computing; OpenCL
As mobile devices become more prevalent, users raise their expectations for the personalization of mobile services. The data collected by a mobile device's sensors provide an opportunity to gain insight into the user's profile. Recently, deep learning has gained momentum and has become the method of choice for solving machine learning problems. Yet training a deep neural network on a mobile device is often, mistakenly, regarded as impractical; several deep learning frameworks, for instance, provide only a CPU-based implementation even for prediction tasks on mobile devices. In contrast to servers, a mobile computing environment imposes many domain-specific constraints that call for rethinking the general computing approach used in deep learning framework implementations. In this paper, we propose a deep learning framework designed specifically for mobile device platforms. Our approach relies on collaboration between the multicore CPU and the integrated GPU to accelerate deep learning computation on mobile devices, and it exploits the shared memory architecture of these devices to enable CPU-GPU collaboration without any data copying. We analyze our approach with regard to three factors: the performance/portability trade-off, power efficiency, and memory management. © 2019 John Wiley & Sons, Ltd.
Indexed keywords: Deep neural networks; Economic and social effects; Energy efficiency; Graphics processing unit; Learning systems; Memory architecture; Mobile computing; Program processors; Energy efficient; GPGPU; Heterogeneous systems; Learning frameworks; Machine learning problem; Mobile computing environment; OpenCL; Shared memory architecture; Deep learning