|PUMP: Profiling-free Unified Memory Prefetcher for Large DNN Model Support
|Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC
Modern DNNs are growing deeper and wider to achieve higher accuracy. However, existing deep learning frameworks require the whole DNN model to fit into GPU memory when training on GPUs, which places an unwanted limit on training large models. NVIDIA Unified Memory (UM) can inherently support training DNN models beyond GPU memory capacity, but naively adopting UM incurs a significant performance penalty due to data-transfer delays. In this paper, we propose PUMP, a Profiling-free Unified Memory Prefetcher. PUMP exploits GPU asynchronous execution for prefetching: there is a delay between the time the CPU launches a kernel and the time the kernel executes on the GPU. At launch time, PUMP extracts the memory blocks the kernel will access and swaps these blocks into GPU memory before the kernel runs. Experimental results show that PUMP achieves about a 2x speedup on average compared to a baseline that naively enables UM. © 2022 IEEE.
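The mechanism the abstract describes can be sketched as a small simulation: at kernel-launch time, the blocks a kernel will touch are identified and an asynchronous transfer moves them into a simulated GPU-resident set during the launch-to-execution delay. This is a conceptual illustration with hypothetical names (`PrefetchSimulator`, `launch_kernel`), not PUMP's actual implementation; in real CUDA code the prefetch step would correspond to calling `cudaMemPrefetchAsync` on `cudaMallocManaged` memory before the kernel executes.

```python
import threading
import time

class PrefetchSimulator:
    """Hypothetical sketch of launch-time prefetching over Unified Memory."""

    def __init__(self):
        self.gpu_resident = set()   # blocks currently resident in GPU memory
        self.lock = threading.Lock()

    def prefetch_async(self, blocks):
        # Mimics an asynchronous host-to-device transfer started at launch time.
        def worker():
            for b in blocks:
                time.sleep(0.001)   # stand-in for interconnect transfer latency
                with self.lock:
                    self.gpu_resident.add(b)
        t = threading.Thread(target=worker)
        t.start()
        return t

    def launch_kernel(self, accessed_blocks, launch_delay=0.05):
        # At launch, extract the blocks the kernel will access and prefetch them,
        # overlapping the transfer with the launch-to-execution delay.
        t = self.prefetch_async(accessed_blocks)
        time.sleep(launch_delay)    # delay before the kernel actually executes
        t.join()                    # ensure transfers finished (they overlap the delay)
        with self.lock:
            # Kernel "executes": all accessed blocks are already resident,
            # so no on-demand page faults are needed.
            return all(b in self.gpu_resident for b in accessed_blocks)

sim = PrefetchSimulator()
assert sim.launch_kernel({"weights_0", "activations_0"})
```

The key point the sketch captures is the overlap: the transfer proceeds concurrently with the launch-to-execution gap, so by the time the kernel runs its working set is already in GPU memory.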
|Data transfer; Deep learning; Graphics processing unit; Program processors; Pumps; Asynchronous executions; High-accuracy; Large models; Learning frameworks; Memory blocks; Memory capacity; Performance penalties; Prefetches; Memory architecture