https://scholars.lib.ntu.edu.tw/handle/123456789/632187
Title: PUMP: Profiling-free Unified Memory Prefetcher for Large DNN Model Support
Authors: Lin C.-H.; Lin S.-F.; Chen Y.-J.; Jenp E.-Y.; Chia-Lin Yang
Issue Date: 2022
Volume: 2022-January
Pages: 122-127
Source Publication: Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC
Abstract: Modern DNNs are growing deeper and wider to achieve higher accuracy. However, existing deep learning frameworks require the whole DNN model to fit into GPU memory when training on GPUs, which places an unwanted limit on training large models. NVIDIA Unified Memory (UM) can inherently support training DNN models beyond GPU memory capacity, but naively adopting UM incurs a significant performance penalty due to data-transfer delays. In this paper, we propose PUMP, a Profiling-free Unified Memory Prefetcher. PUMP exploits the GPU's asynchronous execution model for prefetching: there is a delay between the time the CPU launches a kernel and the time the kernel executes on the GPU. At kernel launch, PUMP extracts the memory blocks the kernel will access and swaps these blocks into GPU memory ahead of time. Experimental results show that PUMP achieves about a 2x speedup on average over a baseline that naively enables UM. © 2022 IEEE.
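The prefetching idea summarized in the abstract can be illustrated with standard CUDA Unified Memory calls. This is a minimal sketch of the general technique (overlapping `cudaMemPrefetchAsync` with an asynchronous kernel launch), not PUMP's actual implementation; the kernel, buffer names, and sizes are illustrative placeholders.

```cuda
#include <cuda_runtime.h>

// Placeholder for one DNN layer's computation.
__global__ void layer_forward(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed (Unified Memory) allocations may exceed GPU memory capacity;
    // pages migrate on demand unless prefetched.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));

    int dev = 0;
    cudaGetDevice(&dev);
    cudaStream_t s;
    cudaStreamCreate(&s);

    // Kernel launches return immediately on the host. While "layer 1" runs
    // on the GPU, the host can already request migration of the next
    // layer's blocks into GPU memory, hiding the UM page-fault latency.
    layer_forward<<<(n + 255) / 256, 256, 0, s>>>(a, b, n);
    cudaMemPrefetchAsync(c, n * sizeof(float), dev, s);  // prefetch next layer's buffer
    layer_forward<<<(n + 255) / 256, 256, 0, s>>>(b, c, n);

    cudaStreamSynchronize(s);
    cudaStreamDestroy(s);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Issuing the prefetch on the same stream as the kernels keeps the migration ordered with the computation; PUMP's contribution is deciding *which* blocks to prefetch without profiling, by inspecting kernel arguments at launch time.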
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85126087424&doi=10.1109%2fASP-DAC52403.2022.9712507&partnerID=40&md5=49370ca6ef04b606e489861a73121eee https://scholars.lib.ntu.edu.tw/handle/123456789/632187
DOI: 10.1109/ASP-DAC52403.2022.9712507
SDG/Keywords: Data transfer; Deep learning; Graphics processing unit; Program processors; Pumps; Asynchronous executions; High-accuracy; Large models; Learning frameworks; Memory blocks; Memory capacity; Performance penalties; Prefetches; Memory architecture
Appears in Collections: Department of Computer Science and Information Engineering
Except where their copyright terms state otherwise, documents in this IR system are protected by copyright, with all rights reserved.