Title: MultiFuse: Efficient Cross Layer Fusion for DNN Accelerators with Multi-level Memory Hierarchy
Authors: Chang, Chia-Wei; Liou, Jing-Jia; Huang, Chih-Tsun; Hsu, Wei-Chung; Lu, Juin-Ming
Type: conference paper
Date issued: 2023
ISBN: 979-8-3503-4291-8 · ISSN: 1063-6404
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/639717
DOI: 10.1109/ICCD58817.2023.00097
Scopus: 2-s2.0-85182336049 (https://api.elsevier.com/content/abstract/scopus_id/85182336049)
Keywords: DNN | DNN Compilers | Hardware Accelerators | Node Fusion

Abstract: To facilitate the deployment of diverse deep learning models while maintaining scalability, modern DNN accelerators frequently employ reconfigurable structures such as a Network-on-Chip (NoC) and a multi-level on-chip memory hierarchy. To achieve high energy efficiency, it is imperative to store intermediate DNN-layer results within the on-chip memory hierarchy, thereby reducing data transfers to and from off-chip DRAM. Two well-established optimization techniques, node fusion and loop tiling, are commonly used to retain temporary results in on-chip buffers and thereby minimize off-chip DRAM accesses. In this paper, we introduce MultiFuse, an infrastructure designed to automatically explore fusion of multiple DNN layer nodes, enabling optimal utilization of the on-chip multi-level memory hierarchy. Experimental results demonstrate the effectiveness of our retargetable infrastructure, which outperforms Ansor's algorithm: our exploration algorithm achieves a 70% reduction in Energy-Delay Product (EDP) and a 67x speedup in search time when executing the data-intensive MobileNet model on a single DNN accelerator.
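
The core idea summarized in the abstract, fusing adjacent layers and tiling their loops so that intermediate results stay in on-chip buffers instead of round-tripping through DRAM, can be illustrated with a small sketch. The Python/NumPy code below is a hypothetical, simplified illustration of that general fusion-plus-tiling idea, not the MultiFuse infrastructure or its exploration algorithm; the tile size and the toy element-wise "layers" are assumptions made only for this example.

```python
# Illustrative sketch (not the MultiFuse implementation): fusing two
# element-wise "layers" over tiles so the intermediate tensor is never
# materialized in full. On a real accelerator, the per-tile temporary
# would live in on-chip SRAM rather than being spilled to DRAM.
import numpy as np

TILE = 64  # hypothetical on-chip buffer capacity, in elements


def relu(x):
    return np.maximum(x, 0.0)


def unfused(x, w1, w2):
    # Layer-by-layer execution: the full intermediate 'y' is materialized,
    # which would spill off chip if it exceeds the on-chip buffers.
    y = relu(x * w1)
    return relu(y * w2)


def fused_tiled(x, w1, w2):
    # Fused execution: each tile of the intermediate is produced and
    # consumed immediately, so only TILE elements are live at a time.
    out = np.empty_like(x)
    for start in range(0, x.size, TILE):
        end = min(start + TILE, x.size)
        tile = relu(x[start:end] * w1[start:end])    # intermediate tile only
        out[start:end] = relu(tile * w2[start:end])  # consumed right away
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, w1, w2 = (rng.standard_normal(1000) for _ in range(3))
    assert np.allclose(unfused(x, w1, w2), fused_tiled(x, w1, w2))
    print("fused and unfused results match")
```

For element-wise layers the two schedules are numerically identical; the difference is only in how much intermediate data is live at once, which is the quantity a fusion/tiling explorer such as the one described in the abstract tries to keep within the on-chip memory hierarchy.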