Title: Processing Element Architecture Design for Deep Reinforcement Learning with Flexible Block Floating Point Exploiting Signal Statistics
Authors: Su, Juyn-Da; Tsai, Pei-Yun
Date Issued: 2020
Date Accessioned: 2024-09-18
Date Available: 2024-09-18
Conference: APSIPA Annual Summit and Conference 2020, Virtual, Auckland, 7 December 2020 through 10 December 2020
Scopus Record: https://www.scopus.com/record/display.uri?eid=2-s2.0-85100912188&origin=resultslist
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/721459
Type: conference paper
Scopus ID: 2-s2.0-85100912188
Keywords: architecture design; block floating-point; deep Q network; reinforcement learning

Abstract: Deep reinforcement learning (DRL) gives an agent an evolving learning capability in unknown environments and thus has the potential to surpass human expertise. This paper presents a hardware architecture for DRL that supports on-line Q-learning and on-line training. Two processing element (PE) arrays handle the evaluation network and the target network, respectively. By configuring the PEs in two operating modes, all required forward and backward computations can be accomplished, and the number of processing cycles can be derived. Because of the precision required for on-line Q-learning and training, we propose a flexible block floating-point (FBFP) format to reduce the overhead of floating-point adders. FBFP exploits the different signal statistics that arise during the learning process. Furthermore, the block exponents of the gradients are adjusted according to the variation of the temporal difference (TD) error to preserve resolution. Simulation results show that the FBFP multiplier-and-accumulator (MAC) reduces complexity by 15.8% compared to a floating-point MAC while maintaining good learning performance. © 2020 APSIPA.
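The paper itself is not included in this record, but the block floating-point idea described in the abstract (a block of values sharing a single exponent, with the gradient block's exponent steered by the TD-error magnitude) can be illustrated with a minimal Python sketch. The function name `to_bfp`, the parameters `mantissa_bits` and `shared_exp`, and the specific rounding and clipping choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def to_bfp(block, mantissa_bits=8, shared_exp=None):
    """Quantize a block of values to block floating-point (illustrative sketch).

    All values in the block share one exponent; each value keeps only a
    fixed-point mantissa of `mantissa_bits` bits. If `shared_exp` is None,
    the exponent is taken from the block's maximum magnitude (the usual BFP
    rule); passing an explicit exponent models the "flexible" adjustment,
    e.g. tying the gradient block's exponent to the TD-error magnitude.
    """
    block = np.asarray(block, dtype=np.float64)
    if shared_exp is None:
        max_mag = np.max(np.abs(block))
        shared_exp = 0 if max_mag == 0 else int(np.ceil(np.log2(max_mag)))
    # One scale factor for the whole block; mantissas are signed integers.
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    mantissas = np.clip(np.round(block / scale),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1)
    return mantissas * scale, shared_exp

# Example (hypothetical values): quantize a gradient block with an exponent
# derived from the TD error, so small gradients keep resolution instead of
# underflowing when the TD error shrinks during training.
td_error = 0.03
grads = np.array([0.012, -0.004, 0.019, 0.0007])
exp_from_td = int(np.ceil(np.log2(abs(td_error))))
q_grads, e = to_bfp(grads, mantissa_bits=8, shared_exp=exp_from_td)
print(q_grads, e)
```

Because every value in a block is scaled by the same power of two, the MAC datapath can accumulate integer mantissas and apply the exponent once per block, which is the source of the complexity reduction relative to a full floating-point MAC reported in the abstract.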