Authors: Chen, Yung Hsiu; Yang, Wu Te; Chen, Bo Hsun; Lin, Pei-Chun
Date available: 2023-07-14
Date issued: 2023-03-01
ISSN: 2075-1702
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/633613
Abstract: This article reports the construction of a hybrid dynamic model of an articulated manipulator and the planning and optimization of its trajectories using deep reinforcement learning (RL) on that model. The hybrid model was composed of a physics-based reduced-order dynamic model, linear friction and damping terms, and a deep neural network that compensated for the nonlinear characteristics of the manipulator. The hybrid model then served as a digital twin of the manipulator for trajectory planning, in which RL optimized energy efficiency and operation speed while taking obstacle avoidance into consideration. The proposed strategy was simulated and experimentally validated: energy consumption along the planned paths was reduced and operation speed was increased, allowing the manipulator to achieve more efficient motion.
Keywords: deep reinforcement learning | energy/speed optimization | obstacle avoidance | trajectory planning
SDGs: SDG7
Title: Manipulator Trajectory Optimization Using Reinforcement Learning on a Reduced-Order Dynamic Model with Deep Neural Network Compensation
Type: journal article
DOI: 10.3390/machines11030350
Scopus ID: 2-s2.0-85152073151
Scopus URL: https://api.elsevier.com/content/abstract/scopus_id/85152073151
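
Note: The following is a minimal illustrative sketch, not the authors' implementation, of the hybrid-model idea described in the abstract: predicted joint torque is the sum of a physics-based reduced-order term, linear friction/damping terms, and a small neural network that compensates residual nonlinear dynamics. All class, parameter, and dimension choices here are hypothetical assumptions for illustration.

import torch
import torch.nn as nn

class HybridManipulatorModel(nn.Module):
    def __init__(self, n_joints: int, hidden: int = 64):
        super().__init__()
        # Linear viscous friction/damping coefficients, one per joint (learned).
        self.damping = nn.Parameter(torch.zeros(n_joints))
        # Deep network residual: maps (q, qdot, qddot) to a compensation torque.
        self.residual = nn.Sequential(
            nn.Linear(3 * n_joints, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_joints),
        )

    def rigid_body_torque(self, q, qdot, qddot):
        # Placeholder for the physics-based reduced-order model,
        # tau = M(q) qddot + C(q, qdot) qdot + g(q); a crude stand-in here
        # (identity inertia, no Coriolis or gravity terms).
        return qddot

    def forward(self, q, qdot, qddot):
        tau_physics = self.rigid_body_torque(q, qdot, qddot)
        tau_friction = self.damping * qdot
        tau_residual = self.residual(torch.cat([q, qdot, qddot], dim=-1))
        return tau_physics + tau_friction + tau_residual

# Usage sketch: after fitting the residual network to measured torques, the
# hybrid model could act as a digital twin inside an RL loop that scores
# candidate trajectories by energy use and duration, subject to obstacle
# avoidance constraints.
model = HybridManipulatorModel(n_joints=6)
q, qdot, qddot = torch.randn(32, 6), torch.randn(32, 6), torch.randn(32, 6)
tau_pred = model(q, qdot, qddot)  # shape (32, 6)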