Title: Recurrent Reinforcement Learning for Predictive Overall Equipment Effectiveness
Authors: Liao, D.-Y.; Tsai, W.-P.; Chen, H.-T.; Ting, Y.-P.; Chen, C.-Y.; Chen, H.-C.; Chang, S.-C. (Shi-Chung Chang)
Keywords: Chemical vapor deposition; Long short-term memory; Quality control; Real time systems; Recurrent neural networks; Stochastic models; Stochastic systems; Chemical vapor depositions (CVD); Overall equipment effectiveness; Predictive; Production time; Real-time data; Recurrent reinforcement learning; Stochastic dynamics; Tool condition; Reinforcement learning
Date Published: 2018
Source: e-Manufacturing and Design Collaboration Symposium 2018, eMDC 2018 - Proceedings
Abstract: With the huge and growing amount of real-time data collected in modern manufacturing systems, the conventional indices defined to evaluate productivity, quality, and performance become less effective. Compared to the conventional Overall Equipment Effectiveness (OEE), the predictive OEE (POEE) evaluates and monitors the forthcoming effectiveness of a single tool. Its predictive effectiveness is based on the extra production time caused by anomalous tool conditions and undesired product quality. This research develops a recurrent reinforcement learning model to predict the predictable elements in calculating the POEE. Our model combines supervised Long Short-Term Memory (LSTM) and reinforced Deep Q-Network (DQN) techniques to predict stochastic dynamics in production and quality. A Chemical Vapor Deposition (CVD) tool is taken as an exemplary case to illustrate the calculation of POEE. © 2018 Taiwan Semiconductor Industry Association.
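The OEE/POEE relationship described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the abstract does not state the POEE formula, so `predicted_extra_time` and `predicted_scrap` are hypothetical stand-ins for quantities the paper's LSTM/DQN model would predict (extra production time from anomalous tool conditions and quality loss), folded into the standard OEE factors.

```python
# Hedged sketch: conventional OEE and an illustrative "predictive" variant.
# The exact POEE definition is not given in the abstract; predicted extra
# production time and predicted scrap are assumed model outputs here.

def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Conventional OEE = Availability x Performance x Quality."""
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

def predictive_oee(planned_time, run_time, ideal_cycle_time,
                   total_count, good_count,
                   predicted_extra_time, predicted_scrap):
    """Illustrative POEE: fold predicted extra production time (e.g., from
    anomalous tool conditions) and predicted scrap into the OEE terms."""
    eff_run_time = run_time + predicted_extra_time   # anomaly-induced delay
    availability = run_time / (planned_time + predicted_extra_time)
    performance = (ideal_cycle_time * total_count) / eff_run_time
    quality = max(good_count - predicted_scrap, 0) / total_count
    return availability * performance * quality
```

With zero predicted extra time and zero predicted scrap, the sketch reduces to conventional OEE; any predicted anomaly lowers the forward-looking score.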
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85057445841&partnerID=40&md5=bd4ec8661ab2a212fda686085638287c
URI: https://scholars.lib.ntu.edu.tw/handle/123456789/581143
Appears in Collections: Department of Electrical Engineering
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated.