https://scholars.lib.ntu.edu.tw/handle/123456789/581143
Title: Recurrent Reinforcement Learning for Predictive Overall Equipment Effectiveness
Authors: Liao, D.-Y.; Tsai, W.-P.; Chen, H.-T.; Ting, Y.-P.; Chen, C.-Y.; Chen, H.-C.; Shi-Chung Chang
Keywords: Chemical vapor deposition; Long short-term memory; Quality control; Real-time systems; Recurrent neural networks; Stochastic models; Stochastic systems; Chemical vapor depositions (CVD); Overall equipment effectiveness; Predictive; Production time; Real-time data; Recurrent reinforcement learning; Stochastic dynamics; Tool condition; Reinforcement learning
Issue Date: 2018
Source: e-Manufacturing and Design Collaboration Symposium 2018, eMDC 2018 - Proceedings
Abstract: With the increasingly large volumes of real-time data collected in modern manufacturing systems, the conventional indices defined to evaluate productivity, quality, and performance become less effective. Compared to the conventional Overall Equipment Effectiveness (OEE), the predictive OEE (POEE) evaluates and monitors the forthcoming effectiveness of a single tool. Its predictive effectiveness is based on the extra production time caused by anomalous tool conditions and undesired product quality. This research develops a recurrent reinforcement learning model to predict the predictable elements in calculating the POEE. Our model combines supervised Long Short-Term Memory (LSTM) and reinforced Deep Q-Network (DQN) techniques to predict stochastic dynamics in production and quality. A Chemical Vapor Deposition (CVD) tool is taken as an exemplary case to illustrate the calculation of POEE. © 2018 Taiwan Semiconductor Industry Association.
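The abstract contrasts POEE with the conventional OEE index. As context, the standard OEE decomposition (Availability × Performance × Quality) can be sketched as below; the factor definitions follow the widely used OEE convention and are not taken from the paper itself, and the numeric example is purely illustrative.

```python
def oee(planned_time, run_time, ideal_cycle_time, total_count, good_count):
    """Conventional Overall Equipment Effectiveness.

    OEE = Availability x Performance x Quality, where:
      Availability = actual run time / planned production time
      Performance  = (ideal cycle time * total units) / run time
      Quality      = good units / total units
    """
    availability = run_time / planned_time
    performance = (ideal_cycle_time * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

# Illustrative numbers: 480 min planned, 420 min run,
# 0.5 min ideal cycle, 800 units processed, 760 good.
print(round(oee(480, 420, 0.5, 800, 760), 3))  # → 0.792
```

POEE, as described in the abstract, extends this static index by predicting the extra production time a tool will incur, which is what the LSTM/DQN model estimates.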
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85057445841&partnerID=40&md5=bd4ec8661ab2a212fda686085638287c
     https://scholars.lib.ntu.edu.tw/handle/123456789/581143
Appears in Collections: Department of Electrical Engineering (電機工程學系)