2019-08-01
2024-05-17
https://scholars.lib.ntu.edu.tw/handle/123456789/687843

Abstract (translated from Chinese): In recent years, breakthroughs in deep neural network models, together with ever-increasing computing power, have renewed industry interest in artificial neural networks, giving rise to large-scale and highly effective applications as well as corresponding academic research topics. Among these, the black-box problem that has troubled neural networks for decades has come under renewed scrutiny. The weighted connections between the hidden layers of a deep neural network pass through linear and nonlinear transformations; even given a well-trained deep neural network that produces stable and accurate predictions at the output layer, the model's reasoning process is usually unknowable, which is why its operation is often called a black box. The inability to relate the input layer to the predicted output greatly reduces the model's credibility and adoption. In many application scenarios, the variables in the input layer carry concrete physical meaning and explanatory power for users, so tracing back to the input layer and pinpointing the important factors is essential. Although deep neural networks have been widely adopted in both industry and academia with remarkable results, their explainability remains limited, so their application is still questioned in certain industries, for example in yield analysis and process debugging in manufacturing. This study first reviews the current development of deep neural network explainability, then uses Gaussian Bayesian network models to decompose trained deep learning models. Through layer-by-layer peeling inference, we develop a sequential inference and analysis framework that can be applied to neural network models of different architectures to uncover the reasoning logic inside the black box, raising industry confidence in adopting deep neural network techniques.

Abstract: With the breakthrough of Deep Neural Nets (DNNs) and the advance of computing power in recent years, artificial neural network models are back on the stage, extensively applied in industry and widely studied in academia. DNNs are known for the complex (linear/nonlinear) transformations across many hidden layers. Therefore, even given a well-trained DNN that makes good classifications, the inference process within the model remains a mystery, which is why DNNs are also known as black boxes. Without any causal inference from the output, i.e., the classified results, back to the input factors, the credibility of a DNN model is incomplete and its potential for implementation is reduced. In many application domains, the factors in the input layer are meaningful and intuitively self-explanatory, so it is critical to be able to trace back to the input layer to identify the key contributors. In industries where model causality is regarded as important as predictability, such as yield analysis and process control in manufacturing, Explainable AI (XAI) would be the key to unlocking the large-scale adoption of DNN models. In this research, related literature is first reviewed to obtain an overview of current XAI development.
Bayesian inference, which serves as a powerful tool for reasoning about variable causalities, is then employed to build a layer-to-layer Gaussian Bayesian Network (GBN). The GBN models the relationship between two adjacent layers of the well-established DNN. Following the propagating structure of the DNN model, the sequential inference logic of the "black box" can be identified, and the confidence in adopting DNN techniques can be greatly enhanced.

Keywords: Explainable AI (XAI); Deep Neural Net; Bayesian Inference; Gaussian Bayesian Network

Title: Explainability Analysis and Credibility Enhancement of Deep Neural Network Models (深度神經網路模型可釋性分析與可信度精進)
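The layer-to-layer idea above can be sketched in miniature. This is not the authors' implementation: the network weights, layer sizes, and probe inputs below are hypothetical stand-ins, and it relies on the fact that fitting linear-Gaussian conditional distributions between two adjacent layers reduces to least-squares regression on their activations. Composing the fitted coefficient matrices layer by layer ("peeling" the network) then yields an approximate linear map from inputs to output whose magnitudes rank the input factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" network: random weights stand in for a real
# model; tanh keeps the hidden layers smoothly nonlinear.
W1 = rng.normal(size=(5, 4))
W2 = rng.normal(size=(4, 3))
W3 = rng.normal(size=(3, 1))

X = rng.normal(size=(1000, 5))   # probe inputs fed through the network
H1 = np.tanh(X @ W1)             # layer-1 activations
H2 = np.tanh(H1 @ W2)            # layer-2 activations
Y = H2 @ W3                      # output layer

def fit_linear_gaussian(parent, child):
    """Least-squares fit child ≈ parent @ B, i.e. the mean part of the
    linear-Gaussian CPDs in a layer-to-layer GBN."""
    B, *_ = np.linalg.lstsq(parent, child, rcond=None)
    return B

# One GBN per pair of adjacent layers.
B1 = fit_linear_gaussian(X, H1)
B2 = fit_linear_gaussian(H1, H2)
B3 = fit_linear_gaussian(H2, Y)

# Peel layer by layer: the composed coefficients approximate the
# input-to-output map, so their magnitudes rank the input factors.
influence = np.abs(B1 @ B2 @ B3).ravel()
ranking = np.argsort(influence)[::-1]
print("input factors ranked by estimated influence:", ranking)
```

A full GBN would also estimate the noise variance of each node and support posterior inference; the sketch keeps only the coefficient fit, which is the part needed for tracing importance back to the input layer.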