Advisor: 貝蘇章 (Pei, Soo-Chang)
Institution: 臺灣大學:電信工程學研究所 (National Taiwan University, Graduate Institute of Communication Engineering)
Author: 郭姿玲 (Kuo, Tzu-Ling)
Record dates: 2010-07-01 / 2018-07-05
Year: 2008
Identifier: U0001-2207200817531900
URI: http://ntur.lib.ntu.edu.tw//handle/246246/188208

Abstract (translated from Chinese):
The human visual attention system has become a popular research topic in recent years. It uses mathematical algorithms to compute specific information embedded in images or video; this information broadly refers to what the neuronal structures and behaviors of the early primate visual system receive and respond to. The theory can be widely applied to robot motion design and to artificial intelligence. Many models have been proposed, and many algorithms built on visual attention models have been applied to tasks such as object segmentation in images, object detection in video, and object recognition.

This thesis uses computational models that simulate human vision to detect objects. A visual attention model extracts intentional features from an image or video and locates salient points or salient regions, which broadly denote the places a human viewer intuitively attends to when first looking at the image or video. Many algorithms already exist for computing such salient points or regions. Based on the concept of saliency, we implement two visual attention models, represented as a saliency map or as a saliency volume. We then combine the visual attention model with statistical concepts to design a method that detects moving objects in DCT-transformed video data and represents them with a saliency map.

Abstract (English):
The human visual attention system has been a popular topic in recent years. It concerns the computational implementation of intentional attention in human vision, and it is widely applied in robotics and artificial intelligence. Implementations of object segmentation, object recognition, and object detection based on attention models have been proposed with increasing frequency.

In this thesis, we present two methods and implementations that simulate the human visual attention model. The output is called saliency: the places that human eyes emphasize most when first looking at an image. We present the algorithms widely used as the basis for building attention models for images, as well as a new salient-model representation for videos. Detecting moving objects in videos has been a frequently discussed issue in recent years, and a real-time implementation is a developing and popular topic.
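The saliency-map idea summarized in the abstract (feature pyramids plus center-surround differences, in the style of classical bottom-up attention models) can be illustrated with a minimal sketch. This is not the thesis's own algorithm; the pyramid here uses simple 2x2 mean pooling as a stand-in for Gaussian smoothing, and the scale choices are illustrative assumptions.

```python
import numpy as np

def pyramid(img, levels):
    """Crude dyadic pyramid: 2x2 mean pooling stands in for Gaussian smoothing."""
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        p = pyr[-1][:h - h % 2, :w - w % 2]
        pyr.append(0.25 * (p[0::2, 0::2] + p[1::2, 0::2] +
                           p[0::2, 1::2] + p[1::2, 1::2]))
    return pyr

def upsample_to(img, shape):
    """Nearest-neighbour resize back to a finer level's shape."""
    ry = np.linspace(0, img.shape[0] - 1, shape[0]).astype(int)
    rx = np.linspace(0, img.shape[1] - 1, shape[1]).astype(int)
    return img[np.ix_(ry, rx)]

def saliency_map(intensity, center_levels=(1, 2), deltas=(2, 3)):
    """Sum of |center - surround| differences across pyramid scales."""
    pyr = pyramid(intensity, levels=max(center_levels) + max(deltas) + 1)
    out = np.zeros_like(pyr[center_levels[0]])
    for c in center_levels:
        for d in deltas:
            diff = np.abs(pyr[c] - upsample_to(pyr[c + d], pyr[c].shape))
            out += upsample_to(diff, out.shape)
    span = out.max() - out.min()           # normalize to [0, 1]
    return (out - out.min()) / span if span > 0 else out

gen = np.random.default_rng(0)
img = gen.random((64, 64))
img[24:40, 24:40] += 2.0                   # a bright patch should pop out
sal = saliency_map(img)
print(sal.shape)                           # one pyramid level below the input
```

A full attention model would repeat this over intensity, color-opponent, and orientation channels and combine the resulting conspicuity maps; this sketch shows only the intensity channel.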
Also, this thesis presents a concept for real-time moving object detection in the time domain, and a similar concept applied to DCT-domain data in videos.

CONTENTS
Oral Examination Committee Certification
Acknowledgements i
Chinese Abstract iii
ABSTRACT v
CONTENTS vii
LIST OF FIGURES xi
LIST OF TABLES xiii
Chapter 1 Introduction 1
Chapter 2 Visual Attention Model 3
2.1 Introduction 3
2.2 Bottom-up Attention Model 4
2.3 Top-down Attention Model 6
Chapter 3 Bottom-up Visual Attention Model for Object-of-Interest Image 7
3.1 Introduction 7
3.2 Saliency Map Generation 7
3.2.1 Color Model Transformation and Down-sample 9
3.2.2 Feature Map Generation 11
3.2.3 Saliency Map Generation 12
3.3 Experiment 14
3.4 Conclusion 19
Chapter 4 Bottom-up Spatiotemporal Visual Attention Model for Video Analysis 21
4.1 Introduction 21
4.2 Video Pre-processing 22
4.2.1 Shot Detection 22
4.2.2 Video Volume Generation 23
4.2.3 Simplification/Filtering 24
4.3 Feature Volume Generation 26
4.3.1 Gaussian Pyramid 26
4.3.2 Intensity and Color Volume Generation 27
4.3.3 2D and 3D Orientation Volume 28
4.4 Saliency Volume Generation 31
4.4.1 Center-surround Difference 31
4.4.2 Normalization 32
4.4.3 Conspicuity Volume Generation 33
4.4.4 Saliency Volume Generation 34
4.5 Experiment Result 34
4.6 Conclusion 36
Chapter 5 Moving Object Detection 37
5.1 Introduction 37
5.2 Static Model in Time Domain 38
5.2.1 Color Coordinate Transformation 41
5.2.2 Memory Confirmation 41
5.2.3 Parameter Calculation 41
5.2.4 Detecting Filter 42
5.2.5 Another Method for the Detection 43
5.3 Static Model in DCT Domain 45
5.3.1 The DCT Transform 46
5.3.2 The Revised Algorithm 49
5.4 Experiment Result 52
5.4.1 In the Temporal Domain 53
5.4.2 In DCT Domain 57
5.5 Conclusion 59
Chapter 6 Conclusion and Future Work 61
6.1 Conclusion 61
6.2 Future Work 61
REFERENCE 63

Format: 3386007 bytes, application/pdf
Language: en-US
Keywords: 視覺注意力 (visual attention), 顯著點 (saliency point), 顯著圖 (saliency map), 物體偵測 (object detection)
Title: 基於視覺注意力模型之物體偵測演算法 (Object Detection Methods Based on the Visual Attention Model)
Type: thesis
Full text: http://ntur.lib.ntu.edu.tw/bitstream/246246/188208/1/ntu-97-R95942088-1.pdf
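The DCT-domain detection summarized in the abstract rests on the fact that the DC coefficient of each 8x8 DCT block is proportional to the block mean, so a statistical background model can be maintained per block without decoding the video. The sketch below is a generic illustration of that idea, not the thesis's revised algorithm: it keeps a running mean and variance of each block's DC value and flags blocks that deviate by more than k standard deviations. The class name, `alpha`, and `k` are illustrative assumptions.

```python
import numpy as np

def block_dc(frame, bs=8):
    """Block-mean map: proportional to the DC plane of 8x8 DCT-coded video."""
    h, w = frame.shape
    f = frame[:h - h % bs, :w - w % bs]
    return f.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

class DCBackgroundModel:
    """Running mean/variance per DC block; outlier blocks are marked moving."""
    def __init__(self, alpha=0.05, k=3.0):
        self.alpha, self.k = alpha, k
        self.mean = None
        self.var = None

    def update(self, dc):
        if self.mean is None:                     # first frame initializes the model
            self.mean = dc.copy()
            self.var = np.ones_like(dc)
            return np.zeros(dc.shape, dtype=bool)
        moving = np.abs(dc - self.mean) > self.k * np.sqrt(self.var)
        a = np.where(moving, 0.0, self.alpha)     # adapt only where no motion seen
        self.mean = (1 - a) * self.mean + a * dc
        self.var = (1 - a) * self.var + a * (dc - self.mean) ** 2
        return moving

# synthetic sequence: static background, then a bright square appears
model = DCBackgroundModel()
bg = np.full((64, 64), 100.0)
for _ in range(10):                               # let the model settle
    mask = model.update(block_dc(bg))
frame = bg.copy()
frame[16:32, 16:32] = 200.0                       # covers DC blocks [2:4, 2:4]
mask = model.update(block_dc(frame))
print(mask[2:4, 2:4].all(), mask.sum())           # → True 4
```

In a real bitstream the DC values would come straight from the entropy-decoded coefficients, which is what makes a DCT-domain model attractive for real-time use.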