Advisor: 陳少傑
Author: Tseng, Po-Jui (曾柏叡)
Institution: National Taiwan University, Graduate Institute of Electronics Engineering
Year: 2014
Handle: http://ntur.lib.ntu.edu.tw//handle/246246/263927

摘要 (translated)

This work improves the quality of depth maps acquired with RGB-D cameras. With the release of new structured-light cameras such as the Microsoft Kinect and Asus Xtion PRO LIVE, depth maps have become convenient and fast to acquire, and they can be applied in many fields, such as virtual reality, image processing, and 3D printing. However, these depth maps are usually accompanied by various kinds of noise: invalid depth values, erroneous depth values, and temporal fluctuation of depth values. Such noise greatly limits the applicability of depth maps, so removing it is essential for improving image quality and downstream results. We therefore propose an effective algorithm that resolves the above noise by improving the exemplar-based image inpainting method [16]. That method was originally used to fill in missing pixel regions of color images; we adapt it to fill noisy regions of depth maps, thereby improving the quality of depth maps captured by RGB-D cameras. For evaluation, we run the algorithm on the Tsukuba Stereo Dataset (University of Tsukuba, Japan) and on depth maps captured with an Asus Xtion PRO LIVE, comparing Peak Signal-to-Noise Ratio (PSNR) and computation time to show that the proposed algorithm substantially improves depth-map quality for future applications.

ABSTRACT

This work presents a refinement procedure for depth maps acquired by RGB-D (Depth) cameras. With the release of many new structured-light RGB-D cameras, such as the Microsoft Kinect or Asus Xtion PRO LIVE, it has become convenient and consumer-accessible to acquire high-resolution depth maps. This 3D depth information can be applied to many fields, such as augmented reality, image processing, and 3D printing. However, RGB-D cameras suffer from problems such as undesired occlusion, inaccurate depth values, and temporal variation. To broaden their application, it is crucial to solve these problems. Thus, the proposed algorithm builds on the exemplar-based inpainting method [16] to cope with the artifacts in RGB-D cameras' depth maps. Exemplar-based inpainting was originally used to repair object-removed images with missing information, and the idea behind it closely resembles the task of padding the occlusions in RGB-D depth data. Therefore, the proposed method enhances and adapts the inpainting method to refine the quality of RGB-D cameras' depth maps.
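The occlusion-padding idea described above can be sketched as follows. This is a simplified illustration, not the thesis implementation: the helper name `exemplar_fill` is hypothetical, and both the Criminisi priority term (confidence × data) and the thesis's 3-step search acceleration are omitted. Each boundary hole pixel is filled by matching its partially-known patch against every fully-known patch (sum of squared differences over known pixels) and copying the centre of the best match.

```python
import numpy as np

def exemplar_fill(depth, hole, patch=3):
    """Greedy, simplified exemplar-based hole filling for a depth map.

    depth : 2D array of depth values (hole pixels may hold garbage).
    hole  : 2D boolean array, True where the depth value is invalid.
    """
    d = depth.astype(float).copy()
    m = hole.astype(bool).copy()
    r = patch // 2
    h, w = d.shape
    while m.any():
        filled_any = False
        for y, x in zip(*np.nonzero(m)):
            # skip hole pixels whose patch would leave the image
            if y < r or x < r or y >= h - r or x >= w - r:
                continue
            tgt = d[y - r:y + r + 1, x - r:x + r + 1]
            known = ~m[y - r:y + r + 1, x - r:x + r + 1]  # known pixels in patch
            if not known.any():
                continue  # nothing to match against yet; revisit next pass
            best_val, best_err = None, np.inf
            # exhaustive source search over fully-known patches
            for sy in range(r, h - r):
                for sx in range(r, w - r):
                    if m[sy - r:sy + r + 1, sx - r:sx + r + 1].any():
                        continue  # source patch must be hole-free
                    src = d[sy - r:sy + r + 1, sx - r:sx + r + 1]
                    err = np.sum((src[known] - tgt[known]) ** 2)
                    if err < best_err:
                        best_err, best_val = err, src[r, r]
            if best_val is not None:
                d[y, x] = best_val
                m[y, x] = False
                filled_any = True
        if not filled_any:
            break  # remaining holes are unreachable (image border)
    return d
```

On a synthetic two-region depth map with a few punched-out pixels, this recovers each hole from the matching region; the real method replaces the exhaustive source scan with the modified 3-step search described in Chapter 3.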
For evaluation, the proposed method is tested on the Tsukuba Stereo Dataset, which provides a 3D video with ground-truth depth maps, occlusion maps, and RGB images; PSNR and computation time serve as evaluation metrics. Moreover, a set of self-captured RGB-D depth maps and their refined results are also shown, demonstrating the improvement over the original occluded depth maps.

TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF TABLES
CHAPTER 1 INTRODUCTION
  1.1 Development and Application
    1.1.1 3D Reconstruction
    1.1.2 Augmented Reality
    1.1.3 Image Processing
  1.2 Motivation
  1.3 Thesis Organization
CHAPTER 2 BACKGROUND
  2.1 Kinect-like Depth Camera Characteristics
    2.1.1 Triangulation
    2.1.2 Error Sources
      2.1.2.1 Sensor
      2.1.2.2 Measurement Setup
      2.1.2.3 Properties of Object Surface
  2.2 Related Works on Filtering
    2.2.1 Convolution
    2.2.2 Gaussian Filter
    2.2.3 Bilateral Filter
    2.2.4 Kinect-like Denoising Algorithm
      2.2.4.1 Spatial-Temporal Depth Denoising
      2.2.4.2 Temporal Denoising Algorithm
      2.2.4.3 Temporal Filtering
CHAPTER 3 METHODOLOGIES
  3.1 Exemplar-based Inpainting
  3.2 System Architecture
    3.3.1 Edge Marking
    3.3.2 Middleware Platform
    3.3.3 Hole Padding
      3.3.3.1 3-Step Search
      3.3.3.2 Modified 3-Step Search
    3.3.4 Updating Priority Value
    3.3.5 Temporal Filtering
CHAPTER 4 EXPERIMENTAL RESULTS
  4.1 Experiments on Tsukuba Stereo Dataset
  4.2 Experiments on Real-World Scene
  4.3 Discussion
CHAPTER 5 CONCLUSION
REFERENCE

Title: Temporal and Spatial Denoising of Depth Maps (深度圖像之時間與空間上的雜訊過濾)
Keywords: depth map; Asus Xtion PRO LIVE; Kinect; RGB-D camera; image inpainting
Type: thesis
Full-text permission: release not authorized
Full text: http://ntur.lib.ntu.edu.tw/bitstream/246246/263927/1/ntu-103-R01943155-1.pdf
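The PSNR metric named in the evaluation is standard; a minimal sketch follows. The helper name `psnr` and the default peak value are illustrative assumptions, since the record does not spell out the exact PSNR configuration used in the experiments.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a ground-truth depth map
    `ref` and a refined depth map `test`; `peak` is the maximum
    representable depth value (255 for 8-bit maps)."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical maps
    return 10.0 * np.log10(peak * peak / mse)
```

Higher values indicate a refined depth map closer to the ground truth, which is how the thesis quantifies the improvement over the original occluded maps.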