Title: Depth Recovery by Spectral Energy with Line Spread Functions from a Single Defocused Image
Title (Chinese): 深度還原自單張失焦影像之線擴散函數頻譜能量
Institution: National Taiwan University, Graduate Institute of Electrical Engineering (臺灣大學電機工程學研究所)
Contributors: 陳永耀; 陳政維 (Chen, Cheng-Wei)
Date issued: 2011
Record dates: 2013-03-27; 2018-07-06
Type: thesis
Language: en-US
Format: 7830647 bytes, application/pdf
URI: http://ntur.lib.ntu.edu.tw//handle/246246/253997
Full text: http://ntur.lib.ntu.edu.tw/bitstream/246246/253997/1/ntu-100-R98921002-1.pdf
Keywords: depth recovery; defocus blur; depth from defocus; spectral energy; point spread function; line spread function (深度還原; 散焦模糊; 散焦測距法; 頻譜能量; 點擴散函數; 線擴散函數)

Abstract (translated from Chinese):
Recovering the distance of a photographed object with an image capture device, i.e., depth recovery, has broad applications in intelligent 3D technologies. Depth from Defocus (DFD) is one of the depth-recovery approaches in computer vision research: exploiting the advantage of a single viewpoint, it uses the geometric relationship between defocus blur and scene depth, measuring the amount of blur in the image to recover depth. Based on the scaling relationship between the spatial and frequency domains of the Fourier transform, this thesis proposes a new method that takes the spectral energy of the line spread function, obtained from the gradient of a defocused edge, as a quantitative measure of blur; depth is then recovered from the blur measured at edge regions of a single image together with camera internal parameters supplied in advance. Unlike previously proposed DFD methods, the new method quantifies blur simply, intuitively, and effectively without modeling the spread function that causes the blur or tuning its parameters, thereby avoiding the errors introduced by, for example, Gaussian modeling, and improving the accuracy of depth recovery. Experiments with an uncalibrated consumer digital camera show that the proposed depth-recovery method achieves considerably good accuracy.

Abstract (English):
Recovering depth information, i.e., the distances between objects and the camera, from images is a convenient and practical approach for intelligent 3D technologies. Depth from Defocus (DFD), one of the depth-recovery methods, uses defocus blur to estimate depth and has the advantage of requiring only a single image rather than multiple images. This thesis proposes a novel idea: representing the amount of defocus blur by the spectral energy of the line spread function derived directly from a defocused step edge. As a result, depth can be recovered from a single image using spectral energy together with known internal camera parameters. Unlike previous DFD methods, the proposed method does not model spread functions with Gaussian functions; using the spectral energy directly as the measure of blurriness eliminates the modeling error inherent in Gaussian-based approaches. Experiments using an uncalibrated commercial digital camera validate the proposed method and show considerably good accuracy in depth recovery.
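The blur measure described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function names `lsf_spectral_energy` and `depth_from_blur` are hypothetical. The line spread function (LSF) is taken as the spatial derivative of a 1-D intensity profile across a step edge, and, by the Fourier scaling property, its spectral energy drops as blur widens the LSF. `depth_from_blur` shows the standard textbook thin-lens depth-from-defocus relation, not the thesis's specific mapping from spectral energy to depth, which involves internal camera parameters and a calibration step not shown here.

```python
import numpy as np

def lsf_spectral_energy(edge_profile):
    """Spectral energy of the line spread function (LSF) as a blur measure.

    The LSF is the spatial derivative of the intensity profile across a
    step edge.  Blurring widens the LSF in space, so by the Fourier
    scaling property its spectrum contracts toward low frequencies and
    the spectral energy decreases.
    """
    lsf = np.gradient(np.asarray(edge_profile, dtype=float))
    lsf /= np.abs(lsf).sum()  # normalize out the edge contrast
    return float(np.sum(np.abs(np.fft.rfft(lsf)) ** 2))

def depth_from_blur(c, f, s, N):
    """Textbook thin-lens depth-from-defocus relation (not thesis-specific).

    c: blur-circle diameter on the sensor, f: focal length,
    s: lens-to-sensor distance, N: f-number (f / aperture diameter).
    For an object beyond the focused plane, the thin-lens equation gives
        u = f * s / (s - f - c * N).
    """
    return f * s / (s - f - c * N)
```

A sharp step edge yields a higher spectral energy than a blurred one, so the energy serves as a monotonic blur score; converting that score into a blur-circle diameter and then a depth is where the known internal camera parameters enter.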