https://scholars.lib.ntu.edu.tw/handle/123456789/119026
Title: | 利用自動物體分割從多視角影像建立3D模型 3D Reconstruction by Automatic Object Segmentation from Multi-view Images |
Author: | 王映萱 Wang, Ying-Hsuang |
Keywords: | 3D model;object segmentation;multi-view image;automatic | Date Issued: | 2015 | Abstract: | As 3D printing becomes more popular, the demand for 3D models will keep growing. However, models are not easy to obtain: even for an experienced expert, building a 3D model of a real-world object takes a long time, and it is far from an easy task for people without any background knowledge. In this thesis, we propose an approach that frees 3D model creation from the constraints of experience and expertise, allowing ordinary users to create their own 3D models. First, we develop a guidance application on a mobile device that guides users to capture a set of images of the target object sufficient for reconstruction. Second, to avoid the background being reconstructed as part of the 3D model, we design a fully automatic object segmentation method that separates foreground from background in the multi-view images. Third, we use the segmentation masks to build a visual hull as our final output. The key to our approach is a Markov random field (MRF) framework that combines the foreground/background appearance models, epipolar geometry constraints, and cross-view feature matching constraints into a single energy function, which we minimize efficiently with the graph cut algorithm to obtain the segmentation result.
We create a visual hull of the object from the segmentation masks and back-project it onto all the images to make the silhouettes consistent across all views. The consistent silhouettes are then used to update the foreground appearance model. We alternate the graph cut step and the model update step until the segmentation converges. Our method can reconstruct texture-less objects, which remain a challenge for most multi-view stereo (MVS) algorithms. In addition, by taking both color and spatial constraints into account, our approach can separate foreground and background regions whose color distributions overlap, which is difficult for traditional segmentation methods that rely on color alone. |
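To make the alternating minimize-then-update loop concrete, here is a minimal single-view sketch in Python. It is not the thesis implementation: the graph cut solver is replaced by simple iterated conditional modes (ICM), the epipolar and visual-hull terms are omitted, the appearance models are reduced to grayscale means, and all names (`segment`, `icm`, `unary`) are illustrative.

```python
# Hypothetical single-view sketch of the iterative MRF segmentation loop.
# Assumptions: ICM stands in for graph cut; multi-view (epipolar,
# visual-hull) energy terms are omitted; the "image" is a toy grid.
import numpy as np

def unary(img, mu_fg, mu_bg):
    # Data term: squared distance to each appearance model's mean.
    return (img - mu_fg) ** 2, (img - mu_bg) ** 2

def icm(img, labels, mu_fg, mu_bg, lam=0.5, sweeps=5):
    # Greedy per-pixel energy minimization (a stand-in for graph cut).
    h, w = img.shape
    d_fg, d_bg = unary(img, mu_fg, mu_bg)
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                # Smoothness term: count disagreeing 4-neighbours.
                nb = [labels[yy, xx]
                      for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                      if 0 <= yy < h and 0 <= xx < w]
                e_fg = d_fg[y, x] + lam * sum(1 for l in nb if l != 1)
                e_bg = d_bg[y, x] + lam * sum(1 for l in nb if l != 0)
                labels[y, x] = 1 if e_fg < e_bg else 0
    return labels

def segment(img, iters=10):
    # Initialise: assume the border is background, the centre foreground.
    labels = np.zeros(img.shape, dtype=int)
    labels[1:-1, 1:-1] = 1
    for _ in range(iters):
        # Update appearance models from the current segmentation,
        # then re-minimize the energy -- the loop from the abstract.
        mu_fg = img[labels == 1].mean()
        mu_bg = img[labels == 0].mean()
        new = icm(img, labels.copy(), mu_fg, mu_bg)
        if np.array_equal(new, labels):  # converged
            break
        labels = new
    return labels

# Toy image: a bright square (foreground) on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
mask = segment(img)
```

In the thesis the ICM step is replaced by a graph cut over an energy that also includes the epipolar and feature-matching terms, and the model update is driven by silhouettes made consistent through the visual hull rather than by a single mask.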
URI: | http://ntur.lib.ntu.edu.tw//handle/246246/275395 | Rights: | Public release date: 2020/8/25. Usage rights: paid authorization agreed (royalties returned to the university). |
Appears in Collections: | Department of Computer Science and Information Engineering |
File | Description | Size | Format | |
---|---|---|---|---|
ntu-104-R02922005-1.pdf | 23.32 kB | Adobe PDF | View/Open |
All items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated.