https://scholars.lib.ntu.edu.tw/handle/123456789/118983
Title: | Light-field Editing Using Patch-based Synthesis |
Authors: | 陳克威 Chen, Ke-Wei |
Keywords: | light-field camera; light-field image; image completion; image reshuffling; image editing | Issue Date: | 2014 | Abstract: | We present an approach to light-field editing using patch-based synthesis. We start from the light-field completion problem and then apply our method to light-field retargeting and light-field reshuffling. We treat a light field as a collection of multi-view 2D images rather than working at the raw-data level, because raw data varies across light-field cameras. Given 4D light-field data, the user specifies an object to be removed in one view, and we transfer the contour of this region to all the other views; the regions across all views form the 4D hole to be completed. We fill this 4D hole with patch-based synthesis, extending the 2D PatchMatch algorithm to a 4D light-field PatchMatch that finds the nearest-neighbor field by measuring distances between 4D patches. In other words, we synthesize the whole 4D light field at once instead of synthesizing each 2D view independently. To retain spatial and angular consistency while synthesizing the target light field, we introduce a depth map to guide the search for similar patches, and we further use depth maps to retain view consistency via the concept of the epipolar plane image (EPI). We define a patch distance function based on color, depth, and EPI, and use it to measure patch similarity.
This framework is applied to the light-field retargeting and reshuffling problems based on bidirectional similarity. To our knowledge, this is the first attempt at patch-based light-field editing. In summary, we propose a method to edit light fields by measuring 4D patch similarity, and a high-quality depth map is not required: in our experiments, the method works well even with only rough depth maps. |
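The patch distance described in the abstract combines a color term with a depth term measured over corresponding 4D patches. A minimal sketch of that idea is shown below; the light-field indexing scheme `L[s, t, y, x, c]`, the patch half-width, the weight `lambda_depth`, and the helper names are illustrative assumptions, not the thesis's actual formulation (the EPI term is omitted for brevity):

```python
import numpy as np

def extract_patch(volume, s, t, y, x, half=3):
    """Extract a (2*half+1) x (2*half+1) spatial patch from view (s, t).

    volume is indexed as [s, t, y, x, ...]: angular coordinates first,
    then spatial coordinates, then optional color channels.
    """
    return volume[s, t, y - half:y + half + 1, x - half:x + half + 1]

def patch_distance(color_lf, depth_lf, src, dst, half=3, lambda_depth=0.5):
    """Sum-of-squared-differences over color plus a weighted depth term.

    src and dst are (s, t, y, x) coordinates of the two patch centers.
    lambda_depth is a hypothetical weight balancing the two terms.
    """
    c_a = extract_patch(color_lf, *src, half=half)
    c_b = extract_patch(color_lf, *dst, half=half)
    d_a = extract_patch(depth_lf, *src, half=half)
    d_b = extract_patch(depth_lf, *dst, half=half)
    color_term = np.sum((c_a - c_b) ** 2)
    depth_term = np.sum((d_a - d_b) ** 2)
    return color_term + lambda_depth * depth_term
```

In a PatchMatch-style search, a distance of this form would be evaluated between source patches in the known region and target patches in the 4D hole, with the depth term discouraging matches that cross depth discontinuities.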
URI: | http://ntur.lib.ntu.edu.tw//handle/246246/261549 | Rights: | Thesis public release date: 2016/08/08. Usage permission: paid authorization granted (royalties returned to the school). |
Appears in Collections: | Department of Computer Science and Information Engineering |
File | Description | Size | Format | |
---|---|---|---|---|
ntu-103-R01922043-1.pdf | | 23.32 kB | Adobe PDF | View/Open
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.