|Title:||Acquiring 360° Light Field by a Moving Dual-Fisheye Camera|
|Authors:||Lo, I-Chan; Chen, Homer H.|
|Keywords:||360° cameras; 360° light field; convolutional neural network; depth estimation; light field generation|
|Issue Date:||1-Jan-2023|
|Journal Volume:||32|
|Source:||IEEE Transactions on Image Processing|
|Abstract:||
In this paper, we propose an efficient deep learning pipeline for light field acquisition using a back-to-back dual-fisheye camera. The proposed pipeline generates a light field from a sequence of 360° raw images captured by the dual-fisheye camera. It has three main components: a convolutional neural network (CNN) that enforces a spatiotemporal consistency constraint on the subviews of the 360° light field, an equirectangular matching cost that increases the accuracy of disparity estimation, and a light field resampling subnet that produces the 360° light field based on the disparity information. Ablation tests are conducted to analyze the performance of the proposed pipeline using the HCI light field datasets with five objective assessment metrics (MSE, MAE, PSNR, SSIM, and GMSD). We also use real data obtained from a commercially available dual-fisheye camera to quantitatively and qualitatively test the effectiveness, robustness, and quality of the proposed pipeline. Our contributions include: 1) a novel spatiotemporal consistency loss that enforces consistency among the subviews of the 360° light field, 2) an equirectangular matching cost that combats the severe projection distortion of fisheye images, and 3) a light field resampling subnet that retains the geometric structure of spherical subviews while enhancing the angular resolution of the light field.
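To illustrate why a matching cost tailored to equirectangular images matters, the sketch below computes a simple latitude-weighted SAD cost volume. Rows near the poles of an equirectangular image are horizontally stretched by the projection, so weighting each row by the cosine of its latitude is one straightforward way to compensate. This is a hypothetical minimal example, not the paper's actual cost formulation; the function name and weighting scheme are assumptions for illustration only.

```python
import numpy as np

def equirect_cost_volume(left, right, max_disp):
    """Latitude-weighted SAD matching cost for a pair of
    equirectangular images (a simplified illustration, not the
    paper's exact equirectangular matching cost).

    left, right : (h, w) float arrays, equirectangular intensity images
    max_disp    : number of candidate horizontal disparities
    returns     : (max_disp, h, w) cost volume
    """
    h, w = left.shape
    # Latitude of each pixel row: +pi/2 at the top row, -pi/2 at the bottom.
    lat = np.linspace(np.pi / 2, -np.pi / 2, h)
    # cos(latitude) down-weights polar rows, where the equirectangular
    # projection stretches pixels horizontally.
    weight = np.cos(lat)[:, None]  # shape (h, 1), broadcast over columns
    cost = np.empty((max_disp, h, w))
    for d in range(max_disp):
        # Horizontal shift with wrap-around, matching the 360-degree
        # image topology (the left and right edges are adjacent).
        shifted = np.roll(right, d, axis=1)
        cost[d] = weight * np.abs(left - shifted)
    return cost
```

A winner-take-all disparity map could then be read off with `cost.argmin(axis=0)`; the paper's pipeline instead feeds the disparity information into a learned resampling subnet.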
|Appears in Collections:||Department of Electrical Engineering (電機工程學系)|