Acquiring 360° Light Field by a Moving Dual-Fisheye Camera
Journal
IEEE Transactions on Image Processing
Journal Volume
32
Date Issued
2023-01-01
Author(s)
Lo, I-Chan
Abstract
In this paper, we propose an efficient deep learning pipeline for light field acquisition using a back-to-back dual-fisheye camera. The proposed pipeline generates a light field from a sequence of 360° raw images captured by the dual-fisheye camera. It has three main components: a convolutional neural network (CNN) that enforces a spatiotemporal consistency constraint on the subviews of the 360° light field, an equirectangular matching cost that improves the accuracy of disparity estimation, and a light field resampling subnet that produces the 360° light field from the disparity information. Ablation tests are conducted to analyze the performance of the proposed pipeline on the HCI light field datasets with five objective assessment metrics (MSE, MAE, PSNR, SSIM, and GMSD). We also use real data captured by a commercially available dual-fisheye camera to quantitatively and qualitatively test the effectiveness, robustness, and quality of the proposed pipeline. Our contributions include: 1) a novel spatiotemporal consistency loss that constrains the subviews of the 360° light field to be mutually consistent, 2) an equirectangular matching cost that combats the severe projection distortion of fisheye images, and 3) a light field resampling subnet that retains the geometric structure of spherical subviews while enhancing the angular resolution of the light field.
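The five objective metrics named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's evaluation code: the SSIM below is a single-window simplification of the usual sliding-window formulation, and the GMSD uses `np.gradient` in place of the Prewitt kernels of the original GMSD definition; all constants are conventional defaults assumed for images scaled to [0, 1].

```python
import numpy as np

def mse(x, y):
    """Mean squared error."""
    return float(np.mean((x - y) ** 2))

def mae(x, y):
    """Mean absolute error."""
    return float(np.mean(np.abs(x - y)))

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB (inf for identical images)."""
    m = mse(x, y)
    return float('inf') if m == 0 else float(10 * np.log10(peak ** 2 / m))

def ssim_global(x, y, peak=1.0):
    """Single-window SSIM (simplified: no Gaussian sliding window)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def gmsd(x, y, c=0.0026):
    """Gradient magnitude similarity deviation (np.gradient variant)."""
    gx = np.hypot(*np.gradient(x))   # gradient magnitude of reference
    gy = np.hypot(*np.gradient(y))   # gradient magnitude of test image
    gms = (2 * gx * gy + c) / (gx ** 2 + gy ** 2 + c)
    return float(gms.std())          # deviation of the similarity map
```

For a reference image and an identical copy, MSE, MAE, and GMSD are 0, PSNR is infinite, and SSIM is 1; lower MSE/MAE/GMSD and higher PSNR/SSIM indicate better reconstruction quality.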
Subjects
360° cameras | 360° light field | convolutional neural network | depth estimation | light field generation
Type
journal article