First-Person View Hand Parameter Estimation Based on Fully Convolutional Neural Network
Journal
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Journal Volume
12047 LNCS
Pages
224-237
Date Issued
2020
Author(s)
Abstract
In this paper, we propose a real-time framework that can not only estimate the locations of hands within an RGB image but also simultaneously estimate their corresponding 3D joint coordinates and their hand side (left or right). Most recent methods for hand pose analysis from monocular images focus only on the 3D coordinates of hand joints, which does not tell the full story to users or applications. Moreover, to meet the demands of applications such as virtual reality or augmented reality, a first-person-viewpoint hand pose dataset is needed to train our proposed CNN. Thus, we collect a synthetic RGB dataset captured from an egocentric view with the help of Unity, a 3D engine. The synthetic dataset is composed of hands with various postures, skin colors, and sizes. For each hand within an image, we provide 21 joint annotations, including 3D coordinates, 2D locations, and the corresponding hand side (left or right). © 2020, Springer Nature Switzerland AG.
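As a concrete illustration of the annotation format described in the abstract (21 joints per hand, each with 3D coordinates and 2D image locations, plus a left/right hand-side label), the following Python sketch shows one way such a per-hand record could be represented. The class and field names are hypothetical and do not reflect the authors' actual dataset schema.

from dataclasses import dataclass
import numpy as np

@dataclass
class HandAnnotation:
    """Hypothetical per-hand record mirroring the annotations described in
    the abstract: 21 joints with 3D coordinates and 2D image locations,
    plus a hand-side label. Names are illustrative assumptions only."""
    joints_3d: np.ndarray  # shape (21, 3): x, y, z coordinates per joint
    joints_2d: np.ndarray  # shape (21, 2): u, v pixel locations in the RGB image
    is_left: bool          # hand side: True for left hand, False for right

    def __post_init__(self):
        # Sanity-check that exactly 21 joints are annotated, as stated in the abstract.
        assert self.joints_3d.shape == (21, 3)
        assert self.joints_2d.shape == (21, 2)

# Example: one right-hand annotation filled with placeholder values.
ann = HandAnnotation(
    joints_3d=np.zeros((21, 3), dtype=np.float32),
    joints_2d=np.zeros((21, 2), dtype=np.float32),
    is_left=False,
)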
Subjects
Augmented reality; Convolution; Pattern recognition; Virtual reality; 3D coordinates; First person; Hand pose estimation; Joint coordinates; Monocular images; Right handed; Synthetic data; Convolutional neural networks
Type
conference paper
