Pedestrian Detection from Lidar Data via Cooperative Deep and Hand-Crafted Features
Journal
Proceedings - International Conference on Image Processing, ICIP
ISBN
9781479970612
Date Issued
2018-08-29
Author(s)
Lin, Tzu Chieh
Tan, Daniel Stanley
Tang, Hsueh Ling
Chien, Shih Che
Chang, Feng Chia
Chen, Yung Yao
Hua, Kai Lung
Abstract
Autopilot systems need to detect pedestrians with high precision and recall both day and night. This means that we cannot rely on ordinary cameras to sense the surroundings because of their sensitivity to lighting conditions. An alternative to images is light detection and ranging (LiDAR) sensors, which produce three-dimensional point clouds in which each point represents the distance to an object. However, most pedestrian detection systems are designed for image inputs rather than point clouds. In this paper, we propose a method for detecting pedestrians using only the three-dimensional point clouds generated by the LiDAR. Our approach first projects the three-dimensional point cloud onto a two-dimensional plane. We then extract both hand-crafted features and learned features from a convolutional neural network in order to train a support vector machine (SVM) to detect pedestrians. Our proposed method achieved significant improvements in terms of F1-measure over prior state-of-the-art methods.
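The projection step described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the abstract does not specify the projection type or its parameters, so the spherical (front-view) range-image projection, the 32×64 grid size, and the assumed field of view below are all assumptions.

```python
import numpy as np

def project_to_2d(points, h=32, w=64):
    """Project 3D LiDAR points (N x 3 array) onto a 2D range image.

    Hypothetical spherical front-view projection: each point's azimuth
    and elevation angles select a pixel, and the pixel stores the range
    (distance) of the nearest point that falls into it.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                  # range per point
    azimuth = np.arctan2(y, x)                       # horizontal angle
    elevation = np.arcsin(z / np.maximum(r, 1e-9))   # vertical angle

    # Map angles to pixel indices (assumed FOV: +/-90 deg az, +/-15 deg el)
    col = ((azimuth + np.pi / 2) / np.pi * (w - 1)).astype(int)
    row = ((elevation + np.radians(15)) / np.radians(30) * (h - 1)).astype(int)

    img = np.zeros((h, w), dtype=np.float32)
    valid = (col >= 0) & (col < w) & (row >= 0) & (row < h)
    # Keep the nearest return when several points land in the same cell
    for rr, cc, rv in zip(row[valid], col[valid], r[valid]):
        if img[rr, cc] == 0 or rv < img[rr, cc]:
            img[rr, cc] = rv
    return img
```

In the pipeline the abstract describes, such a 2D projection would then feed both a hand-crafted feature extractor and a CNN, with the concatenated features passed to an SVM classifier.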
Subjects
Deep learning | Lidar | Pedestrian detection
Type
conference paper
