Human Robot Interaction with Vision-Based Simultaneous Localization and Mapping for Service Robotics
Date Issued
2012
Author(s)
Chen, Kuan-Yu
Abstract
With the development of technology, automation products have become an indispensable part of daily life. As society ages, robots will gradually enter our daily lives, and intelligent service robots are being developed for this purpose. A service robot should be able to move and localize itself in an unknown environment, so that it can interact with humans and its surroundings.
The purpose of this thesis is to develop a system that uses visual landmarks as position references to localize the robot and build a map of its surroundings simultaneously. The system also includes a human-robot interaction component: through interaction with humans, the robot records visual landmarks and localizes itself to meet the demand for autonomy. The experiments in this thesis are divided into two main parts. The first part verifies what impact a human entering the robot's visual range has on the result while V-SLAM is executing. The second part combines the result of the first part with human-robot interaction, letting the robot follow a person and construct a map of its surroundings simultaneously.
This thesis employs the FAST corner detector to extract feature-point locations and obtains their 3D coordinates through stereo vision and the Kinect. We propose a concept named "feature buffer," which filters out temporary feature points and reduces the computational load of the system. We also propose an idea called "feature orientation," which distinguishes landmarks that cluster together and enhances the uniqueness of each landmark, yielding better data association. The Histogram of Oriented Gradients (HOG) and the Kinect depth data are used as a human detector. The human detector finds the location of a person, and the system uses this information both to remove the feature points on the human body and to follow the person. Finally, we use the Extended Kalman Filter (EKF) to fuse the errors of the sensor and the odometer. After fusion, the probable locations of the robot and the landmarks are computed, and the map of the environment is constructed.
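The abstract does not give implementation details for the feature buffer. A minimal sketch of the idea, under the assumption (not stated in the thesis) that a candidate feature must be re-observed for a fixed number of consecutive frames before it is promoted to a map landmark, could look like this; the class name, the threshold `PROMOTE_AFTER`, and the integer feature ids are all hypothetical:

```cpp
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical feature buffer: a candidate feature must be re-observed in
// PROMOTE_AFTER consecutive frames before it becomes a map landmark;
// features that disappear earlier are discarded as temporary, which keeps
// them out of the SLAM state and reduces computational load.
struct FeatureBuffer {
    static constexpr int PROMOTE_AFTER = 3;
    std::unordered_map<int, int> hits;  // feature id -> consecutive observations

    // Feed the ids observed in the current frame; returns the ids that
    // reached the threshold and should be promoted to landmarks.
    std::vector<int> update(const std::vector<int>& observed_ids) {
        std::vector<int> promoted;
        std::unordered_map<int, int> next;
        for (int id : observed_ids) {
            int n = hits.count(id) ? hits[id] + 1 : 1;
            if (n >= PROMOTE_AFTER)
                promoted.push_back(id);   // stable enough to become a landmark
            else
                next[id] = n;             // still buffered
        }
        hits = std::move(next);           // ids not seen this frame are dropped
        return promoted;
    }
};
```

With `PROMOTE_AFTER = 3`, a feature id seen in three consecutive frames is promoted on the third call to `update`, while an id that skips a frame starts over from zero.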
All system models and software frameworks in this thesis are implemented in the C/C++ programming language with MRPT (the Mobile Robot Programming Toolkit) and the OpenNI library, and all of them are integrated and developed in Visual Studio.
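The EKF fusion described in the abstract maintains a full robot-plus-landmark state; as an illustration only, the predict/update cycle can be reduced to one dimension, with scalars standing in for the state, covariance, and Jacobian matrices. Everything here (the class name, the known landmark position, the noise variances) is a made-up example, not the thesis's formulation:

```cpp
#include <cassert>
#include <cmath>

// 1-D illustration of the EKF cycle that fuses odometer motion (predict)
// with a landmark range measurement (update).
struct Ekf1D {
    double x;  // robot position estimate
    double P;  // estimate variance

    // Prediction: apply odometry displacement u; Q is odometer noise variance.
    void predict(double u, double Q) { x += u; P += Q; }

    // Update with range measurement z to a known landmark at position lm;
    // R is sensor noise variance. h(x) = lm - x, so the Jacobian H = -1.
    void update(double z, double lm, double R) {
        const double H = -1.0;
        double innovation = z - (lm - x);  // measurement residual
        double S = H * P * H + R;          // innovation variance
        double K = P * H / S;              // Kalman gain
        x += K * innovation;               // corrected state
        P  = (1.0 - K * H) * P;            // corrected variance
    }
};
```

Starting from `x = 0, P = 0`, a predict step with `u = 1, Q = 0.1` followed by an update with `z = 3.9, lm = 5, R = 0.1` pulls the estimate toward 1.05 and shrinks the variance to 0.05, showing how the measurement corrects the drifting odometry.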
Subjects
vision-based simultaneous localization and mapping
Extended Kalman Filter
intelligent service robot
digital image processing
human robot interaction
Type
thesis
File(s)
Name
index.html
Size
23.27 KB
Format
HTML
Checksum
(MD5):3c2f7548e374a791ee2f37683ed2d0a8