Multi-Modal Emotion Recognition for Human-Robot Interaction
Date Issued
2016
Author(s)
Li, Shang-Ting
Abstract
Robotics has seen much development over the last few decades, and how robots live alongside humans has become an important issue. Robots need to understand human social cues and rules to interact correctly with people in home and public environments; human-robot interaction is therefore a key issue in achieving natural and harmonious coexistence between humans and robots. This thesis integrates facial expression, body movement, and speech tone to perform multi-modal emotion recognition. We use recurrent neural networks as the learning models and the fuzzy integral for multi-modal fusion, enhancing robots' cognitive ability to understand human behavior and emotion. Emotion is a form of high-level cognition that heavily affects human behavior and decisions. We propose a multi-modal emotion recognition system that allows robots to predict emotion robustly: combining multiple modalities provides more complete information for emotion recognition and copes with different environments. Each uni-modal model is trained on time-sequence data, and their outputs are fused with dynamically adjusted weights. With the proposed method, robots not only can predict emotion but also gain human-like cognitive ability.
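The abstract's fusion step combines per-modality confidence scores with a fuzzy integral. As a minimal sketch (not the thesis's actual implementation), the following shows Sugeno fuzzy-integral fusion under a lambda-fuzzy measure: each modality gets an illustrative density weight, the measure's lambda parameter is solved numerically, and the fused score is the Sugeno integral of the sorted modality scores. All density values and function names here are assumptions for illustration.

```python
def solve_lambda(densities, tol=1e-12):
    """Solve 1 + lam = prod(1 + lam * g_i) for the nonzero root lam > -1
    (bisection). lam = 0 corresponds to an additive measure."""
    def f(lam):
        prod = 1.0
        for g in densities:
            prod *= 1.0 + lam * g
        return prod - (1.0 + lam)

    s = sum(densities)
    if abs(s - 1.0) < tol:
        return 0.0                      # densities already sum to 1: additive
    if s > 1.0:
        lo, hi = -1.0 + tol, -tol       # sub-additive case: lam in (-1, 0)
    else:
        lo, hi = tol, 1.0               # super-additive case: lam in (0, inf)
        while f(hi) < 0.0:              # expand until the root is bracketed
            hi *= 2.0
    for _ in range(200):                # bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)


def sugeno_fuse(scores, densities):
    """Sugeno fuzzy integral of per-modality scores w.r.t. a lambda-measure.

    scores    : confidence of one emotion class from each modality, in [0, 1]
    densities : fuzzy density (importance) of each modality, in (0, 1)
    """
    lam = solve_lambda(densities)
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    G = 0.0        # measure of the set of modalities seen so far
    fused = 0.0
    for i in order:
        g = densities[i]
        G = g + G + lam * g * G        # lambda-measure recursion
        fused = max(fused, min(scores[i], G))
    return fused


# Hypothetical example: face / body / speech scores for one emotion class,
# with equal illustrative densities of 0.3 per modality.
fused = sugeno_fuse([0.9, 0.6, 0.2], [0.3, 0.3, 0.3])
```

Because the densities sum to less than 1, lambda comes out positive (a super-additive measure): agreement between modalities is rewarded, which matches the abstract's goal of robust prediction when a single modality is unreliable.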
Subjects
Recurrent Neural Network
Fuzzy Integral
Emotion Recognition
Multi-modal Fusion
Mobile Robots
Human-robot Interaction
Type
thesis
File(s)
Name
ntu-105-R03522805-1.pdf
Size
23.54 KB
Format
Adobe PDF
Checksum
(MD5):bf559c1934f79cc740ac20a342d0291a