EyeLasso: Real-World Object Selection using Gaze-based Gestures

College: College of Electrical Engineering and Computer Science
Department: Graduate Institute of Computer Science and Information Engineering
Advisor: 陳彥仰
Author: 張巧慧 (Chang, Chiao-Hui)
Year: 2015
Type: thesis
Record dates: 2017-03-03, 2018-07-05
URI: http://ntur.lib.ntu.edu.tw//handle/246246/275415

Abstract: Selecting objects in real-world settings is currently difficult to automate and requires significant manual effort. We propose a gaze-based gesture approach using wearable eye trackers. However, achieving effective gaze-based selection of real-world objects presents several challenges, such as the Double Role problem and the Midas Touch problem. Prior studies required explicit manual activation and deactivation to confirm the user's intention, which impedes fast and continuous interaction. We present EyeLasso, a fast gaze-based selection technique that allows users to select the target they see with a single Lasso gaze gesture, without requiring additional manual input. EyeLasso uses a Random Forest classifier for gesture detection and OpenCV's GrabCut algorithm to improve the accuracy of target selection. Results from our 6-user experiments with 10 object-selection tasks, evaluating both gesture detection and item selection, show that EyeLasso selected the target with 90% accuracy without requiring manual input (0.17 unintended selections per two minutes, 10% false-negative rate).

Keywords: gaze interaction (眼神互動); eye tracking (眼神追蹤); gestural interaction; input and interaction technologies; eye tracker; smart glasses (智慧眼鏡); gaze-based target selection (眼神目標選取技術)

File: 5644216 bytes, application/pdf
Thesis public date: 2015/8/11
Thesis rights: paid authorization agreed (royalties returned to the school)
Full text: http://ntur.lib.ntu.edu.tw/bitstream/246246/275415/1/ntu-104-R02922024-1.pdf
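The abstract states that EyeLasso detects the Lasso gaze gesture with a Random Forest classifier. A minimal sketch of that idea, assuming fixed-length windows of (x, y) gaze samples summarized into simple shape features — the feature set, window length, and synthetic data here are illustrative assumptions, not details from the thesis:

```python
# Hedged sketch: Random Forest gaze-gesture detection. The features
# (mean, spread, path length, bounding-box area) and window size are
# hypothetical choices for illustration, not the thesis's actual design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def gaze_features(window):
    """Summarize an (N, 2) window of gaze points into a feature vector:
    mean position, per-axis std, total path length, bounding-box area."""
    diffs = np.diff(window, axis=0)
    path_len = np.linalg.norm(diffs, axis=1).sum()
    bbox_area = (window.max(axis=0) - window.min(axis=0)).prod()
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           [path_len, bbox_area]])

def make_window(is_lasso):
    """Synthetic gaze window: a noisy loop for a 'lasso' gesture,
    a tight noisy cluster for an ordinary fixation."""
    if is_lasso:
        t = np.linspace(0, 2 * np.pi, 30)
        pts = np.stack([np.cos(t), np.sin(t)], axis=1)
        return pts + rng.normal(0, 0.05, pts.shape)
    return rng.normal(0, 0.05, (30, 2))

# Train on labeled synthetic windows: lasso loops vs. fixations.
X = np.array([gaze_features(make_window(i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a fresh lasso-shaped window.
print(bool(clf.predict([gaze_features(make_window(True))])[0]))
```

In this setup the loop-shaped windows have a much larger path length and bounding box than fixations, so the classifier separates them easily; real gaze data would need noisier, user-collected training windows.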
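The abstract also states that EyeLasso refines target selection with GrabCut via OpenCV. A minimal sketch of that step, assuming the lasso gesture yields a bounding rectangle that seeds `cv2.grabCut`; the synthetic image and rectangle coordinates are assumptions for illustration:

```python
# Hedged sketch: OpenCV GrabCut seeded by a rectangle (e.g., derived
# from a lasso gaze gesture) to segment the selected object. The scene
# here is synthetic: a bright square "object" on a dark background.
import numpy as np
import cv2

img = np.full((120, 160, 3), 40, np.uint8)   # dark background
img[30:90, 50:110] = (200, 180, 60)          # bright "object"

rect = (40, 20, 90, 80)                      # (x, y, w, h) from the gesture
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5,
            cv2.GC_INIT_WITH_RECT)

# Pixels labeled (probably) foreground form the selected object.
selected = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
print(selected.sum())
```

GrabCut needs only a rough enclosing rectangle, which matches the abstract's motivation: a coarse gaze gesture can seed an accurate pixel-level selection.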