EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration
Date Issued
2014
Author(s)
Wang, Cheng-Yao
Abstract
We present EverTutor, a system that automatically generates interactive tutorials on smartphones from user demonstration. For tutorial authors, it simplifies tutorial creation; for tutorial users, it provides contextual step-by-step guidance and avoids frequent context switching between the tutorial and the user's primary task. To generate tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions and contextually overlays the corresponding input prompt. It also verifies the correctness of the user's interaction in order to guide the user step by step. We conducted a 6-person user study on creating tutorials and a 12-person user study on browsing tutorials, comparing EverTutor's interactive tutorials to static and video ones. The results show that creating tutorials with EverTutor is simpler and faster than producing static or video tutorials. When following the tutorials, task completion time with interactive tutorials was 3 to 6 times faster than with static or video tutorials, regardless of age group. In terms of user preference, 83% of the users chose the interactive type as their preferred tutorial format and rated it the easiest to follow and to understand.
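The abstract mentions that, at browsing time, the system locates the recorded target region on the current screen before overlaying an input prompt. The thesis text is not included in this record, so the following is only a minimal sketch of one common way to do such vision-based localization, assuming OpenCV template matching; the function name and file paths are hypothetical, not EverTutor's actual implementation.

```python
import cv2

def locate_target(screenshot_path: str, target_path: str, threshold: float = 0.8):
    """Find a previously recorded target patch inside the current screen capture.

    Returns the (x, y, w, h) bounding box of the best match, or None when the
    match score falls below `threshold` (e.g. the target is not on screen yet).
    """
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    target = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation over the whole screenshot.
    result = cv2.matchTemplate(screen, target, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    if max_val < threshold:
        return None
    h, w = target.shape
    return (max_loc[0], max_loc[1], w, h)

if __name__ == "__main__":
    # Hypothetical inputs: a live screen capture and a target patch saved
    # during the author's demonstration.
    box = locate_target("current_screen.png", "recorded_target.png")
    print("Overlay the input prompt at:", box)
```

A tutorial player built this way would re-run the search on each new screen frame and advance to the next step once the expected gesture is detected on the located region.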
Subjects
Interactive tutorials
Touch gestures
Smartphones
Type
thesis
File(s)
Name
ntu-103-R00944052-1.pdf
Size
23.32 KB
Format
Adobe PDF
Checksum
(MD5):335c5d9b2630e856b2a259ecac707652
