Hsu P.-Y.; Cheng W.-F.; Hsieh P.-J.; Lin Y.-L.; Winston Hsu (2015). Real-time instant event detection in egocentric videos by leveraging sensor-based motion context. Conference paper. ISBN: 9781450334594. DOI: 10.1145/2733373.2806404. Scopus ID: 2-s2.0-84962921161.
Date deposited: 2019-07-10.
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/413010
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84962921161&doi=10.1145%2f2733373.2806404&partnerID=40&md5=1a9f6aa1e8b2bed518f99fa023b2caf3

Abstract: With the rapid growth of egocentric videos from wearable devices, the need for instant video event detection is emerging. Unlike conventional video event detection, it requires more consideration of real-time event detection and immediate video recording due to the computational cost on wearable devices (e.g., Google Glass). Conventional video event detection analyzes video content in an offline process, which is time-consuming for visual analysis. Observing that wearable devices usually come with sensors, we propose a novel approach for instant event detection in egocentric videos by leveraging sensor-based motion context. We compute statistics of sensor data as features. Next, we predict the user's current motion context with a hierarchical model, and then choose the corresponding ranking model to rate the importance score of the timestamp. With the importance score provided in real time, the camera on the wearable device can dynamically record micro-videos without wasting power and storage. In addition, we collected a challenging daily-life dataset called EDS (Egocentric Daily-life Videos with Sensor Data), which contains both egocentric videos and sensor data recorded by Google Glass from different subjects. We evaluate the performance of our system on the EDS dataset, and the results show that our method outperforms other baselines. © 2015 ACM.

Keywords: Egocentric Video; Event Detection; Sensor; Wearable Device
[SDGs] SDG16
Indexed keywords: Digital storage; Glass; Motion analysis; Sensors; Video recording; Wearable computers; Wearable technology; Computational costs; Egocentric Video; Event detection; Ranking model; Video contents; Video event detections; Visual analysis; Wearable devices; Wearable sensors
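The pipeline the abstract describes (per-window sensor statistics as features, a motion-context prediction step, then a context-specific ranking model whose importance score gates recording) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: every function name, threshold, weight, and the flat rule-based stand-in for the paper's hierarchical model are assumptions made for illustration only.

```python
# Illustrative sketch of the abstract's pipeline:
#   sensor-window statistics -> motion-context prediction ->
#   context-specific importance score -> recording trigger.
# All names, thresholds, and weights below are hypothetical.
from statistics import mean, stdev


def window_features(accel_window):
    """Simple statistics over one window of accelerometer magnitudes."""
    return {
        "mean": mean(accel_window),
        "std": stdev(accel_window),
        "max": max(accel_window),
    }


def predict_motion_context(feat):
    """Rule-based stand-in for the paper's hierarchical context model."""
    if feat["std"] < 0.05:
        return "static"  # e.g., sitting or standing still
    return "walking" if feat["mean"] < 1.5 else "running"


# One toy ranking model per motion context: a weighted sum of the
# window statistics (weights are made up for illustration).
RANKERS = {
    "static": lambda f: 0.2 * f["std"] + 0.1 * f["max"],
    "walking": lambda f: 0.5 * f["std"] + 0.3 * f["max"],
    "running": lambda f: 0.8 * f["std"] + 0.5 * f["max"],
}


def importance_score(accel_window):
    """Predict the context, then score the window with its ranker."""
    feat = window_features(accel_window)
    context = predict_motion_context(feat)
    return context, RANKERS[context](feat)


def should_record(accel_window, threshold=0.5):
    """Trigger micro-video recording when the score passes a threshold."""
    _, score = importance_score(accel_window)
    return score >= threshold
```

In this sketch, routing each window through a per-context ranker mirrors the abstract's design choice: what counts as an important moment while running differs from what counts while sitting still, so a single global ranking model would conflate the two.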