A Semantic Framework for Object-Based and Event-Based Video Content Adaptation
Date Issued
2008
Author(s)
Cheng, Wen-Huang
Abstract
In pervasive media environments, adaptation is a key technology for supporting universal multimedia access by transforming multimedia contents to fit the usage environment. In terms of personalization, effective adaptation can greatly benefit from taking into account the semantics of multimedia contents. The goal of this dissertation is to provide systematic approaches that improve automatic multimedia adaptation at the semantic level. In this dissertation, a generic adaptation framework and its fundamental design principles are proposed. By exploiting specific domain knowledge, we bridge the gap between low-level computational features and high-level semantic concepts, whereby the associated adaptation operations can be effectively designed to maximize the user’s multimedia experience. Based on the proposed framework, our work focuses on the semantic adaptation of video contents, where two alternative approaches to semantics modeling are investigated: the object-based and the event-based. In the object-based approach, a visual model is constructed for locating semantic video objects so as to improve the user’s browsing experience of high-quality professional videos on devices with small displays. In the event-based approach, both visual and aural information are exploited to characterize semantic video events, which can benefit the user’s navigation in hours-long home videos. The two systems can be viewed as technical realizations of the proposed adaptation framework and demonstrate the effectiveness of automatic high-level semantics analysis.
Subjects
Multimedia Content Adaptation
Semantic Analysis
Video Object Detection
Video Event Detection
Type
thesis
File(s)
Name
ntu-97-D93944001-1.pdf
Size
23.32 KB
Format
Adobe PDF
Checksum
(MD5): 9c3900d5f53488cdfd364e871795dc91
