Distributed Classification of Asynchronous Partial Model for Non-regular Drifting Data

Date Issued
2015
Author(s)
Chen, Yu-Fen
URI
http://ntur.lib.ntu.edu.tw//handle/246246/276606
Abstract
Big Data is characterized by the 5Vs (Volume, Velocity, Variety, Value, and Veracity) across many kinds of data (scientific and engineering, social network, sensor/IoT/IoE, and multimedia such as audio, video, and images), all of which contribute to the Big Data challenges. This creates an urgent need to efficiently turn raw data into structured information. One predominant approach is the distributed classification ensemble, which improves prediction efficiency by using an ensemble of distributed models, or by combining distributed information via statistics, and allows multiple devices to collect data concurrently. With the growing popularity of Big Data applications and of wireless and mobile technology, the amount and variety of data generated by distributed devices have been increasing tremendously. As a result, distributed classification in Big Data faces new challenges. There are three main challenges in distributed Big Data systems: 1) the classification models produced by distributed devices are asynchronous and incomplete, so traditional distributed classification algorithms, which rely on horizontal or vertical sub-databases, cannot be applied in this scenario; 2) because of the varied characteristics of Big Data, simply partitioning data into equal-sized chunks for model construction forfeits much of the performance benefit of the classification models; in particular, non-regular recurring data are especially vulnerable to models derived from equal-sized windows, because noisy data interfere with most of the models built from fixed-size buckets; 3) in our distributed environment, popular lazy models need to be transformed into rules to increase the diversity of local models and avoid additional transmission bandwidth consumption. This dissertation addresses these problems. First, it focuses on the distributed streaming scenario and proposes a rule-based distributed classification method for asynchronous partial data (DIP). DIP selects models based on the size of the local databases and the quality of the local models, so that the performance gain can be fully exploited; it saves communication bandwidth by transferring organized information instead of individual instances, and it allows local devices to collect varying amounts of local data. Second, because of data diversity and diverse patterns of change, the performance of classification models built from fixed-size windows or chunks declines; we therefore investigate the characteristics of non-regular data and introduce sequential clustering, which adaptively forms sequential clusters of data based on data distributions and time, reducing inter-cluster noise interference and improving the prediction accuracy of the derived models. Finally, this dissertation proposes two model transformation methods that convert data distributions into rules, so that popular lazy classifiers can be used in our distributed classifier. In both theoretical analysis and experiments, the proposed distributed classification framework achieves a significant performance gain and broader applicability compared with the traditional distributed classification ensemble and existing methods for dynamically changing data.
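
The abstract describes DIP only at a high level. As an illustration of the general idea of combining asynchronous, rule-based local models weighted by local data volume and model quality, the following Python sketch shows one minimal way such a weighted rule vote could look; the names (LocalRule, predict) and the weighting scheme are hypothetical assumptions for illustration and are not taken from the dissertation.

```python
from dataclasses import dataclass

# Hypothetical sketch (not the dissertation's algorithm): each device summarizes
# its local data as simple threshold rules plus a weight reflecting local data
# volume and estimated model quality; a coordinator combines weighted rule votes.

@dataclass
class LocalRule:
    feature: int        # index of the feature the rule tests
    threshold: float    # rule fires when x[feature] <= threshold
    label: int          # class predicted when the rule fires
    weight: float       # local data volume * estimated local accuracy

def predict(rules, x, default_label=0):
    """Weighted vote over all rules that fire on instance x."""
    votes = {}
    for r in rules:
        if x[r.feature] <= r.threshold:
            votes[r.label] = votes.get(r.label, 0.0) + r.weight
    return max(votes, key=votes.get) if votes else default_label

# Two asynchronous devices contribute rules built from different amounts of data.
rules = [
    LocalRule(feature=0, threshold=0.5, label=1, weight=120 * 0.9),  # device A
    LocalRule(feature=1, threshold=0.3, label=0, weight=40 * 0.6),   # device B
]
print(predict(rules, [0.2, 0.1]))  # both rules fire; device A's heavier vote wins -> 1
```

Because only rules and weights (rather than raw instances) cross the network, the sketch also illustrates the bandwidth-saving aspect mentioned in the abstract.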
Subjects
distributed classification
asynchronous partial data
non-regular recurring concept data
sequential clustering
Type
thesis
File(s)
Name
ntu-104-Q90921009-1.pdf
Size
23.32 KB
Format
Adobe PDF
Checksum (MD5)
0f16792264662c228499531d30945312
