https://scholars.lib.ntu.edu.tw/handle/123456789/632310
Title: DET: Depth-Enhanced Tracker to Mitigate Severe Occlusion and Homogeneous Appearance Problems for Indoor Multiple-Object Tracking
Authors: Liu, C.-J.; Lin, Tsung-Nan
Keywords: Autonomous vehicles; Benchmark testing; Cameras; Feature extraction; Target tracking; Tracking; Video sequences
Publication Date: 2022
Volume: 10
Pages: 8287-8304
Source Publication: IEEE Access
Abstract: Multiple-object tracking has long been a topic of interest, since it plays an important role in many computer vision applications. Existing works are mostly designed for outdoor tracking, such as video surveillance and autonomous driving. However, the behaviors of objects in outdoor tracking scenarios do not fully reflect the challenges of indoor tracking environments. In outdoor scenarios, pedestrians and vehicles usually move uniformly from place to place along simple straight paths, and target appearances usually differ. In contrast, in indoor scenarios such as choreographed performances, the dynamic behaviors of dancers lead to severe occlusions, and similar costumes present a homogeneous appearance problem. These severe occlusion and homogeneous appearance problems in indoor tracking cause noticeable performance degradation in existing works. In this paper, we propose a depth-enhanced tracking-by-detection framework and a semantic matching strategy, combined with a scene-aware affinity measurement method, to significantly mitigate the occlusion and homogeneous appearance problems. In addition, we introduce an indoor tracking dataset and increase the diversity of existing benchmark datasets for indoor tracking evaluation. We conduct experiments on both the proposed indoor tracking dataset and the latest MOT benchmarks, MOT17 and MOT20. The experimental results show that our method consistently outperforms other works on the convincing HOTA metric across the benchmarks and reduces the number of identity switches by 20% compared with the second-best tracker, DeepSORT, on our proposed indoor MOT benchmark dataset.
License: This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
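The record does not include the paper's implementation details, but the abstract's core idea of fusing depth cues into tracking-by-detection data association can be illustrated. Below is a minimal sketch, not the authors' actual formulation: it builds a track-to-detection cost matrix that combines box overlap (1 - IoU) with a normalized depth difference. The fusion weight `w_depth`, the `depth_scale` normalization, and the dict-based object layout are all hypothetical choices for illustration.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def depth_enhanced_cost(tracks, detections, w_depth=0.5, depth_scale=1.0):
    """Cost matrix fusing box overlap with a per-object depth term.

    tracks, detections: lists of dicts with 'box' (x1, y1, x2, y2)
    and 'depth' (estimated distance to camera). Lower cost = better match.
    """
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            appearance = 1.0 - iou(t['box'], d['box'])
            # Clamp the depth difference so it stays comparable to the IoU term.
            depth_diff = min(abs(t['depth'] - d['depth']) / depth_scale, 1.0)
            cost[i, j] = (1 - w_depth) * appearance + w_depth * depth_diff
    return cost
```

With such a matrix, two heavily overlapping dancers at different depths receive distinct costs even when their boxes and appearances are nearly identical, which is the intuition behind using depth to suppress identity switches; the matrix would then be fed to an assignment step (e.g. the Hungarian algorithm).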
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85123361737&doi=10.1109%2fACCESS.2022.3144153&partnerID=40&md5=8766c2fef219ab16ae2f99cff4ca0b05
     https://scholars.lib.ntu.edu.tw/handle/123456789/632310
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3144153
SDG/Keywords: Benchmarking; Feature extraction; Security systems; Semantics; Statistical tests; Target tracking; Affinity measurement; Autonomous vehicles; Benchmark testing; Data association; Depth estimation; Feature extraction; Indoor tracking; Indoor tracking dataset; Multiple object tracking; Target tracking; Tracking; Video sequences; Computer vision
Appears in Collections: Department of Electrical Engineering
Items in the IR system are protected by copyright, with all rights reserved, unless otherwise indicated in their copyright terms.