https://scholars.lib.ntu.edu.tw/handle/123456789/581259
Title: Audiovisual Transformer with Instance Attention for Audio-Visual Event Localization
Authors: Lin Y.-B.; Wang Y.-C.F. (YU-CHIANG WANG)
Keywords: Audiovisual; Deep learning; Audio features; Audio information; Benchmark datasets; Cross modality; Event localizations; Learning frameworks; Network modeling; Visual information; Computer vision
Issue Date: 2021
Journal Volume: 12627 LNCS
Start page/Pages: 274-290
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract: Audio-visual event localization requires one to identify the event label across video frames by jointly observing visual and audio information. To address this task, we propose a deep learning framework of cross-modality co-attention for video event localization. Our proposed audiovisual transformer (AV-transformer) is able to exploit intra- and inter-frame visual information, with audio features jointly observed to perform co-attention over the above three modalities. With visual, temporal, and audio information observed across consecutive video frames, our model achieves promising capability in extracting informative spatial/temporal features for improved event localization. Moreover, our model is able to produce instance-level attention, which identifies image regions at the instance level that are associated with the sound/event of interest. Experiments on a benchmark dataset confirm the effectiveness of our proposed framework, with ablation studies performed to verify the design of our proposed network model. © 2021, Springer Nature Switzerland AG.
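The abstract describes audio features attending over visual regions to produce instance-level attention. The following is a minimal illustrative sketch of that cross-modal attention idea, not the authors' actual AV-transformer architecture; the function name, shapes, and use of plain scaled dot-product attention are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(audio, visual, d):
    """Hypothetical sketch: per-frame audio features attend over visual regions.

    audio:  (T, d)    one audio feature vector per frame
    visual: (T, R, d) R region-level visual features per frame
    Returns audio-attended visual features (T, d) and
    instance-level attention weights (T, R).
    """
    # scaled dot-product scores between each frame's audio query
    # and that frame's visual region features
    scores = np.einsum('td,trd->tr', audio, visual) / np.sqrt(d)
    weights = softmax(scores, axis=-1)            # (T, R): which regions "sound"
    attended = np.einsum('tr,trd->td', weights, visual)
    return attended, weights
```

In this sketch, `weights` plays the role of the instance-level attention: per frame, it scores how strongly each visual region is associated with the current audio, which is the intuition behind localizing the sounding instance.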
URI: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85103251788&doi=10.1007%2f978-3-030-69544-6_17&partnerID=40&md5=6546863b7b373c56a9156850d979cfb3
ISSN: 0302-9743
DOI: 10.1007/978-3-030-69544-6_17
Appears in Collections: Department of Electrical Engineering (電機工程學系)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.