SAKURA: On the Multi-hop Reasoning of Large Audio-Language Models Based on Speech and Audio Information
Journal
Interspeech 2025
Series/Report No.
Proceedings of the Annual Conference of the International Speech Communication Association, Interspeech
Start Page
1788
End Page
1792
ISSN
2308-457X
Date Issued
2025-08-17
Author(s)
Abstract
Large audio-language models (LALMs) extend large language models with multimodal understanding of speech, audio, and other modalities. While their performance on speech- and audio-processing tasks has been extensively studied, their reasoning abilities remain underexplored. In particular, their multi-hop reasoning, the ability to recall and integrate multiple facts, lacks systematic evaluation. Existing benchmarks focus on general speech- and audio-processing tasks, conversational abilities, and fairness, but overlook this aspect. To bridge this gap, we introduce SAKURA, a benchmark assessing LALMs' multi-hop reasoning based on speech and audio information. Results show that LALMs struggle to integrate speech/audio representations for multi-hop reasoning, even when they extract the relevant information correctly, highlighting a fundamental challenge in multimodal reasoning. Our findings expose a critical limitation of LALMs, offering insights and resources for future research.
Event(s)
26th Interspeech Conference 2025
Subjects
benchmark
large audio-language model
multi-hop reasoning
SDGs
Publisher
ISCA
Type
conference paper
