Post-Training Quantization for Vision Mamba with K-Scaled Quantization and Reparameterization
Conference Proceedings
IEEE International Workshop on Machine Learning for Signal Processing, MLSP
Start Page
1
End Page
6
ISBN (of the container)
979-8-3315-7029-3
Date Issued
2025-08-31
Author(s)
Abstract
The Mamba model, built on a structured state-space model (SSM), offers linear time complexity and demonstrates significant potential. Vision Mamba (ViM) extends this framework to vision tasks, surpassing Transformer-based models in performance. While model quantization is essential for efficient computing, existing works have focused solely on the original Mamba model and have not been applied to ViM. Moreover, they neglect quantizing the SSM layer, which is central to Mamba and, due to its inherent structure, can suffer substantial error propagation under naive quantization. In this paper, we focus on the post-training quantization (PTQ) of ViM. We address these issues with three core techniques: 1) a k-scaled token-wise quantization method for linear layers, 2) a reparameterization technique that simplifies hidden-state quantization, and 3) a factor-determining method that reduces computational overhead by integrating operations. Experimental results on ImageNet-1k show only a 0.8-1.2% accuracy degradation from PTQ, highlighting the effectiveness of our approach.
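The abstract only names its first technique, k-scaled token-wise quantization. As background, a minimal sketch of plain per-token symmetric quantization (the baseline such methods refine) is given below; the function names and the symmetric int8 scheme are assumptions for illustration, not the paper's actual k-scaled method.

```python
import numpy as np

def token_wise_quantize(x, n_bits=8):
    """Per-token symmetric quantization sketch.

    x: (tokens, channels) activation matrix. Each token row gets its own
    scale, limiting how one token's outliers affect the others.
    Illustrative only -- not the paper's exact k-scaled scheme.
    """
    qmax = 2 ** (n_bits - 1) - 1                       # 127 for int8
    scale = np.abs(x).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)           # avoid div-by-zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.random.randn(4, 16).astype(np.float32)
q, s = token_wise_quantize(x)
err = np.abs(dequantize(q, s) - x).max()               # bounded by s.max() / 2
```

Because each row's scale is `row_max / 127`, the round-trip error per element is at most half a quantization step, which is what makes per-token scaling attractive for activations with token-dependent ranges.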
Event(s)
35th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2025
Subjects
Mamba
PTQ
Vision Mamba
Publisher
IEEE Computer Society
Type
conference paper
