Quantum-Train: rethinking hybrid quantum-classical machine learning in the model compression perspective
Journal
Quantum Machine Intelligence
Journal Volume
7
Journal Issue
2
Start Page
80
ISSN
2524-4906 (print)
2524-4914 (electronic)
Date Issued
2025-08-05
Author(s)
Kuo, En-Jui
Lin, Chu-Hsuan Abraham
Young, Jason Gemsun
Chang, Yeong-Jar
Hsieh, Min-Hsiu
Abstract
We propose Quantum-Train (QT), a hybrid quantum-classical framework for training neural networks that reduces trainable parameter complexity while maintaining competitive performance. QT addresses three critical challenges in quantum machine learning (QML): (1) the cost of encoding data into quantum circuits, (2) the large parameter footprint of classical models, and (3) the quantum resource demands during inference. The method leverages a parameterized quantum neural network (QNN) to generate classical model weights via a learnable mapping model, achieving compression from M parameters to O(polylog(M)) under polynomial-depth QNNs. We provide a theoretical approximation bound quantifying QT's expressivity and empirically validate the framework on classification tasks, showing that QT achieves competitive accuracy with significantly fewer training parameters. Beyond MLPs, we introduce tensor-network-based mappings for enhanced parameter efficiency and extend QT to generate LoRA modules (QT-LoRA), demonstrating improved performance over standard LoRA under low-resource constraints. Furthermore, QT models retain predictive accuracy under hardware noise and finite-shot settings, and gradient analyses show robustness against barren plateau effects. These findings illustrate QT's potential as a flexible, scalable, and hardware-friendly paradigm for integrating quantum computation into modern machine learning workflows.
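
The following is a minimal, hedged sketch of the weight-generation idea summarized in the abstract, simulated with plain NumPy rather than quantum hardware. The ansatz layout, the affine mapping model, and all variable names are illustrative assumptions, not the authors' implementation: an n-qubit circuit with O(n * layers) rotation angles yields 2**n measurement probabilities, and a small mapping model turns each probability (plus its basis-state bitstring) into one classical weight, so M <= 2**n weights are driven by far fewer trainable parameters.

# Illustrative sketch only; everything below is an assumption for exposition.
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    # Single-qubit Y-rotation gate.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single_qubit(state, gate, qubit, n):
    # Apply a 2x2 gate to one qubit of an n-qubit statevector.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubit, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cnot(state, control, target, n):
    # Flip the target qubit on the control = 1 subspace.
    psi = state.reshape([2] * n).copy()
    sl = [slice(None)] * n
    sl[control] = 1
    axis = target if target < control else target - 1
    psi[tuple(sl)] = np.flip(psi[tuple(sl)], axis=axis).copy()
    return psi.reshape(-1)

def qnn_probs(angles, n):
    # Hardware-efficient ansatz: alternating RY layers and a CNOT chain.
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for layer in angles:                      # angles has shape (layers, n)
        for q in range(n):
            state = apply_single_qubit(state, ry(layer[q]), q, n)
        for q in range(n - 1):
            state = apply_cnot(state, q, q + 1, n)
    return np.abs(state) ** 2                 # 2**n measurement probabilities

def generate_weights(angles, map_w, map_b, M, n):
    # Map the first M basis-state probabilities (plus their bitstrings)
    # to M classical model weights through a tiny affine mapping model.
    probs = qnn_probs(angles, n)
    feats = np.array([list(map(int, np.binary_repr(i, n))) + [probs[i]]
                      for i in range(M)], dtype=float)     # shape (M, n + 1)
    return feats @ map_w + map_b                           # shape (M,)

n, layers, M = 6, 2, 50                        # 2**6 = 64 >= M = 50 target weights
angles = rng.normal(size=(layers, n))          # 12 circuit parameters
map_w, map_b = rng.normal(size=n + 1), 0.0     # 7 + 1 mapping parameters
weights = generate_weights(angles, map_w, map_b, M, n)
print(weights.shape)                           # (50,) weights from 20 trainable parameters

In this toy setting, 50 classical weights are produced from 20 trainable parameters (12 circuit angles plus an 8-parameter mapping); the abstract's tensor-network mappings and QT-LoRA variants replace the affine mapping used here with richer models.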
Subjects
Efficient learning
Model compression
Quantum computing
Quantum machine learning
Quantum neural networks
Publisher
Springer Science and Business Media LLC
Type
journal article
