2D sparse dictionary learning via tensor decomposition
Journal
2014 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2014
Pages
492-496
Date Issued
2014
Author(s)
Abstract
Existing dictionary learning methods mostly focus on 1D signals, which incurs heavy memory and computation costs when the number of training samples is large. Recently, the 2D dictionary learning paradigm has been shown to substantially reduce memory usage, especially for large-scale problems. Following this line, we propose novel 2D dictionary learning algorithms based on tensors in this paper. Our learning problem is efficiently solved by CANDECOMP/PARAFAC (CP) decomposition. In addition, our algorithms enforce a sparsity constraint, which ensures that the sparse representation under the learned dictionary is equivalent to the ground truth. Experimental results confirm the effectiveness of our methods. © 2014 IEEE.
Subjects
CANDECOMP/PARAFAC (CP) decomposition; Dictionary learning; Sparse representation; Tensor
Other Subjects
Information science; Tensors; CANDECOMP/PARAFAC; Dictionary learning; Dictionary learning algorithms; Large-scale problem; Learned dictionaries; Sparse representation; Sparsity constraints; Tensor decomposition; Learning algorithms
Type
conference paper
