A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems
Date Issued
2014
Author(s)
Juan, Yu-Chin
Abstract
Matrix factorization is known to be an effective method for recommender systems that are given only the ratings from users to items.
Currently, the stochastic gradient (SG) method is one of the most popular algorithms for matrix factorization.
However, as an inherently sequential approach, SG is difficult to parallelize for web-scale problems.
In this thesis, we develop a fast parallel SG method, FPSG, for shared memory systems.
By dramatically reducing the cache-miss rate and carefully addressing the load balance of threads, FPSG is more efficient than state-of-the-art parallel algorithms for matrix factorization.
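To make the sequential SG baseline the abstract refers to concrete, here is a minimal matrix-factorization sketch in Python. It is an illustration only, not the FPSG method itself: the function name, hyperparameters, and toy data are assumptions, and the parallelization, cache-aware blocking, and load balancing that distinguish FPSG are deliberately absent.

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.05, epochs=200, seed=0):
    """Plain sequential SGD for matrix factorization.

    ratings: list of (user, item, rating) triples.
    Returns user-factor matrix P (n_users x k) and item-factor matrix Q (n_items x k),
    so that P[u] @ Q[i] approximates the rating of user u for item i.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            pu, qi = P[u].copy(), Q[i].copy()   # snapshot before updating either side
            err = r - pu @ qi                   # prediction error on this rating
            # Gradient step on the regularized squared error
            P[u] += lr * (err * qi - reg * pu)
            Q[i] += lr * (err * pu - reg * qi)
    return P, Q
```

Each update touches one row of P and one row of Q, which is why naive parallelization causes conflicts when two threads process ratings sharing a user or item, and why random access over large factor matrices incurs the cache misses the thesis targets.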
Subjects
Recommender systems
Matrix factorization
Stochastic gradient descent
Parallel computing
Shared-memory algorithms
Type
thesis
File(s)
Name
ntu-103-R01922136-1.pdf
Size
23.32 KB
Format
Adobe PDF
Checksum
(MD5):c4a8686546102c8aed59ac2de99ff07f