Exploiting Reinforcement-Learning for Influence Maximization without Human-Annotated Data
Date Issued
2015
Author(s)
Huang, Yen-Hua
Abstract
Strategies for choosing nodes in a social network to maximize total influence have been studied for decades. The greedy algorithm is a competitive strategy, proven to achieve at least 63% (i.e., 1 − 1/e) of the optimal spread. Here we propose a learning-based framework for influence maximization that aims to outperform the greedy algorithm in both coverage and efficiency. The proposed reinforcement-learning framework, combined with a classification model, not only alleviates the need for labelled training data but also allows the influence-maximization strategy to be developed gradually, eventually outperforming a basic greedy approach.
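The greedy baseline the abstract refers to can be illustrated with a minimal sketch. This is not the thesis's implementation; it assumes the standard independent-cascade diffusion model with a uniform activation probability, estimates expected spread by Monte Carlo simulation, and greedily adds the seed with the largest marginal gain. The graph representation, function names, and parameters are all illustrative choices.

```python
import random


def simulate_spread(graph, seeds, prob, rng):
    """One independent-cascade run: each newly activated node gets one
    chance to activate each inactive neighbor with probability `prob`.
    Returns the number of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < prob:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)


def expected_spread(graph, seeds, prob, rng, trials):
    """Monte Carlo estimate of the expected spread of a seed set."""
    return sum(simulate_spread(graph, seeds, prob, rng)
               for _ in range(trials)) / trials


def greedy_seeds(graph, k, prob=0.1, trials=200, seed=0):
    """Greedy hill-climbing: repeatedly add the node with the largest
    estimated marginal gain. Because the spread function is monotone
    and submodular, this guarantees at least 1 - 1/e (~63%) of the
    optimal spread."""
    rng = random.Random(seed)
    chosen = []
    candidates = sorted(graph)          # sorted for deterministic order
    for _ in range(k):
        base = expected_spread(graph, chosen, prob, rng, trials)
        best, best_gain = None, float("-inf")
        for v in candidates:
            gain = expected_spread(graph, chosen + [v], prob, rng, trials) - base
            if gain > best_gain:
                best, best_gain = v, gain
        chosen.append(best)
        candidates.remove(best)
    return chosen
```

On a small star graph, for example, the greedy procedure picks the hub first, since its marginal spread dominates that of any leaf. The per-iteration cost is one spread estimate per candidate node, which is exactly the inefficiency the proposed learning-based framework targets.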
Subjects
Reinforcement-Learning
Social network
Influence Maximization
Machine learning
Greedy Algorithm
Type
thesis
File(s)
Name
ntu-104-R02944055-1.pdf
Size
23.32 KB
Format
Adobe PDF
Checksum
(MD5):fd6cc1c8cb4f3cd4582a5e29d408cd19
