Linear Upper Confidence Bound Algorithm for Contextual Bandit Problem with Piled Rewards.
Journal
Advances in Knowledge Discovery and Data Mining - 20th Pacific-Asia Conference, PAKDD 2016, Auckland, New Zealand, April 19-22, 2016, Proceedings, Part II
Pages
143-155
Date Issued
2016
Author(s)
Huang, Kuan-Hao
Abstract
We study the contextual bandit problem with linear payoff function. In the traditional contextual bandit problem, the algorithm iteratively chooses an action based on the observed context, and immediately receives a reward for the chosen action. Motivated by a practical need in many applications, we study the design of algorithms under the piled-reward setting, where the rewards are received as a pile instead of immediately. We present how the Linear Upper Confidence Bound (LinUCB) algorithm for the traditional problem can be naïvely applied under the piled-reward setting, and prove its regret bound. Then, we extend LinUCB to a novel algorithm, called Linear Upper Confidence Bound with Pseudo Reward (LinUCBPR), which digests the observed contexts to choose actions more strategically before the piled rewards are received. We prove that LinUCBPR can match LinUCB in the regret bound under the piled-reward setting. Experiments on artificial and real-world datasets demonstrate the strong performance of LinUCBPR in practice. © Springer International Publishing Switzerland 2016.
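For readers unfamiliar with the baseline, the following is a minimal sketch of standard LinUCB adapted to batched (piled) reward updates. It is illustrative only: the class name, the exploration parameter `alpha`, and the per-pile update loop are assumptions, and the paper's LinUCBPR pseudo-reward mechanism is not reproduced here.

```python
import numpy as np

class LinUCBSketch:
    """Illustrative LinUCB: ridge regression plus a confidence bonus.

    Under the piled-reward setting, (context, reward) pairs arrive as a
    batch ("pile"), so the model parameters are refreshed only once per
    pile rather than after every action.
    """

    def __init__(self, d, alpha=1.0):
        self.alpha = alpha          # exploration width (assumed value)
        self.A = np.eye(d)          # A = I + sum of x x^T over observed contexts
        self.b = np.zeros(d)        # b = sum of r * x over observed rewards

    def score(self, x):
        # UCB score: predicted reward + alpha * confidence width.
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, pile):
        # Piled-reward update: fold in the whole batch at once.
        for x, r in pile:
            self.A += np.outer(x, x)
            self.b += r * x
```

In use, the learner would call `score` on each action's context vector, pick the maximizer, and call `update` only when a pile of delayed rewards arrives; LinUCBPR additionally updates with pseudo rewards before the true pile is received.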
Subjects
Bandit; Piled rewards; Upper confidence bound
Other Subjects
Iterative methods; Piles; Probability; Bandit; Contextual bandits; Design of algorithms; Novel algorithm; Payoff function; Piled rewards; Real-world datasets; Upper confidence bound; Data mining
Type
conference paper
