Title: Sample Complexity of Kernel-Based Q-Learning
Authors: Yeh, Sing Yuan; Chang, Fu Chieh; Yueh, Chang Wei; Wu, Pei-Yuan; Bernacchia, Alberto; Vakili, Sattar
Dates: 2023-09-01; 2023-01-01
Type: conference paper
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/634889
Scopus ID: 2-s2.0-85165172444
Scopus API: https://api.elsevier.com/content/abstract/scopus_id/85165172444
Abstract: Modern reinforcement learning (RL) often faces an enormous state-action space. Existing analytical results typically cover settings with a small number of state-actions, or simple models such as linearly modeled Q-functions. To derive statistically efficient RL policies that handle large state-action spaces with more general Q-functions, some recent works have considered nonlinear function approximation using kernel ridge regression. In this work, we derive sample complexities for kernel-based Q-learning when a generative model exists. We propose a nonparametric Q-learning algorithm that finds an ε-optimal policy in an arbitrarily large-scale discounted MDP. The sample complexity of the proposed algorithm is order-optimal with respect to ε and the complexity of the kernel (in terms of its information gain). To the best of our knowledge, this is the first result showing a finite sample complexity under such a general model.
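To illustrate the general idea behind kernel ridge regression for Q-function approximation under a generative model, the sketch below runs generic approximate Q-iteration on a toy one-dimensional MDP. It is not the paper's algorithm; the toy dynamics, the rbf kernel, and all hyperparameters are assumptions made for illustration only.

```python
# Illustrative sketch only: generic kernel ridge regression Q-iteration under a
# generative model. NOT the paper's algorithm; the toy MDP, kernel choice, and
# hyperparameters are assumptions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
GAMMA = 0.9          # discount factor
N_ACTIONS = 2        # toy action space
N_SAMPLES = 200      # generative-model queries per iteration
N_ITERS = 30         # Q-iteration rounds

def generative_model(s, a):
    """Toy generative model: returns (next_state, reward) for a state s in [0, 1]."""
    drift = 0.1 if a == 1 else -0.1
    s_next = np.clip(s + drift + 0.05 * rng.standard_normal(), 0.0, 1.0)
    reward = 1.0 if s_next > 0.8 else 0.0
    return s_next, reward

def features(s, a):
    """Encode (state, action) as a vector for the kernel regressor."""
    return [s, float(a)]

# q_model maps (s, a) features to an estimated value; start with the zero function.
q_model = None

def q_value(s, a):
    if q_model is None:
        return 0.0
    return float(q_model.predict([features(s, a)])[0])

for it in range(N_ITERS):
    X, y = [], []
    for _ in range(N_SAMPLES):
        s = rng.uniform(0.0, 1.0)          # generative model: query arbitrary (s, a)
        a = rng.integers(N_ACTIONS)
        s_next, r = generative_model(s, a)
        # Bellman target using the current Q-function estimate.
        target = r + GAMMA * max(q_value(s_next, b) for b in range(N_ACTIONS))
        X.append(features(s, a))
        y.append(target)
    # Kernel ridge regression fit of the Bellman targets (rbf kernel assumed).
    new_model = KernelRidge(alpha=1e-2, kernel="rbf", gamma=10.0)
    new_model.fit(np.array(X), np.array(y))
    q_model = new_model

# Greedy policy read off the fitted Q-function.
for s in [0.1, 0.5, 0.9]:
    a_star = max(range(N_ACTIONS), key=lambda a: q_value(s, a))
    print(f"state {s:.1f}: greedy action {a_star}")
```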