Learning key steps to attack deep reinforcement learning agents
Journal
Machine Learning
Date Issued
2023-01-01
Author(s)
Abstract
Deep reinforcement learning (RL) agents are vulnerable to adversarial attacks. In particular, recent studies have shown that attacking a few key steps can effectively decrease the agent's cumulative reward. However, all existing attack methods define those key steps with human-designed heuristics, and it is not clear how more effective key steps can be identified. This paper introduces a novel reinforcement learning framework that learns key steps through interacting with the agent. The proposed framework requires neither human heuristics nor domain knowledge, and can be flexibly coupled with any white-box or black-box adversarial attack scenario. Experiments on benchmark Atari games across different scenarios demonstrate that the proposed framework is superior to existing methods at identifying effective key steps. The results highlight the weakness of RL agents even under budgeted attacks.
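To make the idea concrete, the sketch below shows one plausible way to frame "learning when to attack" as a reinforcement learning problem: an attacker policy picks a binary attack/no-attack action at each step under a fixed budget, and is rewarded by the victim's loss of return. This is only an illustrative assumption based on the abstract, not the paper's actual formulation; the names VictimAgent-style `victim`, `perturb`, and the Gym-style environment interface are all hypothetical.

```python
# Illustrative sketch, not the paper's method: wrap the victim's environment
# so a standard RL algorithm can learn WHICH steps are worth attacking,
# replacing hand-designed heuristics for choosing key steps.

class AttackerEnvWrapper:
    """The attacker's action is a binary attack/no-attack decision, limited
    to `budget` attacked steps per episode; its reward is the negative of
    the victim's reward, so the attacker learns to minimize the victim's
    cumulative return."""

    def __init__(self, env, victim, perturb, budget):
        self.env = env          # underlying Gym-style environment (assumed API)
        self.victim = victim    # fixed, pre-trained victim policy (hypothetical)
        self.perturb = perturb  # observation perturbation, e.g. an FGSM-style attack
        self.budget = budget    # maximum number of attacked steps per episode

    def reset(self):
        self.remaining = self.budget
        self.obs = self.env.reset()
        return self.obs

    def step(self, attack: bool):
        obs = self.obs
        if attack and self.remaining > 0:
            # Craft an adversarial observation only at the chosen key step.
            obs = self.perturb(obs, self.victim)
            self.remaining -= 1
        # The victim acts on the (possibly perturbed) observation.
        victim_action = self.victim.act(obs)
        self.obs, victim_reward, done, info = self.env.step(victim_action)
        # Negating the victim's reward turns return minimization into a
        # standard RL maximization problem for the attacker.
        return self.obs, -victim_reward, done, info
```

Under this framing, any off-the-shelf RL algorithm (e.g., DQN or PPO) trained on the wrapper learns the key steps directly from interaction, which matches the abstract's claim that no human heuristics are needed and that the framework is agnostic to whether the inner perturbation is white-box or black-box.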
Subjects
Adversarial attacks | Deep learning | Reinforcement learning | Robustness
Type
journal article