Your Policy Regularizer is Secretly an Adversary
Abstract

Policy regularisation methods such as maximum entropy regularisation are widely used in reinforcement learning to improve the robustness of a learned policy. In this paper, we show how this robustness arises from hedging against worst-case perturbations of the reward function, which are chosen from a limited set by an imagined adversary. Using convex duality, we characterise this robust set of adversarial reward perturbations under KL and alpha-divergence regularisation, which includes Shannon and Tsallis entropy regularisation as special cases. Importantly, generalisation guarantees can be given within this robust set. We provide detailed discussion of the worst-case reward perturbations, and present intuitive empirical examples to illustrate this robustness and its relationship with generalisation. Finally, we discuss how our analysis complements and extends previous results on adversarial reward robustness and path consistency optimality conditions.
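To make the adversarial interpretation concrete, below is a minimal single-step (bandit) sketch of the KL-regularised case. The notation here (beta, pi_0, delta_r) is our own illustration of the standard log-sum-exp duality, not the paper's full derivation: the regularised optimal policy is a prior-reweighted softmax, and the worst-case reward perturbation makes every supported action indifferent at the soft value.

```python
import numpy as np

# Minimal single-step (bandit) sketch of KL-regularised robustness.
# beta, pi_0, delta_r are our notation; this illustrates the standard
# identity rather than the paper's exact construction.
beta = 2.0                              # inverse regularisation strength
r = np.array([1.0, 0.5, -0.2])          # rewards for three actions
pi_0 = np.ones_like(r) / len(r)         # uniform reference policy

# KL-regularised optimal policy: softmax of beta*r reweighted by the prior.
logits = np.log(pi_0) + beta * r
pi_star = np.exp(logits - logits.max())
pi_star /= pi_star.sum()

# Worst-case reward perturbation chosen by the imagined adversary (KL case):
# delta_r(a) = (1/beta) * log(pi_star(a) / pi_0(a)).
delta_r = np.log(pi_star / pi_0) / beta

# Under the perturbed reward r - delta_r, every action in the support yields
# the same value: the soft (log-partition) value of the regularised problem.
soft_value = np.log(np.sum(pi_0 * np.exp(beta * r))) / beta
print(r - delta_r)   # all entries equal soft_value
print(soft_value)
```

The indifference of the perturbed reward across actions is the same structure that appears in path consistency conditions, which the abstract notes the paper connects to this adversarial view.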

Authors' notes