Sample-based distributional policy evaluation actor-critic methods
Abstract

Actor-critic algorithms that make use of distributional policy evaluation have frequently been shown to outperform their non-distributional counterparts on many challenging control tasks. Examples of this behavior include the D4PG and DMPO algorithms as compared to DDPG and MPO, respectively [Barth-Maron et al., 2018; Hoffman et al., 2020]. However, both agents rely on the C51 critic for value estimation. One major drawback of the C51 approach is that it requires prior knowledge of the minimum and maximum values a policy can attain, as well as a choice of the number of bins, which fixes the resolution of the distributional estimate. While the DeepMind control suite of tasks uses standardized rewards and episode lengths, allowing the entire suite to be solved with a single setting of these hyperparameters, this is not the case in general. In this paper, we introduce an alternative, sample-based loss function that removes this requirement. We empirically evaluate its performance on a broad range of continuous control tasks and demonstrate that not only does our approach eliminate the need for these distributional hyperparameters, it also achieves state-of-the-art performance on a variety of very challenging tasks with which other algorithms struggle (e.g. the humanoid, dog, quadruped, and manipulator domains).
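
The range and resolution constraint mentioned above comes from the way a C51 critic [Bellemare et al., 2017] represents the return distribution: as a categorical distribution over a fixed, evenly spaced support between v_min and v_max, onto which target returns are projected (and outside of which they are clipped). The sketch below is a minimal illustration of that mechanism, assuming a NumPy-style implementation; the function names (make_c51_support, project_target) are illustrative and are not code from the paper.

```python
import numpy as np

def make_c51_support(v_min: float, v_max: float, num_atoms: int) -> np.ndarray:
    """Fixed support of a C51 categorical critic: num_atoms evenly spaced return
    values in [v_min, v_max]. Resolution is (v_max - v_min) / (num_atoms - 1)."""
    return np.linspace(v_min, v_max, num_atoms)

def project_target(returns: np.ndarray, probs: np.ndarray,
                   support: np.ndarray) -> np.ndarray:
    """Project target returns (with probabilities `probs`) onto the fixed support,
    in the style of the C51 categorical projection. Returns outside [v_min, v_max]
    are clipped, which is why the bounds must cover the returns the policy can
    actually attain."""
    v_min, v_max = support[0], support[-1]
    delta = support[1] - support[0]
    clipped = np.clip(returns, v_min, v_max)
    # Fractional index of each return on the support grid.
    b = (clipped - v_min) / delta
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    projected = np.zeros_like(support)
    # Split each return's probability mass between its two nearest atoms
    # (when b lands exactly on an atom, all mass goes to that atom).
    np.add.at(projected, lower, probs * (upper - b + (lower == upper)))
    np.add.at(projected, upper, probs * (b - lower))
    return projected

# Example: a support tuned for the control suite (rewards in [0, 1],
# 1000-step episodes) cannot represent returns outside [0, 1000];
# the second sampled return below is silently clipped to 1000.
support = make_c51_support(0.0, 1000.0, 51)
target = project_target(np.array([250.0, 1500.0]), np.array([0.5, 0.5]), support)
```

A sample-based loss of the kind described in the abstract avoids committing to such a fixed grid, so no v_min, v_max, or bin count needs to be specified per task.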

Authors' notes