Benchmarks for Deep Off-Policy Evaluation

Off-policy evaluation (OPE) for deep neural network policies holds the promise of being able to leverage large, offline datasets for both obtaining and selecting complex policies for decision making. The ability to perform evaluation offline is particularly important in many real-world domains, such as healthcare, recommender systems, or robotics, where online data collection is an expensive and potentially dangerous process. The ability to accurately evaluate and select high-performing policies without requiring online interaction could yield significant benefits in safety, time, and cost for these applications. While many OPE methods have been proposed in recent years, comparing results between works is difficult because there is currently no comprehensive and unified benchmark. Moreover, it is difficult to measure how far algorithms have progressed, due to the lack of challenging, realistic evaluation tasks. In order to address this gap, we introduce DOPE -- a benchmark for deep off-policy evaluation which includes tasks on a range of realistic, high-dimensional control problems. The goal of DOPE is to provide a standardized measure of progress, motivated by a set of principles designed to challenge and test the limits of existing OPE methods. We perform a comprehensive evaluation of state-of-the-art algorithms, and we will provide open-source access to all data and code to foster future research in this area.
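To make the OPE problem concrete, the following is a minimal sketch of one classic estimator, per-trajectory importance sampling: it re-weights the return of each logged trajectory by the likelihood ratio between the target and behavior policies. This is purely illustrative and not part of the DOPE benchmark itself; the function names and data layout (`trajectories` as lists of `(state, action, reward)` tuples, `behavior_prob`, `target_prob`) are hypothetical.

```python
def importance_sampling_ope(trajectories, behavior_prob, target_prob, gamma=0.99):
    """Estimate the target policy's expected return from logged data only.

    trajectories:  list of trajectories, each a list of (state, action, reward)
    behavior_prob: behavior_prob(s, a) -> probability the logging policy chose a in s
    target_prob:   target_prob(s, a) -> probability the evaluated policy chooses a in s
    """
    estimates = []
    for traj in trajectories:
        weight = 1.0     # cumulative likelihood ratio for this trajectory
        ret = 0.0        # discounted return of this trajectory
        discount = 1.0
        for state, action, reward in traj:
            weight *= target_prob(state, action) / behavior_prob(state, action)
            ret += discount * reward
            discount *= gamma
        estimates.append(weight * ret)
    # Average of re-weighted returns; no online interaction is required.
    return sum(estimates) / len(estimates)
```

Estimators of this family are unbiased but can suffer high variance when the target and behavior policies diverge, which is one reason benchmarks with long-horizon, high-dimensional tasks are a meaningful stress test for OPE methods.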

Authors' notes