A Provably Efficient Sample Collection Strategy for Reinforcement Learning

One of the challenges in \textit{online} reinforcement learning (RL) is that the agent needs to trade off the exploration of the environment and the exploitation of the samples to optimize its behavior. Whether we optimize for regret, sample complexity, state-space coverage or model estimation, we need to strike a different exploration-exploitation trade-off. In this paper, we propose an exploration-exploitation approach that is decoupled into two parts: \textbf{1)} An ``objective-specific'' algorithm that (adaptively) prescribes \textit{how many} samples to collect \textit{at which} states, as if it had access to a generative model (i.e., a simulator of the environment); \textbf{2)} An ``objective-agnostic'' sample collection exploration strategy responsible for generating the prescribed samples as fast as possible. Building on recent methods for exploration in stochastic shortest path, we first provide an algorithm that, given any $b(s,a)$ sample requirements for each state-action pair, requires $\wt{O}\left( B D + D^{3/2} S^2 A \right)$ time steps to collect the $B=\sum_{s,a} b(s,a)$ desired samples, in any unknown communicating MDP with $S$ states, $A$ actions and diameter~$D$. Then we show how this general-purpose exploration algorithm can be paired with ``objective-specific'' strategies that prescribe the sample requirements to tackle a variety of settings --- e.g., model estimation, sparse reward discovery, goal-free cost-free exploration in communicating MDPs --- for which we obtain improved or novel sample complexity guarantees.
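The decoupling above can be illustrated with a minimal sketch: an objective-specific module prescribes per-pair requirements $b(s,a)$, and an objective-agnostic collector keeps stepping the environment until all requirements are met. The function names (`collect_samples`, `step_fn`) and the uniform-random stand-in for the exploration policy are hypothetical illustrations, not the paper's actual algorithm.

```python
import random

def collect_samples(b, step_fn, max_steps=100_000):
    """Objective-agnostic collector: step the environment until every
    (s, a) pair has received its prescribed b(s, a) samples.
    `step_fn()` returns the (s, a) pair visited at this time step
    (a stand-in for the exploration strategy's choice)."""
    remaining = dict(b)  # outstanding sample requirements per (s, a)
    steps = 0
    while any(v > 0 for v in remaining.values()) and steps < max_steps:
        s, a = step_fn()
        if remaining.get((s, a), 0) > 0:
            remaining[(s, a)] -= 1
        steps += 1
    return steps, remaining

# Toy requirements over a 2-state, 2-action MDP; the "policy" here is
# uniform random, purely for illustration.
random.seed(0)
b = {(s, a): 3 for s in range(2) for a in range(2)}
steps, remaining = collect_samples(
    b, lambda: (random.randrange(2), random.randrange(2))
)
```

Note that the total number of time steps is at least $B = \sum_{s,a} b(s,a)$; the paper's contribution is bounding the overhead beyond $B$ in terms of the diameter $D$.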
