Reinforcement Learning (RL) provides an elegant formalization for the problem of intelligence.
In combination with advances in deep learning and increases in computation, this formalization has produced powerful solutions to longstanding artificial intelligence challenges, such as playing Go at a championship level. We believe it also offers an avenue for tackling some of our greatest challenges: from drug design to industrial and space robotics, to improving energy efficiency in a variety of applications.
However, in this pursuit, the scale and complexity of RL programs have grown dramatically over time. This has made it increasingly difficult for researchers to rapidly prototype ideas, and has caused serious reproducibility issues. To address this, we are launching Acme, a tool that increases reproducibility in RL and makes it simpler for researchers to develop novel and creative algorithms.
Acme is a framework for building readable, efficient, research-oriented RL algorithms. At its core, Acme is designed to enable simple descriptions of RL agents that can be run at various scales of execution, including distributed agents. By releasing Acme, our aim is to make the results of RL algorithms developed in academia and industrial labs easier for the machine learning community at large to reproduce and extend. See the GitHub repository here.
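To give a flavor of what a "simple description" looks like, below is a minimal sketch of a single-process training run assembled from components in the Acme repository (the GymWrapper and SinglePrecisionWrapper utilities, the TensorFlow DQN agent, and EnvironmentLoop); exact module paths and constructor arguments may differ between releases, so treat it as illustrative rather than canonical:

```python
# A minimal sketch, not an official example: it assumes the GymWrapper,
# SinglePrecisionWrapper, DQN agent, and EnvironmentLoop shipped in the
# Acme repository, whose exact paths and signatures may vary by version.
import acme
from acme import specs
from acme import wrappers
from acme.agents.tf import dqn

import gym
import sonnet as snt

# Adapt a standard Gym task to the dm_env interface that Acme expects.
environment = wrappers.GymWrapper(gym.make('CartPole-v1'))
environment = wrappers.SinglePrecisionWrapper(environment)
environment_spec = specs.make_environment_spec(environment)

# A small feed-forward Q-network built with Sonnet.
network = snt.Sequential([
    snt.Flatten(),
    snt.nets.MLP([64, 64, environment_spec.actions.num_values]),
])

# The agent bundles acting and learning behind a single interface; the same
# description can be rearranged to run as a distributed agent.
agent = dqn.DQN(environment_spec=environment_spec, network=network)

# The environment loop drives the standard agent-environment interaction.
loop = acme.EnvironmentLoop(environment, agent)
loop.run(num_episodes=100)
```

The point of the sketch is the separation of concerns: the environment, the network, the agent, and the loop are independent pieces, which is what allows the same agent description to scale from a single process to a distributed run.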