Here we introduce a collection of 17 datasets that test the ability of models to learn dynamics from images. The release includes datasets of increasing complexity, spanning real physical systems, learning dynamics in cyclic games, and camera movements in a 3D room. The goal of our dataset suite is to investigate how well a recently developed class of models incorporating priors from classical mechanics can learn the underlying dynamics of a system from sequences of images alone. Our tasks include classical toy physical systems that obey the energy conservation principle, as well as variations that test how well the models handle nuisance changes in attributes like colour or position that do not affect the underlying dynamics. We also include versions of these datasets with modified dynamics where friction is added, and datasets of molecular dynamics, which are more complex due to the presence of many particles. Alongside the physical systems, our suite includes datasets of multi-agent learning dynamics, which exhibit behaviour similar to systems from classical mechanics and which are related to the learning dynamics in GANs. Finally, we include datasets of camera movements in a 3D room, which test the ability of models to deal with significantly more complicated visuals than the other datasets. All our datasets contain long trajectories (256-1000 steps) and include high-dimensional pixel observations, ground-truth state, and any auxiliary variables that were used to generate the trajectories (e.g. the values of the Hamiltonian constants). We hope that the tasks in our suite span a good range of visual and dynamical complexity, so that the community finds them useful for tracking progress in the field.
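To make the trajectory layout concrete, here is a minimal sketch of how such a record might be handled. The field names, shapes, and auxiliary constants below are illustrative assumptions, not the actual schema of the released datasets; consult the dataset generation code for the real format.

```python
import numpy as np

# Hypothetical layout of a single trajectory record (field names and
# shapes are assumptions for illustration only).
T, H, W = 256, 32, 32  # trajectory length, image height, image width
trajectory = {
    "image": np.zeros((T, H, W, 3), dtype=np.uint8),   # pixel observations
    "state": np.zeros((T, 4), dtype=np.float32),       # ground-truth (q, p)
    "aux": {"mass": 1.0, "spring_k": 2.0},             # generation constants
}

def training_windows(traj, window=60):
    """Slice one long trajectory into overlapping image windows,
    a typical preprocessing step before fitting a dynamics model."""
    images = traj["image"]
    return np.stack(
        [images[t : t + window] for t in range(len(images) - window + 1)]
    )

windows = training_windows(trajectory, window=60)
print(windows.shape)  # (197, 60, 32, 32, 3)
```

Because the trajectories are long, a single record yields many such training windows, which is one reason the suite favours long rollouts over short clips.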
Together with the pre-generated datasets, we are also open sourcing the code used to generate them, as well as code for computing SyMetric, our newly proposed metric for measuring the quality of the learnt dynamics in models with Hamiltonian priors that learn from pixels. We also release code implementing the main classes of models that include strong priors from classical mechanics when learning from pixels (the Hamiltonian and Lagrangian generative networks), models with weaker physical priors (Neural ODE and its discretised alternative, the recurrent generative network), and models with no priors (RNN, LSTM and GRU), as described and benchmarked in our recent paper.
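As a rough illustration of what a Hamiltonian prior means, the sketch below rolls a state forward along Hamilton's equations, dq/dt = dH/dp and dp/dt = -dH/dq, with a symplectic integrator. Here H is a fixed harmonic oscillator for simplicity; in models such as the Hamiltonian generative network, H is instead a learnt neural network acting on a latent state inferred from pixels.

```python
import numpy as np

def hamiltonian(q, p, k=1.0, m=1.0):
    """Total energy of a harmonic oscillator (illustrative stand-in
    for the learnt scalar H(q, p) in a Hamiltonian-prior model)."""
    return 0.5 * p**2 / m + 0.5 * k * q**2

def symplectic_step(q, p, dt=0.05, k=1.0, m=1.0):
    """One symplectic Euler update along Hamilton's equations,
    with the gradients of H taken analytically."""
    p = p - dt * k * q   # dp/dt = -dH/dq
    q = q + dt * p / m   # dq/dt =  dH/dp
    return q, p

q, p = 1.0, 0.0
e0 = hamiltonian(q, p)
for _ in range(1000):
    q, p = symplectic_step(q, p)

# A symplectic integrator keeps the energy close to its initial value
# over long rollouts, which is why these priors suit the
# energy-conserving systems in the suite.
print(abs(hamiltonian(q, p) - e0) < 0.05)
```

The energy stays bounded near its initial value over the whole rollout, rather than drifting as it would under a plain Euler integrator; this long-horizon stability is exactly what the Hamiltonian priors are meant to buy when learning dynamics from pixels.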
For more information, please check out the GitHub repositories for the benchmarks and dataset generation. Pre-generated datasets are available to download here.