Learned coarse models for efficient turbulence simulation

Recent machine learning advances in simulation invite the question: to what extent can learned simulators supplement or replace traditional simulators for scientific applications? Here we address this question for astrophysical turbulence using four chaotic and turbulent domains: three from astrophysics, involving decaying turbulence and radiative cooling mixing layers, plus the classic Kuramoto-Sivashinsky equation. Simulating these complex, chaotic systems with traditional numerical solvers is computationally costly because fine grids are needed to accurately resolve the dynamics. We implement a variety of convolutional neural network-based simulators, including a novel Dilated ResNet model, and find that learned models can outperform traditional solvers run at comparable resolution across various scientifically relevant metrics, most notably preserving high-frequency information. We find that tuning training noise and temporal downsampling can improve rollout stability, and observe that while generalization beyond the training distribution remains a challenge for learned models, training noise, convolutional architecture, and added loss constraints can help. To our knowledge, our models are the first learned simulators evaluated for astrophysical applications, and the first to be trained on data from the \texttt{Athena++} engine. Broadly, we conclude that learned simulators are beginning to be competitive with traditional solvers run on coarser grids, and emphasize that careful design choices can offer robust generalization.
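To make the training-noise and temporal-downsampling recipe mentioned above concrete, here is a minimal sketch of how one-step training pairs for a learned simulator might be prepared. The function name, parameter names, and default values are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

def make_noisy_training_pairs(trajectory, noise_std=0.01, subsample=4, seed=0):
    """Build (input, target) pairs for one-step learned-simulator training.

    trajectory: array of shape (T, ...) of solver states at fine time steps.
    noise_std:  std of Gaussian noise added to inputs; the model learns to
                correct this corruption, which tends to improve rollout
                stability (hypothetical default, tuned per domain).
    subsample:  temporal downsampling factor, so the learned model predicts
                a larger effective time step than the ground-truth solver.
    """
    rng = np.random.default_rng(seed)
    coarse = trajectory[::subsample]   # temporal downsampling of the rollout
    inputs = coarse[:-1]               # state at step t
    targets = coarse[1:]               # state at step t + subsample
    # Corrupt only the inputs; targets stay clean.
    noisy_inputs = inputs + noise_std * rng.standard_normal(inputs.shape)
    return noisy_inputs, targets
```

In practice `noise_std` and `subsample` would be treated as hyperparameters and tuned jointly, since larger time steps amplify rollout error and may call for stronger noise.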

Authors' notes