Learning to combine control at the level of joint torques with longer-term, goal-directed behavior is a long-standing challenge for physically embodied artificial agents. Intelligent behavior in the physical world unfolds across multiple spatial and temporal scales: although movements are ultimately executed at the level of instantaneous muscle tensions or joint torques, they must be selected to serve goals that are defined on much longer timescales and often involve complex interactions with the environment and other agents. Recent research has demonstrated the potential of learning-based approaches applied to the respective problems of complex movement, long-term planning, and multi-agent coordination. However, their integration remains challenging and has traditionally required the design and optimization of independent sub-systems. In this work, we tackle the integration of motor control and long-horizon decision making in the context of physically simulated humanoid football, which requires agile, human-like motor control and multi-agent coordination. We optimize teams of agents to play simulated football via reinforcement learning, constraining the solution space to that of plausible human-like movements learned from motion capture data. The agents are trained to maximize several environment rewards and to imitate pre-trained football-specific skills when doing so leads to improved performance. The result is a team of coordinated humanoid football players that exhibit complex behavior at different scales, quantified by a range of analyses and statistics, including those used in real-world sports analytics. Our work constitutes a complete demonstration of learned integrated decision-making at multiple scales in a physically embodied multi-agent setting.
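The training setup summarized above can be illustrated schematically; the notation below is a minimal sketch of one common way to combine task rewards with imitation of pre-trained skills, not the exact formulation used in this work. Under this sketch, each agent maximizes a weighted sum of environment rewards $r_k$ while being regularized toward a pre-trained football-specific skill policy $\pi_{\mathrm{skill}}$:
\[
J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[ \sum_{t} \gamma^{t} \left( \sum_{k} w_{k}\, r_{k}(s_t, a_t) \;-\; \lambda\, \mathrm{KL}\!\left( \pi(\cdot \mid s_t) \,\middle\|\, \pi_{\mathrm{skill}}(\cdot \mid s_t) \right) \right) \right],
\]
where the reward weights $w_k$ and the imitation coefficient $\lambda$ are assumed hyperparameters; letting $\lambda$ shrink when imitation no longer helps corresponds to the condition that skills are imitated only if doing so improves performance.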