We study the problem of cooperative multi-agent reinforcement learning with a
single joint reward signal. This class of learning problems is difficult because the
combined action and observation spaces are often large. In both the fully centralized
and fully decentralized approaches, we find the problem of spurious rewards and a
phenomenon we call the “lazy agent” problem, which arises due to partial observability.
We address these problems by training individual agents with a novel value
decomposition network architecture, which learns to decompose the team value
function into agent-wise value functions.
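As a minimal sketch of such a decomposition in its additive form (the symbols $d$, $h^i$, $a^i$, and $\tilde{Q}_i$ are notation introduced here for illustration, with $h^i$ and $a^i$ denoting agent $i$'s local observation history and action):
\[
Q\big((h^1, \ldots, h^d),\,(a^1, \ldots, a^d)\big) \;\approx\; \sum_{i=1}^{d} \tilde{Q}_i\big(h^i, a^i\big).
\]
Since each term depends only on one agent's local history and action, each agent can select actions greedily with respect to its own $\tilde{Q}_i$, and the resulting joint action maximizes the summed team value.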
We perform an experimental evaluation
across a range of partially observable multi-agent domains and show that learning
such value decompositions leads to superior results over centralized and fully
decentralized baselines, in particular when combined with weight sharing, role
information, and information channels.