Many environments contain numerous available niches of variable value, each associated with a different local optimum in the space of behaviors (policy space). In such situations, it is often difficult to design a learning process capable of evading distraction by poor local optima long enough to stumble upon the best available niche. In this work we propose a generic reinforcement learning (RL) algorithm that performs better than baseline deep Q-learning algorithms in such environments with multiple variably valued niches. The algorithm we propose consists of two parts: an agent architecture and a learning rule. The agent architecture contains multiple sub-policies. The learning rule, inspired by the ecological principle of competitive exclusion, can be understood as adding an extra loss term whereby one policy's experience is also used to update all the other policies in a manner that decreases their value estimates for the visited states. Thus, when a sub-policy visits a particular state frequently, it discourages other sub-policies from learning to visit that state, provided they have alternatives. Further, we introduce an artificial-chemistry-inspired platform for defining tasks based on reaction graphs, in which it is easy to create tasks with multiple rewarding strategies utilizing different resources (i.e., multiple niches). We show that agents trained this way can escape poor-but-attractive local optima and instead converge to harder-to-discover, higher-value strategies, both in the artificial chemistry environments and in simpler illustrative environments.
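The learning rule described above can be illustrated with a minimal tabular sketch. This is not the paper's implementation: the names (`n_policies`, `repulsion`, `update`), the tabular setting, and the fixed-penalty form of the cross-policy term are all illustrative assumptions. Each sub-policy performs a standard Q-learning update on its own experience, while the same transition is used to push down the other sub-policies' value estimates for the visited state-action pair:

```python
import numpy as np

# Illustrative sketch of a competitive-exclusion-style learning rule
# (tabular analogue; hyperparameter names and values are assumptions).
n_states, n_actions, n_policies = 5, 2, 3
Q = np.zeros((n_policies, n_states, n_actions))  # one Q-table per sub-policy

alpha = 0.1      # learning rate
gamma = 0.9      # discount factor
repulsion = 0.05 # strength of the cross-policy exclusion term (assumed form)

def update(k, s, a, r, s_next, done):
    """Q-learning update for sub-policy k on its own transition, plus a
    repulsion term that lowers every other sub-policy's estimate of (s, a).
    The full method only discourages sub-policies that have alternatives;
    this sketch omits that condition for brevity."""
    target = r + (0.0 if done else gamma * Q[k, s_next].max())
    Q[k, s, a] += alpha * (target - Q[k, s, a])
    for j in range(n_policies):
        if j != k:
            # Competitive exclusion: frequent visits by sub-policy k
            # make (s, a) look less valuable to the other sub-policies.
            Q[j, s, a] -= repulsion

# Example: sub-policy 0 receives reward 1.0 for (s=0, a=0).
update(k=0, s=0, a=0, r=1.0, s_next=1, done=True)
print(Q[0, 0, 0])  # raised by sub-policy 0's own TD update
print(Q[1, 0, 0])  # pushed down by the exclusion term
```

Repeated visits by one sub-policy thus carve out a niche: the others are driven toward states that remain valuable under their own estimates.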