Recent work has shown remarkable progress in training artificial agents to understand natural language, but relies on large amounts of raw data, incurring substantial compute and memory costs. Another line of work builds on the idea of training artificial agents via multi-agent communication, while using small amounts of task-specific human data to ground the emergent language in natural language. This allows agents to communicate with humans without requiring large numbers of expensive human demonstrations. Evolutionary studies have shown that simpler and more easily adaptable languages arise from communication within large and diverse populations. We model this supposition with artificial agents and propose an adaptive population-based meta-reinforcement learning approach that builds such a population iteratively. We report empirical results on two referential games involving natural language, in which our agents outperform all baselines in both task performance and language score. We demonstrate that our method induces constructive diversity in a growing population of agents, which is beneficial for training the meta-agent. We also show that our method yields improved captioning and translation models given just a few samples. Furthermore, we perform a human evaluation with our trained agents and show that humans can both speak to and listen to these agents while achieving better task performance.