Researchers are now using deep learning models to explore the emergence of language in various types of language games, where simulated agents interact and develop an emergent language to solve a shared task. Although it is intuitive that language games posing different communicative challenges might require emergent languages that encode different amounts of information, no existing work explores the expressivity of emergent languages. In this work, we define the expressivity of an emergent language in terms of its generalisation performance across different language games, and validate the hypothesis that expressivity reflects a trade-off between the complexity and the unpredictability of the language game in which the language emerges. Our second novel contribution is the introduction of a contrastive loss into the training of an emergent communication system. We show that the contrastive loss alleviates the collapse of message types observed with standard referential loss functions.
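To make the contrastive-loss contribution concrete, the sketch below shows an InfoNCE-style contrastive objective of the kind commonly used in such settings: each message embedding is pulled toward its paired target-object embedding and pushed away from the other objects in the batch. This is a minimal illustrative implementation in numpy; the function name, the temperature value, and the exact normalisation are assumptions, not the paper's actual formulation.

```python
import numpy as np

def contrastive_loss(msg_emb, obj_emb, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch, not the
    paper's exact loss). Row i of msg_emb is the speaker's message
    embedding for the target in row i of obj_emb; all other rows in
    the batch act as negatives (distractors)."""
    # L2-normalise so similarities are cosine similarities
    m = msg_emb / np.linalg.norm(msg_emb, axis=1, keepdims=True)
    o = obj_emb / np.linalg.norm(obj_emb, axis=1, keepdims=True)
    logits = m @ o.T / temperature  # pairwise message-object similarities
    # numerically stable log-softmax over candidate objects per message
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the matching (diagonal) pairs are the positives
    return -np.mean(np.diag(log_probs))
```

Because every distinct target must be separable from the in-batch distractors, the speaker is discouraged from mapping many targets onto the same message, which is one intuition for why a contrastive objective counteracts message-type collapse.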