Perceiving the world's state through high-dimensional observations makes understanding, acting, and exploring in an environment considerably harder. A common remedy is to transform this complex data into lower-dimensional, structured representations that improve learning for reinforcement learning agents. While representations are regularly used to transform inputs, agents can apply them in multiple ways, and different use-cases can require different properties. Here, we investigate two use-cases in which representation learning can structure agent behaviour: shaping the agent's input by transforming its observations, and shaping its output by generating tasks for exploration. We considerably accelerate agent training and learn to solve complex robot manipulation tasks in simulation. Beyond these performance benefits for the agent, we also investigate benefits for system design and engineering. Our experiments show that, while some representation properties are beneficial across use-cases, others differ in their effect between them.
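As a rough illustration of the two use-cases (a minimal sketch, not the paper's method), the snippet below assumes a learned encoder whose latent space serves double duty: it compresses raw observations before the policy sees them, and it is sampled to generate exploration goals. All names here (`Encoder`, `sample_goal`, `policy`) are hypothetical stand-ins.

```python
import numpy as np

class Encoder:
    """Stand-in for a learned state-representation model (e.g. a VAE)."""
    def __init__(self, obs_dim: int, latent_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(latent_dim, obs_dim))

    def encode(self, obs: np.ndarray) -> np.ndarray:
        # Use-case 1: transform a high-dimensional observation into a
        # low-dimensional, structured latent for the agent to learn from.
        return np.tanh(self.W @ obs)

    def sample_goal(self, rng: np.random.Generator) -> np.ndarray:
        # Use-case 2: sample a point in latent space and treat it as a
        # self-generated task (goal) for exploration.
        return rng.normal(size=self.W.shape[0])

def policy(latent: np.ndarray, goal: np.ndarray) -> np.ndarray:
    """Toy goal-conditioned policy acting on latents rather than raw inputs."""
    return np.clip(goal - latent, -1.0, 1.0)  # move the latent towards the goal

rng = np.random.default_rng(1)
enc = Encoder(obs_dim=10_000, latent_dim=8)  # e.g. a flattened image observation
obs = rng.normal(size=10_000)                # raw high-dimensional observation
goal = enc.sample_goal(rng)                  # the representation generates the task
action = policy(enc.encode(obs), goal)       # the representation transforms the input
```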