Strong inductive biases are a key component of human intelligence, allowing people to learn a variety of tasks quickly. Although meta-learning has emerged as an approach for endowing neural networks with useful inductive biases, agents trained via meta-learning may acquire strategies very different from those of humans. We show that co-training these agents to predict representations derived from natural language task descriptions, and from programs induced to generate such tasks, guides them toward human-like inductive biases. Human-generated language descriptions and program induction with library learning both produce more human-like inductive biases in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without library learning), suggesting that the abstraction supported by these representations is key. This work shows that natural language and programs can serve as repositories of human-like inductive bias, and demonstrates a general and flexible approach to instilling such biases in artificial agents.
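To make the co-training setup concrete, the sketch below shows one plausible way to combine a meta-RL objective with an auxiliary loss that predicts a fixed embedding of the task's language description (or induced program) from the agent's recurrent state. This is a minimal illustration under assumed details, not the paper's implementation: the class and function names (`CoTrainedAgent`, `co_training_loss`, `aux_head`), the GRU encoder, the MSE prediction loss, and the `aux_weight` coefficient are all hypothetical choices.

```python
import torch
import torch.nn as nn

class CoTrainedAgent(nn.Module):
    """Meta-RL agent with an auxiliary head that predicts a task-representation
    embedding from its recurrent state (all names here are illustrative)."""

    def __init__(self, obs_dim, n_actions, hidden_dim, desc_dim):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, n_actions)
        self.value_head = nn.Linear(hidden_dim, 1)
        # Auxiliary head: map the agent's state into the space of
        # pre-computed language/program representations of the task.
        self.aux_head = nn.Linear(hidden_dim, desc_dim)

    def forward(self, obs_seq):
        h, _ = self.encoder(obs_seq)   # (batch, time, hidden_dim)
        last = h[:, -1]                # final recurrent state
        return self.policy_head(last), self.value_head(last), self.aux_head(last)


def co_training_loss(agent, obs_seq, desc_embedding, rl_loss, aux_weight=1.0):
    """Total loss = standard meta-RL loss + auxiliary prediction loss.

    `desc_embedding` is a fixed embedding of the task's natural-language
    description or induced program; `rl_loss` is the usual actor-critic
    objective, assumed to be computed elsewhere.
    """
    _, _, pred = agent(obs_seq)
    aux_loss = nn.functional.mse_loss(pred, desc_embedding)
    return rl_loss + aux_weight * aux_loss
```

On this reading, the auxiliary term shapes the agent's internal representations toward the abstractions captured by language and programs, while the reinforcement learning objective is left unchanged.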