State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model. However, such modules are trained separately for each task and thus do not enable sharing information across tasks. In addition, the number of parameters in these methods scales with the number of tasks and layers, hindering their efficiency in real-world settings. In this paper, we show that we can learn adapter parameters for all layers and tasks via shared hypernetworks that condition on the task, adapter position, and layer id in a transformer model. This parameter-efficient multi-task learning framework achieves the best of both worlds: it shares knowledge across tasks through the hypernetworks while allowing the model to adapt to each individual task through task-specific adapters. Experiments on the well-known GLUE benchmark show improved performance in multi-task learning while adding only $0.29\%$ parameters per task. We additionally demonstrate substantial performance improvements in low-resource scenarios and few-shot domain generalization.
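To make the mechanism concrete, the following is a minimal, hypothetical PyTorch sketch of the idea the abstract describes: a single shared hypernetwork maps a learned embedding of (task, layer id, adapter position) to the weight matrices of a bottleneck adapter. All class and variable names, dimensions, and the ReLU-bottleneck adapter form are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AdapterHypernetwork(nn.Module):
    """Sketch: generate adapter weights from embeddings of
    (task, layer, adapter position). Names and sizes are assumed."""

    def __init__(self, num_tasks, num_layers, num_positions=2,
                 d_model=768, bottleneck=64, d_embed=64):
        super().__init__()
        # Learned embeddings for each conditioning variable.
        self.task_emb = nn.Embedding(num_tasks, d_embed)
        self.layer_emb = nn.Embedding(num_layers, d_embed)
        # e.g. position 0 = after attention, 1 = after feed-forward.
        self.pos_emb = nn.Embedding(num_positions, d_embed)
        # Project the concatenated condition to a single source embedding.
        self.project = nn.Linear(3 * d_embed, d_embed)
        # Shared generators: one linear map per adapter weight tensor,
        # reused for every task, layer, and position.
        self.gen_down = nn.Linear(d_embed, d_model * bottleneck)
        self.gen_up = nn.Linear(d_embed, bottleneck * d_model)
        self.d_model, self.bottleneck = d_model, bottleneck

    def forward(self, task_id, layer_id, pos_id):
        cond = torch.cat([self.task_emb(task_id),
                          self.layer_emb(layer_id),
                          self.pos_emb(pos_id)], dim=-1)
        z = self.project(cond)
        # Reshape the flat outputs into the adapter's weight matrices.
        w_down = self.gen_down(z).view(self.bottleneck, self.d_model)
        w_up = self.gen_up(z).view(self.d_model, self.bottleneck)
        return w_down, w_up

def adapter_forward(h, w_down, w_up):
    """Apply a residual bottleneck adapter with generated weights."""
    return h + torch.relu(h @ w_down.T) @ w_up.T

# Usage: one hypernetwork serves all tasks, layers, and positions.
hyper = AdapterHypernetwork(num_tasks=8, num_layers=12)
w_down, w_up = hyper(torch.tensor(0), torch.tensor(3), torch.tensor(1))
h = torch.randn(4, 768)          # a batch of hidden states
out = adapter_forward(h, w_down, w_up)
```

Under this setup, only the hypernetwork's parameters (plus the small task embeddings) are trained, so the per-task parameter cost is roughly the task embedding alone, which is consistent with the abstract's claim of a small per-task overhead.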