Graph Neural Networks (GNNs) have proven effective across a wide range of molecular property prediction and structured learning problems. However, their effectiveness is known to be hindered by practical challenges such as oversmoothing. We introduce ``Noisy Nodes'', a very simple technique for improved GNN training, in which we corrupt the input graph with noise and add a noise-correcting node-level loss. Adding noise helps combat overfitting, and the noise-correction loss helps ameliorate oversmoothing by encouraging diverse node latents. We demonstrate that Noisy Nodes is compatible with multiple architectures, particularly very deep GNNs, and achieves state-of-the-art results on several challenging molecular prediction tasks. Our regulariser applies well-studied methods in simple, straightforward ways, which allows even generic architectures not designed for quantum chemistry to achieve competitive results on molecular tasks. Our results suggest Noisy Nodes can serve as a complementary building block in the GNN toolkit for 3D molecular property prediction and beyond.
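The objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `model`, its two-headed output, the noise scale `sigma`, and the auxiliary weight `aux_weight` are all hypothetical placeholders, and the node-level head is assumed to reconstruct the uncorrupted node features.

```python
import numpy as np

def noisy_nodes_loss(model, x, y, sigma=0.02, aux_weight=1.0, rng=None):
    """Sketch of a Noisy Nodes training objective (hypothetical API).

    model(x) -> (pred, node_out): a graph-level prediction and a
    per-node output head used for denoising.
    x: per-node input features (e.g. 3D positions), shape (num_nodes, d).
    y: graph-level regression target.
    """
    rng = rng or np.random.default_rng(0)
    # Corrupt the input graph with Gaussian noise.
    noise = rng.normal(0.0, sigma, size=x.shape)
    pred, node_out = model(x + noise)
    # Primary (graph-level) task loss.
    primary = np.mean((pred - y) ** 2)
    # Node-level noise-correction loss: reconstruct the clean inputs,
    # which encourages diverse per-node latents.
    denoise = np.mean((node_out - x) ** 2)
    return primary + aux_weight * denoise
```

In practice the auxiliary head is discarded at inference time; only the corrupted-input training signal and the extra loss term change relative to standard training.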