Convolutional networks, in which all neurons within a pool share the same input weights, are ubiquitous in deep learning. They provide a strong inductive bias for images, which reduces the number of parameters, shortens training time, and improves accuracy. As a model of the brain, however, they are seriously problematic: they require weight sharing, and neurons have no known mechanism for communicating either their weights or their weight updates to other neurons. Locally connected networks, which keep the connectivity pattern of convolutions but drop weight sharing, avoid this implausibility, yet they significantly underperform convolutional ones. This discrepancy is troublesome for studies that use convolutional networks to explain activity in the visual system. Here we study plausible alternatives to weight sharing that pursue the same regularization principle: making each neuron within a pool respond similarly to identical inputs. The most natural approach is to show the network multiple translations of the same image, akin to saccades in animal vision; however, this requires many translations and does not close the performance gap. We propose instead to add lateral connectivity to a locally connected network and to let it learn via Hebbian plasticity, which requires the network to pause occasionally for a sleep-like phase of ``weight sharing''. Our method implements a convolution-like inductive bias, enabling locally connected networks to achieve nearly convolutional performance on ImageNet. This work thus supports convolutional networks as a model of the visual stream.
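The sleep-like ``weight sharing'' phase can be illustrated with an idealized sketch: in the limit, pulling every filter in a pool toward a common value reproduces exact convolutional weight sharing, while partial pulls give a softer, convolution-like regularization. The function below is a hypothetical stand-in for the paper's Hebbian dynamics (the function name, the `strength` parameter, and the weight layout are illustrative assumptions), representing a locally connected layer as one flattened filter per spatial position.

```python
import numpy as np

def sleep_phase_weight_sharing(W, strength=1.0):
    """Idealized sleep-phase weight sharing for a locally connected layer.

    W: array of shape (num_positions, filter_size), one filter per spatial
    position in the pool. Each filter is pulled toward the pool average;
    strength=1.0 yields exact convolutional weight sharing, smaller values
    only partially equalize the filters. (A sketch, not the paper's actual
    Hebbian update rule.)
    """
    pool_mean = W.mean(axis=0, keepdims=True)  # consensus filter for the pool
    return (1.0 - strength) * W + strength * pool_mean

rng = np.random.default_rng(0)
W = rng.normal(size=(9, 25))  # 9 positions, flattened 5x5 filters, all distinct
W_shared = sleep_phase_weight_sharing(W, strength=1.0)
# After a full sleep phase, every position holds the same filter:
assert np.allclose(W_shared, W_shared[0])
```

During wake phases the filters diverge again as each position learns from its own inputs, so the sketch would be interleaved with ordinary training steps.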