Researchers use statistical physics and "toy models" to explain how neural networks avoid overfitting and stabilize learning in high-dimensional spaces.
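One family of toy models these analyses study is overparameterized linear regression, where a model with more parameters than data points fits the training set exactly yet keeps bounded test error. The sketch below is our own minimal illustration of that phenomenon, not code from the research described; the teacher/student setup and all variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (an illustration, not the researchers' model):
# a linear "teacher" rule generates the data; a "student" fits it with
# minimum-norm least squares, even though parameters outnumber samples.
n_samples, n_features = 20, 100          # high-dimensional: d > n
teacher = rng.standard_normal(n_features) / np.sqrt(n_features)

X = rng.standard_normal((n_samples, n_features))
y = X @ teacher

# Minimum-norm (pseudoinverse) solution: it interpolates the training
# data perfectly, yet its test error stays bounded -- the kind of
# "overfitting without blowing up" that statistical-physics analyses
# of toy models try to explain.
student = np.linalg.pinv(X) @ y

train_mse = np.mean((X @ student - y) ** 2)

X_test = rng.standard_normal((1000, n_features))
test_mse = np.mean((X_test @ student - X_test @ teacher) ** 2)

print(f"train MSE: {train_mse:.2e}")   # essentially zero (interpolation)
print(f"test  MSE: {test_mse:.2e}")    # nonzero but bounded
```

The point of such toy models is exactly this tractability: the train/test behavior can be computed in closed form and compared against what full neural networks do empirically.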