We investigate how activation functions can be used to describe neural firing in an abstract way, and in turn, why they work well in artificial neural networks. We discuss how a spike in a biological neurone belongs to a particular universality class of phase transitions in statistical physics. We then show that the artificial neurone is, mathematically, a mean-field model of biological neural membrane dynamics, which arises from modelling spiking as a phase transition. This allows us to treat selective neural firing abstractly and to formalise the role of the activation function in perceptron learning. Alongside deriving this model and specifying the analogous neural case, we analyse the phase transition to understand the physics of neural network learning. Together, these results show that there is not only a biological meaning but also a physical justification for the emergence and performance of canonical activation functions; implications for neural learning and inference are also discussed.