Abstract: We revisit the classical result of Morris et al.~(AAAI'19) that message-passing graph neural networks (MPNNs) are equal in their distinguishing power to the Weisfeiler--Leman (WL) isomorphism test. Morris et al.~prove their simulation result for the ReLU activation function and $O(n)$-dimensional feature vectors, where $n$ is the number of nodes of the graph. By introducing randomness into the architecture, Aamand et al.~(NeurIPS'22) improved this bound to $O(\log n)$-dimensional feature vectors, again for the ReLU activation, although at the expense of guaranteeing perfect simulation only with high probability. Recently, Amir et al.~(NeurIPS'23) have shown that for any non-polynomial analytic activation function, 1-dimensional feature vectors suffice. In this paper, we give a simple proof of the result of Amir et al.~and provide an independent experimental validation of it.