Machine unlearning refers to removing the contribution of a set of data points from a trained model. In a distributed setting, where a server orchestrates training using data available at a set of remote users, unlearning is essential for coping with late-detected malicious or corrupted users. Existing distributed unlearning algorithms require the server to store all model updates observed during training, incurring an immense storage overhead to preserve the ability to unlearn. In this work, we study lossy compression schemes for facilitating distributed server-side unlearning with a limited memory footprint. We propose memory-efficient distributed unlearning (MEDU), a hierarchical lossy compression scheme tailored for server-side unlearning that integrates user sparsification, differential thresholding, and random lattice coding to substantially reduce the memory footprint. We rigorously analyze MEDU, deriving an upper bound on the difference between the desired model trained from scratch and the model unlearned from the lossy compressed stored updates. Our bound outperforms the best known bounds for uncompressed distributed server-side unlearning, even though lossy compression is incorporated. We further provide a numerical study, which shows that suitably designed lossy compression enables distributed unlearning with a notably reduced memory footprint at the server while preserving the utility of the unlearned model.
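
The abstract names three compression stages (user sparsification, differential thresholding, and lattice coding). The following is a minimal, purely illustrative sketch of how such a storage pipeline could be composed at the server; the function names, the scalar lattice, the thresholding rule, and all parameter choices are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def sparsify_users(user_updates, keep_ratio):
    """User sparsification (assumed form): retain the updates of a random
    subset of users; the remaining updates are not stored for unlearning."""
    num_users = len(user_updates)
    num_keep = max(1, int(keep_ratio * num_users))
    kept = rng.choice(num_users, size=num_keep, replace=False)
    return {u: user_updates[u] for u in kept}


def threshold_differences(kept_updates, reference, tau):
    """Differential thresholding (assumed form): store only updates whose
    deviation from a reference vector (e.g., the aggregated update)
    exceeds the threshold tau."""
    return {u: upd for u, upd in kept_updates.items()
            if np.linalg.norm(upd - reference) > tau}


def lattice_quantize(update, step, dither):
    """Randomized (dithered) lattice coding, here with a simple uniform
    scalar lattice: subtractive dithered quantization with spacing `step`."""
    return step * np.round((update + dither) / step) - dither


# Toy usage: 10 users, model dimension 6 (hypothetical values).
updates = {u: rng.standard_normal(6) for u in range(10)}
aggregate = np.mean(list(updates.values()), axis=0)

kept = sparsify_users(updates, keep_ratio=0.5)
significant = threshold_differences(kept, aggregate, tau=1.0)
dither = rng.uniform(-0.05, 0.05, size=6)
stored = {u: lattice_quantize(upd, step=0.1, dither=dither)
          for u, upd in significant.items()}
print(f"stored {len(stored)} of {len(updates)} user updates")
```

In such a sketch, only the compressed subset of updates is retained, so the memory needed to later remove a user's contribution scales with the kept, quantized updates rather than with the full training history.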