We provide a counterexample to the minimal unimodal conjecture for polynomial neural networks (PNNs) with power activation functions. For fixed input and output widths, the conjecture states that every minimal filling architecture has a unimodal sequence of hidden-layer widths. We found the counterexample via a frontier search and certified it using recursive dimension bounds and symbolic computation. Notably, several subarchitectures of this example exhibit large defect, in contrast to the predominantly small-defect behavior observed in prior examples.
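To make the unimodality condition precise, here is a minimal formalization; the notation $d_1, \dots, d_{L-1}$ for the hidden-layer widths is introduced here for illustration and is not taken from the statement above. A sequence of hidden widths is unimodal if it rises weakly to a peak and then falls weakly:
\[
  d_1 \le d_2 \le \cdots \le d_k \ge d_{k+1} \ge \cdots \ge d_{L-1}
  \qquad \text{for some } k \in \{1, \dots, L-1\}.
\]
For instance, $(2,5,3)$ is unimodal while $(5,2,5)$ is not; a counterexample to the conjecture is thus a minimal filling architecture whose hidden widths violate this condition.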