Abstract: Barren-plateau results have established exponential gradient suppression as a widely cited obstacle to the scalability of variational quantum algorithms. Whether and when these results extend to a given objective has been addressed through loss-specific arguments, but a general structural characterization has remained open. We show that the objective itself admits a fixed-observable representation if and only if the loss is affine in the measured statistics, thereby identifying the exact boundary of the standard concentration-based proof template. Existing transfer results for non-affine losses achieve this reduction under additional assumptions; our characterization implies that such a reduction is not structurally available for a class of non-affine objectives, placing them outside the automatic reach of the existing proof template. Beyond the affine regime, a chain-rule decomposition reveals three governing factors -- model responsivity, loss-side signal, and transmittance -- and induces a loss-class dichotomy: bounded-gradient losses inherit suppression, while amplification-capable losses can in principle counteract it. In the exponentially wide setting, both classes fail, but for different structural reasons. When the interface is instead designed at polynomial width -- exposing coarse-grained statistics rather than individual bitstring probabilities -- the exponential-dimensional obstruction is relaxed and the dichotomy plays a genuine role. In a numerical demonstration on a charge-conserving quantum system, the amplification-capable objective produces resolved gradients several orders of magnitude larger than affine and inheriting baselines at comparable shot budgets. Over the tested interval, its scaling trend is statistically distinguished from the exponential trend of both alternatives. The boundary is affine; what lies beyond it is a representation-design problem.
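The three-factor decomposition can be sketched schematically. Writing $p(\theta)$ for the raw measurement distribution, $s(p)$ for the statistics exposed by the interface, and $\mathcal{L}(s)$ for the loss (all notation here is illustrative, not the paper's), the chain rule gives
\[
\partial_\theta \mathcal{L} \;=\; \frac{\partial \mathcal{L}}{\partial s}\,\frac{\partial s}{\partial p}\,\frac{\partial p}{\partial \theta},
\]
where the three Jacobian factors can be read, respectively, as loss-side signal, transmittance of the interface, and model responsivity. Under this reading, a bounded-gradient loss caps the first factor, so suppression of $\partial p / \partial \theta$ is inherited by the full gradient, while an amplification-capable loss allows the first factor to grow and, in principle, counteract it.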
Abstract: Qubit-efficient optimization seeks to represent an $N$-variable combinatorial problem within a Hilbert space smaller than $2^N$, using only as much quantum structure as the objective itself requires. Quadratic unconstrained binary optimization (QUBO) problems, for example, depend only on pairwise information -- expectations and correlations between binary variables -- yet standard quantum circuits explore exponentially large state spaces. We recast qubit-efficient optimization as a geometry problem: the minimal representation should match the $O(N^2)$ structure of quadratic objectives. The key insight is that the local-consistency problem -- ensuring that pairwise marginals correspond to a realizable global distribution -- coincides exactly with the Sherali-Adams level-2 polytope $\mathrm{SA}(2)$, the tightest convex relaxation expressible at the two-body level. Previous qubit-efficient approaches enforced this consistency only implicitly. Here we make it explicit: (a) anchoring learning to the $\mathrm{SA}(2)$ geometry, (b) projecting via a differentiable iterative-proportional-fitting (IPF) step, and (c) decoding through a maximum-entropy Gibbs sampler. This yields a logarithmic-width pipeline ($2\lceil\log_2 N\rceil + 2$ qubits) that is classically simulable yet achieves strong empirical performance. On Gset Max-Cut instances ($N = 800$--$2000$), depth-2--3 circuits reach near-optimal ratios ($r^* \approx 0.99$), surpassing direct $\mathrm{SA}(2)$ baselines. The framework resolves the local-consistency gap by giving it a concrete convex geometry and a minimal differentiable projection, establishing a clean polyhedral baseline. Extending beyond $\mathrm{SA}(2)$ naturally leads to spectrahedral geometries, where curvature encodes global coherence and genuine quantum structure becomes necessary.
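The IPF projection step can be illustrated in its simplest form: alternately rescaling a pairwise table so its row and column sums match prescribed singleton marginals. This is a minimal sketch under assumed conventions, not the paper's implementation; the function name `ipf_project` and its signature are ours.

```python
import numpy as np

def ipf_project(P, p_i, p_j, iters=50):
    """Iterative proportional fitting: rescale a pairwise table P
    so its marginals match the singleton marginals p_i and p_j,
    the basic consistency condition of the SA(2) polytope."""
    Q = P.copy()
    for _ in range(iters):
        Q *= (p_i / Q.sum(axis=1))[:, None]  # match row marginals
        Q *= (p_j / Q.sum(axis=0))[None, :]  # match column marginals
    return Q

# Example: project the uniform 2x2 table onto marginals (0.7, 0.3) and (0.6, 0.4).
P = np.full((2, 2), 0.25)
Q = ipf_project(P, np.array([0.7, 0.3]), np.array([0.6, 0.4]))
```

Because each rescaling is a smooth function of its inputs, a fixed number of IPF sweeps is differentiable end to end, which is what makes the projection usable inside a gradient-trained pipeline.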