Abstract: Subcode-ensemble decoders improve iterative decoding by running multiple decoders in parallel over carefully chosen subcodes, increasing the likelihood that at least one decoder avoids the dominant trapping structures. Achieving strong diversity gains, however, requires constructing many subcodes that satisfy a linear covering property, yet existing approaches lack a systematic way to scale the ensemble size while preserving this property. This paper introduces hierarchical subcode ensemble decoding (HSCED), a new ensemble decoding framework that expands the number of constituent decoders while still guaranteeing linear covering. The key idea is to recursively generate subcode parity constraints in a hierarchical structure so that coverage is maintained at every level, enabling large ensembles with controlled complexity. To demonstrate its effectiveness, we apply HSCED to belief propagation (BP) decoding of polar codes, where dense parity-check matrices induce severe stopping-set effects that limit conventional BP. Simulations confirm that HSCED delivers significant block-error-rate improvements over standard BP and conventional subcode-ensemble decoding under the same decoding-latency constraint.
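To make the hierarchical construction concrete, the following is a minimal sketch under one standard way to obtain a covering family, which may differ from the paper's specific construction: for any two binary constraint vectors a1 and a2, every codeword x satisfies at least one of a1·x = 0, a2·x = 0, or (a1+a2)·x = 0 (mod 2), so the three induced subcodes jointly cover the parent code, and applying this split recursively to each subcode preserves coverage at every level. The function names and the random choice of constraints are illustrative.

```python
import numpy as np

def covering_triple(rng, n):
    """Return three constraint vectors a1, a2, a1+a2 (mod 2).

    For any binary word x, at least one of a1.x, a2.x, (a1+a2).x
    equals 0 mod 2, so the three induced subcodes jointly cover
    the parent code.  (A zero row is harmless: it is a trivial
    constraint whose subcode equals the parent.)
    """
    a1 = rng.integers(0, 2, n)
    a2 = rng.integers(0, 2, n)
    return [a1, a2, (a1 + a2) % 2]

def hierarchical_constraints(H, depth, rng=None):
    """Recursively expand each subcode with a fresh covering triple.

    Returns a list of augmented parity-check matrices, one per leaf
    subcode.  Depth d yields 3**d matrices, each adding d rows to H,
    and the union of the leaf subcodes still covers the original code.
    """
    rng = rng or np.random.default_rng()
    n = H.shape[1]
    leaves = [H]
    for _ in range(depth):
        next_leaves = []
        for Hs in leaves:
            for a in covering_triple(rng, n):
                next_leaves.append(np.vstack([Hs, a]))
        leaves = next_leaves
    return leaves
```

Each leaf matrix can then be handed to an independent BP decoder, so depth d produces 3^d constituent decoders whose subcodes still cover the original code.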
Abstract: Ultra-reliable low-latency communications (URLLC) operate with short packets, where finite-blocklength effects make near-maximum-likelihood (near-ML) decoding desirable but often too costly. This paper proposes a two-stage near-ML decoding framework that applies to any linear block code. In the first stage, we run a low-complexity decoder to produce a candidate codeword, which is validated with a cyclic redundancy check (CRC). When this stage succeeds, we terminate immediately. When it fails, we invoke a second-stage decoder, termed multipoint code-weight sphere decoding (MP-WSD). The central idea behind MP-WSD is to concentrate the ML search where it matters. We pre-compute a set of low-weight codewords and use them to generate structured local perturbations of the current estimate. Starting from the first-stage output, MP-WSD iteratively explores a small Euclidean sphere of candidate codewords formed by adding selected low-weight codewords, tightening the search region as better candidates are found. This design keeps the average complexity low: at high signal-to-noise ratio, the first stage succeeds with high probability and the second stage is rarely activated; when it is activated, the search remains localized. Simulation results show that the proposed decoder attains near-ML performance for short-blocklength, low-rate codes while maintaining low decoding latency.
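As a rough illustration of the second stage, here is a minimal sketch of the local sphere search, assuming BPSK signaling over an AWGN channel; the function name wsd_refine and its parameters are hypothetical, and the full MP-WSD maintains multiple search points rather than the single one shown here. Because the sum of two codewords of a linear code is again a codeword, every candidate examined is a valid codeword near the current estimate.

```python
import numpy as np

def wsd_refine(y, c0, low_wt_cws, max_iter=10):
    """Local sphere search around a first-stage estimate.

    y           : received real vector (BPSK over AWGN assumed)
    c0          : first-stage codeword estimate, binary array
    low_wt_cws  : pre-computed low-weight codewords (list of binary arrays)
    """
    def dist(c):
        # Squared Euclidean distance to the BPSK image of c (0 -> +1, 1 -> -1).
        return np.sum((y - (1.0 - 2.0 * c)) ** 2)

    best, best_d = c0.copy(), dist(c0)
    for _ in range(max_iter):
        improved = False
        for w in low_wt_cws:
            cand = (best + w) % 2          # perturb by a low-weight codeword
            d = dist(cand)
            if d < best_d:                 # keep only improving candidates,
                best, best_d = cand, d     # shrinking the search sphere
                improved = True
        if not improved:                   # no perturbation lands inside
            break                          # the current sphere: stop
    return best
```

Accepting only candidates that reduce the distance is what tightens the search region around the received vector, and the loop terminates as soon as no perturbation improves on the current estimate.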
Abstract: Ultra-reliable low-latency communications (URLLC) demand high-performance error-correcting codes and decoders in the finite-blocklength regime. This letter introduces a novel two-stage near-maximum-likelihood (near-ML) decoding framework applicable to any linear block code. Our approach first employs a low-complexity initial decoder. If this initial stage fails a cyclic redundancy check (CRC), it triggers a second stage: the proposed code-weight sphere decoding (WSD). WSD iteratively refines the codeword estimate by exploring a localized sphere of candidates constructed from pre-computed low-weight codewords. This strategy adaptively minimizes computational overhead at high signal-to-noise ratios while achieving near-ML performance, especially for low-rate codes. Extensive simulations demonstrate that our two-stage decoder provides an excellent trade-off between decoding reliability and complexity, establishing it as a promising solution for next-generation URLLC systems.