While research on Multi-Agent Debate (MAD) has advanced, its efficacy in coordinating complex stakeholder interests in domains such as travel planning remains largely unexplored. To bridge this gap, we propose MIND (Multi-agent Inference for Negotiation Dialogue), a framework designed to simulate realistic consensus-building among travelers with heterogeneous preferences. Grounded in the Theory of Mind (ToM), MIND introduces a Strategic Appraisal phase that infers an opponent's willingness ($w$) from linguistic cues with 90.2\% accuracy. Experimental results demonstrate that MIND outperforms traditional MAD frameworks, achieving a 20.5\% improvement in High-$w$ Hit and a 30.7\% increase in Debate Hit-Rate, effectively prioritizing high-stakes constraints. Furthermore, qualitative evaluation via LLM-as-a-Judge confirms that MIND surpasses baselines in Rationality (68.8\%) and Fluency (72.4\%), securing an overall win rate of 68.3\%. These findings validate that MIND effectively models human negotiation dynamics to reach persuasive consensus.
Fluid antenna systems (FAS) achieve spatial diversity by dynamically switching among $N$ densely packed ports, but the resulting spatially correlated Rayleigh channels render exact outage analysis intractable. Existing block-correlation models (BCM) impose structural approximations on the channel covariance matrix that can introduce optimistic performance bias. This paper proposes a principled Karhunen--Loève (KL) expansion framework that decomposes the $N$-dimensional correlated FAS channel into independent eigenmodes and performs a controlled rank-$K$ truncation, reducing the outage analysis to a $K$-dimensional integration with $K \ll N$. Closed-form outage expressions are derived for the rank-1 and rank-2 cases, and a general Gauss--Hermite quadrature formula is provided for arbitrary $K$. On the theoretical front, it is proved via Anderson's inequality that the KL approximation \emph{always} overestimates the outage probability, providing a conservative guarantee essential for secure system design. Leveraging the Slepian--Landau--Pollak concentration theorem, it is established that only $K^* = 2\lceil W \rceil + 1$ eigenmodes are needed regardless of $N$, where $W$ is the normalized aperture. It is further shown that the KL truncation achieves the Gaussian rate-distortion bound, certifying it as the information-theoretically optimal channel compression. Extensive numerical results confirm that (i) theoretical predictions match Monte Carlo simulations, (ii) the entropy fraction converges faster than the power fraction, (iii) the KL framework uniformly outperforms BCM in approximation accuracy while avoiding the optimistic bias inherent in block-diagonal models, and (iv) the effective degrees of freedom scale with the aperture rather than the number of ports.
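As a worked illustration of the rank-$K$ truncation, the following sketch builds the Jakes correlation matrix, keeps the leading $K^* = 2\lceil W \rceil + 1$ eigenmodes, and estimates outage by Monte Carlo; the port layout, threshold, and sample size are illustrative assumptions, not the paper's experimental configuration.
\begin{verbatim}
# Hedged sketch of the rank-K Karhunen-Loeve truncation for FAS outage.
# Assumptions: Jakes correlation rho(d) = J0(2*pi*d) over a normalized
# aperture W, unit-mean port gains, outage = best-port gain below x.
import numpy as np
from scipy.special import j0

N, W, x = 64, 2.0, 0.5                 # ports, aperture, outage threshold
K = 2 * int(np.ceil(W)) + 1            # K* = 2*ceil(W)+1 modes (paper's rule)
pos = np.linspace(0.0, W, N)           # port positions in wavelengths
Sigma = j0(2 * np.pi * np.abs(pos[:, None] - pos[None, :]))

lam, V = np.linalg.eigh(Sigma)         # eigenmodes of the correlated channel
lam, V = lam[::-1], V[:, ::-1]         # sort eigenvalues descending
print("power captured by K modes:", lam[:K].sum() / lam.sum())

rng = np.random.default_rng(0)
T = 50_000                             # Monte Carlo trials
# rank-K KL synthesis: h = V_K diag(sqrt(lam_K)) z, with z ~ CN(0, I_K)
z = (rng.standard_normal((T, K)) + 1j * rng.standard_normal((T, K))) / np.sqrt(2)
h = z * np.sqrt(np.maximum(lam[:K], 0.0)) @ V[:, :K].T
print("rank-K KL outage estimate:", np.mean(np.max(np.abs(h)**2, axis=1) < x))
\end{verbatim}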
One of the most common machine learning setups is logistic regression. In many classification models, including neural networks, the final prediction is obtained by applying a logistic link function to a linear score. In binary logistic regression, the feedback can be either soft labels, corresponding to the true conditional probability of the data (as in distillation), or sampled hard labels (taking values $\pm 1$). We point out a fundamental problem that arises even in a particularly favorable setting, where the goal is to learn a noise-free soft target of the form $\sigma(\mathbf{x}^{\top}\mathbf{w}^{\star})$. In the over-constrained case (i.e., when the number of samples $n$ exceeds the input dimension $d$), examples $(\mathbf{x}_i,\sigma(\mathbf{x}_i^{\top}\mathbf{w}^{\star}))$ suffice to recover $\mathbf{w}^{\star}$ and hence achieve the Bayes risk. However, we prove that when the examples are labeled by hard labels $y_i$ sampled from the same conditional distribution $\sigma(\mathbf{x}_i^{\top}\mathbf{w}^{\star})$ and $\mathbf{w}^{\star}$ is $s$-sparse, rotation-invariant algorithms are provably suboptimal: they incur an excess risk $\Omega\!\left(\frac{d-1}{n}\right)$, while there are simple non-rotation-invariant algorithms with excess risk $O(\frac{s\log d}{n})$. The simplest rotation-invariant algorithm is gradient descent on the logistic loss (with early stopping). A simple non-rotation-invariant algorithm for sparse targets that achieves the above upper bound runs gradient descent on weights $u_i, v_i$, with the linear weight $w_i$ reparameterized as the product $u_i v_i$.
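A minimal sketch of the contrast between the two algorithm families, under illustrative assumptions (Gaussian features, a nonnegative $s$-sparse target so the symmetric initialization $u = v$ suffices, and a fixed step size and iteration budget in place of tuned early stopping):
\begin{verbatim}
# Hedged sketch: gradient descent with the Hadamard reparameterization
# w = u * v for sparse logistic regression, vs. plain (rotation-invariant)
# gradient descent on w. Hard labels are sampled from sigma(x . w*).
import numpy as np

rng = np.random.default_rng(1)
n, d, s = 500, 200, 5
w_star = np.zeros(d); w_star[:s] = 1.0           # nonnegative s-sparse target
X = rng.standard_normal((n, d))
p = 1.0 / (1.0 + np.exp(-X @ w_star))
y = np.where(rng.random(n) < p, 1.0, -1.0)       # sampled hard labels

def grad(w):                                     # gradient of the logistic loss
    return -(X.T @ (y / (1.0 + np.exp(y * (X @ w))))) / n

u = np.full(d, 1e-3); v = np.full(d, 1e-3)       # small init promotes sparsity
w_gd = np.zeros(d)
for _ in range(2000):
    g = grad(u * v)
    u, v = u - 0.1 * g * v, v - 0.1 * g * u      # chain rule through w = u*v
    w_gd -= 0.1 * grad(w_gd)                     # rotation-invariant baseline
for name, w in [("u*v reparam", u * v), ("plain GD", w_gd)]:
    print(name, "||w - w*|| =", round(float(np.linalg.norm(w - w_star)), 3))
\end{verbatim}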
This paper develops a comprehensive analytical framework for the outage probability of fluid antenna system (FAS)-aided communications by modeling the antenna as a continuous aperture and approximating the Jakes (Bessel) spatial correlation with a Gaussian kernel $\rho_G(\delta) = e^{-\pi^2\delta^2}$. Three complementary analytical strategies are pursued. First, the Karhunen--Loève (KL) expansion under the Gaussian kernel is derived, yielding closed-form outage expressions for the rank-1 and rank-2 truncations and a Gauss--Hermite formula for arbitrary rank~$K$, with effective degrees of freedom $K_{\mathrm{eff}}^G \approx \pi\sqrt{2}\, W$. Second, rigorous two-sided outage bounds are established via Slepian's inequality and the Gaussian comparison theorem: by sandwiching the true correlation between equi-correlated models with $\rho_{\min}$ and $\rho_{\max}$, closed-form upper and lower bounds that avoid the optimistic bias of block-correlation models are obtained. Third, a continuous-aperture extreme value theory is developed using the Adler--Taylor expected Euler characteristic method and Piterbarg's theorem. The resulting outage expression $P_{\mathrm{out}} \approx 1 - e^{-x}(1 + \pi\sqrt{2}\, W\, x)$ depends only on the aperture~$W$ and threshold~$x$, is independent of the port count~$N$, and is identical for the Jakes and Gaussian models since both share the second spectral moment $\lambda_2 = 2\pi^2$. A Pickands-constant refinement for the deep-outage regime and a threshold-dependent effective diversity $N_{\mathrm{eff}} \approx 1 + \pi\sqrt{2}\, W\, x$ are further derived. Numerical results confirm that the Gaussian approximation incurs less than 10\% relative outage error for $W \leq 2$ and that the continuous-aperture formula converges with as few as $N \approx 10W$ ports.
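To make the headline formula concrete, a small sketch (with an assumed aperture, threshold, and $N \approx 10W$ port grid, none taken from the paper's experiments) evaluates $P_{\mathrm{out}} \approx 1 - e^{-x}(1 + \pi\sqrt{2}\, W\, x)$ and prints a Monte Carlo estimate under the Gaussian kernel for comparison:
\begin{verbatim}
# Hedged sketch: continuous-aperture Euler-characteristic outage formula
# vs. Monte Carlo under the Gaussian kernel rho_G(d) = exp(-pi^2 d^2).
import numpy as np

W, x = 1.0, 4.0                              # aperture and outage threshold
c = np.pi * np.sqrt(2.0) * W * x
print("EC formula P_out:", round(1 - np.exp(-x) * (1 + c), 4),
      "  N_eff ~", round(1 + c, 1))          # threshold-dependent diversity

N = int(np.ceil(10 * W))                     # ~10W ports suffice per the paper
pos = np.linspace(0.0, W, N)
Sigma = np.exp(-np.pi**2 * (pos[:, None] - pos[None, :])**2)
L = np.linalg.cholesky(Sigma + 1e-10 * np.eye(N))
rng = np.random.default_rng(2)
T = 200_000
z = (rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N))) / np.sqrt(2)
h = z @ L.T                                  # correlated Rayleigh ports
p_mc = float(np.mean(np.max(np.abs(h)**2, axis=1) < x))
print("Monte Carlo P_out:", round(p_mc, 4))
\end{verbatim}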
Large language models (LLMs) have driven substantial advances in speech language models (SpeechLMs), yielding strong performance in automatic speech recognition (ASR) under high-resource conditions. However, existing benchmarks predominantly focus on high-resource languages, leaving the ASR behavior of SpeechLMs in low-resource languages insufficiently understood. This gap is critical: practical ASR systems must reliably support low-resource languages and generalize across diverse language families, and the lack of such evaluation directly hinders the deployment of SpeechLM-based ASR in real-world multilingual scenarios. To address this gap, we propose \textbf{LoASR-Bench}, a comprehensive benchmark designed to evaluate \textbf{lo}w-resource \textbf{a}utomatic \textbf{s}peech \textbf{r}ecognition (\textbf{ASR}) of the latest SpeechLMs across diverse language families. LoASR-Bench comprises 25 languages from 9 language families, featuring both Latin and non-Latin scripts, and enables cross-linguistic and cross-script assessment of the ASR performance of current SpeechLMs. Experimental results highlight the limitations of the latest SpeechLMs in handling real-world low-resource languages.
We develop horizon-aware anytime-valid tests and confidence sequences for bounded means under a strict deadline $N$. Using the betting/e-process framework, we cast horizon-aware betting as a finite-horizon optimal control problem with state space $(t, \log W_t)$, where $t$ is the time and $W_t$ is the test martingale value. We first show that in certain interior regions of the state space, policies that deviate significantly from Kelly betting are provably suboptimal, while Kelly betting reaches the threshold with high probability. We then identify sufficient conditions showing that outside this region, betting more aggressively than Kelly can be better if the bettor is behind schedule, and betting less aggressively can be better if the bettor is ahead. Taken together, these results suggest a simple phase diagram in the $(t, \log W_t)$ plane, delineating regions where Kelly, fractional Kelly, and aggressive betting may each be preferable. Guided by this phase diagram, we introduce a deep reinforcement learning approach based on a universal Deep Q-Network (DQN) agent that learns a single policy from synthetic experience and maps simple statistics of past observations to bets across horizons and null values. In limited-horizon experiments, the learned DQN policy yields state-of-the-art results.
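For readers unfamiliar with the betting formulation, here is a minimal e-process sketch with a plug-in approximation to the Kelly bet; the clipping range, plug-in estimates, and simulated data are illustrative assumptions and do not implement the paper's horizon-aware policy:
\begin{verbatim}
# Hedged sketch: a betting e-process for H0: E[X] = m with X in [0, 1].
# Bets are predictable (computed from past data only), so wealth W_t is
# a test martingale under H0; we reject when W_t >= 1/alpha.
import numpy as np

rng = np.random.default_rng(3)
m, alpha, N = 0.5, 0.05, 500                 # null mean, level, deadline N
X = rng.beta(6, 4, size=N)                   # true mean 0.6, so H0 is false
logW, s1, s2, n_past, t_rej = 0.0, 0.0, 0.0, 0, None
for t, x in enumerate(X, start=1):
    mu = s1 / n_past if n_past else m        # plug-in mean of past data
    v = s2 / n_past if n_past else 0.25      # plug-in 2nd moment about m
    lam = np.clip((mu - m) / max(v, 1e-3),   # approximate Kelly bet,
                  -0.5 / (1 - m), 0.5 / m)   # clipped so 1 + lam*(x-m) > 0
    logW += np.log1p(lam * (x - m))          # multiply wealth by 1+lam*(x-m)
    s1 += x; s2 += (x - m) ** 2; n_past += 1
    if logW >= np.log(1.0 / alpha):          # anytime-valid rejection rule
        t_rej = t
        break
print("rejected at t =", t_rej, "with log-wealth", round(logW, 2))
\end{verbatim}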
We study online resource allocation among $N$ interacting modules over $T$ rounds. Unlike standard online optimization, costs are endogenous: they depend on the full allocation vector through an interaction matrix $W$ encoding pairwise cooperation and competition. We analyze three paradigms: (I) uniform allocation (cost-ignorant), (II) gated allocation (cost-estimating), and (III) competitive allocation via multiplicative weights update with interaction feedback (cost-revealing). Our main results establish a strict separation under adversarial sequences with bounded variation: uniform incurs $\Omega(T)$ regret, gated achieves $O(T^{2/3})$, and competitive achieves $O(\sqrt{T \log N})$. The performance gap stems from competitive allocation's ability to exploit endogenous cost information revealed through interactions. We further show that the topology of $W$ governs a computation-regret tradeoff. Full interaction ($|E| = O(N^2)$) yields the tightest bound but the highest per-step cost, while sparse topologies ($|E| = O(N)$) increase regret by at most $O(\sqrt{\log N})$ while reducing per-step cost from $O(N^2)$ to $O(N)$. Ring-structured topologies with both cooperative and competitive links, of which the five-element Wuxing topology is canonical, minimize the computation $\times$ regret product. These results provide the first formal regret-theoretic justification for decentralized competitive allocation in modular architectures and establish cost endogeneity as a fundamental challenge distinct from partial observability. Keywords: online learning, regret bounds, resource allocation, endogenous costs, interaction topology, multiplicative weights, modular systems, Wuxing topology
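The following sketch instantiates paradigm (III) under illustrative assumptions: a five-module ring with signed interactions standing in for the Wuxing topology, linear endogenous costs $c_t = \ell_t + W p_t$ rescaled to $[0,1]$, and the standard step size.
\begin{verbatim}
# Hedged sketch: multiplicative-weights allocation with endogenous costs.
# The ring interaction matrix and linear cost model are assumptions made
# for illustration, not the paper's exact setting.
import numpy as np

rng = np.random.default_rng(4)
N, T = 5, 5000
eta = np.sqrt(np.log(N) / T)                 # standard MWU step size
W = np.zeros((N, N))
for i in range(N):                           # 5-cycle with cooperative and
    W[i, (i + 1) % N] = -0.5                 # competitive links, echoing the
    W[i, (i + 2) % N] = +0.5                 # five-element Wuxing pattern

wts, alg_cost, cum_c = np.ones(N), 0.0, np.zeros(N)
for t in range(T):
    p = wts / wts.sum()                      # allocation played this round
    ell = rng.random(N)                      # exogenous cost component
    c = (ell + W @ p + 0.5) / 2.0            # endogenous cost, scaled to [0,1]
    alg_cost += c @ p                        # cost incurred by the allocation
    cum_c += c                               # cumulative per-module costs
    wts *= np.exp(-eta * c)                  # multiplicative-weights update
print("regret vs best fixed module:", round(alg_cost - cum_c.min(), 1),
      "  sqrt(T log N) =", round(np.sqrt(T * np.log(N)), 1))
\end{verbatim}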
In the present paper we study the performance of linear denoisers for noisy data of the form $\mathbf{x} + \mathbf{z}$, where $\mathbf{x} \in \mathbb{R}^d$ is the desired data with zero mean and unknown covariance $\mathbf{\Sigma}$, and $\mathbf{z} \sim \mathcal{N}(0, \mathbf{\Sigma}_{\mathbf{z}})$ is additive noise. Since the covariance $\mathbf{\Sigma}$ is not known, the standard Wiener filter cannot be employed for denoising. Instead we assume we are given samples $\mathbf{x}_1,\dots,\mathbf{x}_n \in \mathbb{R}^d$ from the true distribution. A standard approach would then be to estimate $\mathbf{\Sigma}$ from the samples and use it to construct an ``empirical'' Wiener filter. However, in this paper, motivated by the denoising step in diffusion models, we take a different approach whereby we train a linear denoiser $\mathbf{W}$ from the data itself. In particular, we synthetically construct noisy samples $\hat{\mathbf{x}}_i$ by adding Gaussian noise with covariance $\mathbf{\Sigma}_1 \neq \mathbf{\Sigma}_{\mathbf{z}}$ to the samples, and find the best $\mathbf{W}$ such that $\mathbf{W}\hat{\mathbf{x}}_i \approx \mathbf{x}_i$ in a least-squares sense. In the proportional regime $\frac{n}{d} \rightarrow \kappa > 1$, we use the {\it Convex Gaussian Min-Max Theorem (CGMT)} to analytically derive a closed-form expression for the generalization error of the denoiser obtained from this process. Using this expression, one can optimize over $\mathbf{\Sigma}_1$ to find the best possible denoiser. Our numerical simulations show that our denoiser outperforms the ``empirical'' Wiener filter in many scenarios and approaches the optimal Wiener filter as $\kappa \rightarrow \infty$.
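A small numerical sketch of the training procedure follows; the covariances, dimensions, and injected noise level $\mathbf{\Sigma}_1$ below are arbitrary choices for illustration (in the paper $\mathbf{\Sigma}_1$ is optimized via the CGMT expression, so which denoiser wins depends on that choice):
\begin{verbatim}
# Hedged sketch: train a linear denoiser W by least squares on synthetically
# noised samples, vs. the "empirical" Wiener filter built from Sigma_hat.
import numpy as np

rng = np.random.default_rng(5)
d, n, n_test = 50, 200, 10_000               # proportional regime n/d = 4
Sigma = np.diag(np.linspace(0.1, 3.0, d))    # unknown data covariance
Sig_z = 0.5 * np.eye(d)                      # test-time noise covariance
Sig_1 = 0.3 * np.eye(d)                      # injected training noise != Sig_z

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Xhat = X + rng.multivariate_normal(np.zeros(d), Sig_1, size=n)
W_learn = np.linalg.lstsq(Xhat, X, rcond=None)[0].T  # argmin sum ||W xh - x||^2

S_emp = X.T @ X / n                          # empirical covariance estimate
W_wien = S_emp @ np.linalg.inv(S_emp + Sig_z)        # "empirical" Wiener filter

Xt = rng.multivariate_normal(np.zeros(d), Sigma, size=n_test)
Y = Xt + rng.multivariate_normal(np.zeros(d), Sig_z, size=n_test)
for name, Wm in [("learned denoiser", W_learn), ("empirical Wiener", W_wien)]:
    print(name, "test MSE:", round(float(np.mean((Y @ Wm.T - Xt) ** 2)), 4))
\end{verbatim}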
We study the exact decision problem for feedback capacity of finite-state channels (FSCs). Given an encoding $e$ of a binary-input binary-output rational unifilar FSC with specified rational initial distribution, and a rational threshold $q$, we ask whether the feedback capacity satisfies $C_{fb}(W_e, \pi_{1,e}) \ge q$. We prove that this exact threshold problem is undecidable, even when restricted to a severely constrained class of rational unifilar FSCs with bounded state space. The reduction is effective and preserves rationality of all channel parameters. As a structural consequence, the exact threshold predicate does not lie in the existential theory of the reals ($\exists\mathbb{R}$), and therefore cannot admit a universal reduction to finite systems of polynomial equalities and inequalities over the real numbers. In particular, there is no algorithm deciding all instances of the exact feedback-capacity threshold problem within this class. These results do not preclude approximation schemes or solvability for special subclasses; rather, they establish a fundamental limitation for exact feedback-capacity reasoning in general finite-state settings. At the metatheoretic level, the undecidability result entails corresponding Gödel--Tarski--Löb incompleteness phenomena for sufficiently expressive formal theories capable of representing the threshold predicate.
Large language models (LLMs) contain billions of parameters, yet many exact values are not essential. We show that what matters most is the relative rank of weights (whether one connection is stronger or weaker than another) rather than their precise magnitudes. To reduce the number of unique weight values, we apply weight clustering to pretrained models, replacing every weight matrix with $K$ shared values obtained by $K$-means. For Llama 3.1-8B-Instruct and SmolLM2-135M, reducing each matrix to only 16--64 distinct values preserves strong accuracy without retraining, providing a simple, training-free method to compress LLMs on disk. Optionally fine-tuning only the cluster means (centroids) recovers 30--40 percent of the remaining accuracy gap at minimal cost. We then systematically randomize cluster means while keeping assignments fixed. Scrambling the relative ranks of the clusters degrades quality sharply (perplexity can increase by orders of magnitude), even when global statistics such as the mean and variance are preserved. In contrast, rank-preserving randomizations cause almost no loss at mid and late layers. However, when many layers are perturbed simultaneously, progressive layer-by-layer replacement reveals that scale drift, not rank distortion, is the dominant collapse mechanism; an affine correction $w' = aw + b$ with $a > 0$ (which preserves both rank order and the overall weight distribution) can substantially delay this drift. This rank-based perspective offers a new lens on model compression and robustness.
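A minimal sketch of the clustering step and the rank probe; the random matrix below is a stand-in for a pretrained layer, and the cluster count and sizes are illustrative:
\begin{verbatim}
# Hedged sketch: replace a weight matrix with K shared values via 1-D
# K-means, then construct rank-preserving vs. rank-scrambling randomized
# centroids (the paper's probe, here only as array constructions).
import numpy as np

def cluster_weights(Wm, K=16, iters=50):
    w = Wm.ravel()
    cents = np.quantile(w, (np.arange(K) + 0.5) / K)     # quantile init
    for _ in range(iters):                               # Lloyd's algorithm
        assign = np.abs(w[:, None] - cents[None, :]).argmin(axis=1)
        for k in range(K):
            if np.any(assign == k):
                cents[k] = w[assign == k].mean()
    return assign, cents

rng = np.random.default_rng(6)
Wm = 0.02 * rng.standard_normal((256, 256))  # stand-in for a pretrained layer
assign, cents = cluster_weights(Wm)
Wq = cents[assign].reshape(Wm.shape)         # 16 shared values, 4-bit indices
rel = float(np.linalg.norm(Wq - Wm) / np.linalg.norm(Wm))
print("relative Frobenius error with 16 values:", round(rel, 3))

ranks = cents.argsort().argsort()            # rank of each centroid
new_vals = np.sort(rng.normal(cents.mean(), cents.std(), cents.size))
W_keep = new_vals[ranks][assign]             # new values, same rank order
W_scram = rng.permutation(cents)[assign]     # same values, scrambled ranks
\end{verbatim}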