Abstract: Tool-integrated reasoning (TIR) offers a direct way to extend thinking models beyond the limits of text-only reasoning. Paradoxically, we observe that tool-enabled evaluation can degrade reasoning performance even when strong thinking models make almost no actual tool calls. In this paper, we investigate how to inject natural tool-use behavior into a strong thinking model without sacrificing its no-tool reasoning ability, and present a comprehensive TIR recipe. We highlight that (i) the effectiveness of TIR supervised fine-tuning (SFT) hinges on the learnability of teacher trajectories, which should prioritize problems inherently suited for tool-augmented solutions; (ii) controlling the proportion of tool-use trajectories can mitigate catastrophic forgetting of text-only reasoning capacity; (iii) optimizing for pass@k and response length instead of training loss can maximize TIR SFT gains while preserving headroom for reinforcement learning (RL) exploration; (iv) a stable RL with verifiable rewards (RLVR) stage, built upon suitable SFT initialization and explicit safeguards against mode collapse, provides a simple yet remarkably effective solution. When applied to Qwen3 thinking models at 4B and 30B scales, our recipe yields models that achieve state-of-the-art performance among open-source models on a wide range of benchmarks, such as 96.7% and 99.2% on AIME 2025 for 4B and 30B, respectively.
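The pass@k criterion mentioned in point (iii) is conventionally computed with the standard unbiased estimator 1 - C(n-c, k)/C(n, k) over n samples of which c are correct; a minimal sketch (the function name and toy inputs are illustrative, not from the paper):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: the probability that at least one of
    k samples drawn without replacement from n generations (c of which
    are correct) is correct, i.e. 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # fewer incorrect samples than draws: some draw must be correct
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 2 samples, 1 correct, k=1 -> 0.5
p = pass_at_k(2, 1, 1)
```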
Abstract: Open-Set Object Detection (OSOD) is crucial for autonomous driving, where perception systems must recognize and localize both known and previously unseen objects in complex, dynamic environments. While recent approaches deliver promising results, they often require retraining the detector extensively to learn objectness, which describes the likelihood that a bounding box tightly encloses a valid object, regardless of whether its category was learned during training. Deviating from existing work, we hypothesize that standard off-the-shelf detectors may already contain helpful cues for objectness, owing to their training on numerous and diverse known categories. Building on this idea, we propose NAN-SPOT, a training-light framework that does not require retraining the base object detector and estimates objectness by leveraging a hidden-layer metric called Negative-Aware Norm (NAN), requiring only minutes of training on just hundreds of images. To support comprehensive evaluation, we introduce COCO-Open, an expanded version of the existing COCO-Mixed dataset, increasing unknown object annotations from 433 to 1853, making it the most exhaustively labeled dataset for OSOD to the best of our knowledge. Experimental results demonstrate that NAN-SPOT achieves even better performance on unknown object detection than methods requiring heavy training, without compromising performance on known objects. This efficiency and robustness make NAN-SPOT a promising step towards open-world perception in autonomous driving.
Abstract: In large antenna arrays, hardware power consumption becomes a dominant design constraint, making energy efficiency (EE) a first-class objective alongside spectral efficiency (SE). Microwave linear analog computer (MiLAC)-aided beamforming, whose front end is a passive reciprocal stream-to-antenna network, addresses this tension by reducing the active radio-frequency chain count to the stream number, at a moderate SE cost. Despite this promise, no EE optimization framework has been established for MiLAC-aided beamforming that accounts for digital-to-analog converter quantization noise and post-quantized transmit power. We fill this gap for downlink multiuser multiple-input single-output (MU-MISO) systems by formulating quantization-aware EE maximization over the MiLAC-feasible beamformer and characterizing the resulting SE-EE tradeoff. Three contributions follow. First, we prove a row-space optimality property of the effective MiLAC-aided beamformer, yielding an equivalent reduced-dimension reformulation whose complexity scales with the stream number rather than the antenna number. Second, we develop a low-complexity Dinkelbach-weighted minimum mean-square error algorithm aided by projected gradient descent that is guaranteed to converge to a stationary point. Third, we cast the SE-EE tradeoff as a multi-objective problem and trace its Pareto boundary via a weighted-sum method that combines an alternative reduced-dimension coordinate with auxiliary-variable successive convex approximation, yielding convex per-iteration subproblems with guaranteed convergence. Numerical results on a DeepMIMO v4 deployment show MiLAC-aided beamforming substantially improves EE over digital and hybrid benchmarks at a moderate SE cost and significantly expands the achievable SE-EE operating region.
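The Dinkelbach step at the core of such EE algorithms reduces a fractional objective f(x)/g(x) to a sequence of subtractive subproblems max f(x) - λ g(x). A minimal scalar sketch, with a toy log-rate over linear-power pair standing in for the actual MU-MISO objective (all names and the grid search are illustrative, not the paper's algorithm):

```python
import math

def dinkelbach(f, g, xs, tol=1e-9, max_iter=50):
    """Maximize f(x)/g(x) over candidate points xs (with g > 0) by
    Dinkelbach iterations: solve max_x f(x) - lam*g(x), then update
    lam to the achieved ratio, until the subtractive optimum hits 0."""
    lam = 0.0
    for _ in range(max_iter):
        x_star = max(xs, key=lambda x: f(x) - lam * g(x))
        gap = f(x_star) - lam * g(x_star)   # F(lam) >= 0, -> 0 at optimum
        lam = f(x_star) / g(x_star)
        if gap < tol:
            break
    return x_star, lam

# Toy EE-style objective: "rate" log(1+x) over "power" 1+x on [0, 10];
# the continuous optimum is x = e-1 with ratio 1/e.
xs = [i * 1e-3 for i in range(10001)]
x_opt, ee = dinkelbach(lambda x: math.log(1 + x), lambda x: 1 + x, xs)
```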
Abstract: Integrated Sensing and Communication (ISAC) systems require efficient beamforming architectures to jointly support communication and sensing functionalities. To reduce hardware overhead, Hybrid Beamforming (HBF) has been widely studied and shown to achieve performance close to fully digital beamforming under practical hardware constraints. As a promising evolution, Reconfigurable Antenna (RA) technologies have recently emerged to further enhance beamforming Degrees of Freedom (DoFs) by dynamically reconfiguring antenna Electromagnetic (EM) characteristics, yet their integration into ISAC systems remains largely unexplored. In this paper, we investigate an RA-assisted ISAC system and develop a decoupled Triple-Hybrid Beamforming (Tri-HBF) framework that alternately optimizes digital, analog, and EM beamformers to maximize the communication rate and sensing Signal-to-Clutter-plus-Noise Ratio (SCNR). For both Single-user Single-target (SUST) and Multiple-user Multiple-target (MUMT) scenarios, we first transform the original fractional objectives into fraction-free ones via methods tailored to their respective structures. The resulting problems are then solved via alternating optimization over different variable blocks. Closed-form updates are derived for all variables except the EM beamforming subproblem in the MUMT scenario. To further reduce the complexity introduced by Semidefinite Relaxation (SDR) in EM beamforming, we propose a low-complexity iterative approach across antennas with closed-form updates. Simulation results demonstrate that the proposed scheme significantly outperforms benchmark designs with conventional omnidirectional and directional antennas, achieving almost 100% improvement in spectrum efficiency and 62.5% reduction in antenna overhead, thereby unveiling the potential of RAs for future ISAC systems.
Abstract: Reconfigurable antennas (RAs) utilize the electromagnetic (EM) domain to provide dynamic control over antenna radiation patterns, which offers an effective way to enhance power efficiency in wireless links. Unlike conventional arrays with fixed element patterns, RAs enable on-demand beam-pattern synthesis by directly controlling each antenna's EM characteristics. While existing research on RAs has primarily focused on improving spectral efficiency, this paper explores their application to downlink localization. Moreover, the majority of existing works focus on far-field scenarios, with little attention to the near-field (NF). Motivated by these gaps, we consider a synthesis model in which each antenna generates desired beampatterns from a finite set of EM basis functions. We then formulate a joint optimization problem for the baseband (BB) and EM precoders with the objective of minimizing the user equipment (UE) position error bound (PEB) in NF conditions. Our analytical derivations and extensive simulation results demonstrate that the proposed hybrid precoder design for RAs significantly improves UE positioning accuracy compared to traditional non-reconfigurable arrays.
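The position error bound used as the objective here is conventionally derived from the Fisher information matrix of the position parameters as PEB = sqrt(tr(FIM^{-1})); a self-contained 2x2 sketch (the closed-form inverse is ours for illustration; the paper's FIM is problem-specific):

```python
def peb_2x2(fim):
    """Position error bound for a 2x2 Fisher information matrix
    over (x, y): PEB = sqrt(trace(FIM^{-1})), using the closed-form
    2x2 inverse  FIM^{-1} = [[d, -b], [-c, a]] / det."""
    (a, b), (c, d) = fim
    det = a * d - b * c
    inv_trace = (a + d) / det   # trace of the 2x2 inverse
    return inv_trace ** 0.5

# Isotropic information of 1 per axis -> PEB = sqrt(2)
peb_iso = peb_2x2([[1.0, 0.0], [0.0, 1.0]])
```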
Abstract: Unsupervised reinforcement learning with verifiable rewards (URLVR) offers a pathway to scale LLM training beyond the supervision bottleneck by deriving rewards without ground-truth labels. Recent works leverage model-intrinsic signals, showing promising early gains, yet their potential and limitations remain unclear. In this work, we revisit URLVR and provide a comprehensive analysis spanning taxonomy, theory, and extensive experiments. We first classify URLVR methods into intrinsic versus external based on reward sources, then establish a unified theoretical framework revealing that all intrinsic methods converge toward sharpening the model's initial distribution. This sharpening mechanism succeeds when initial confidence aligns with correctness but fails catastrophically when misaligned. Through systematic experiments, we show intrinsic rewards consistently follow a rise-then-fall pattern across methods, with collapse timing determined by the model prior rather than engineering choices. Despite these scaling limits, we find intrinsic rewards remain valuable in test-time training on small datasets, and propose Model Collapse Step to measure the model prior, serving as a practical indicator for RL trainability. Finally, we explore external reward methods that ground verification in computational asymmetries, showing preliminary evidence they may escape the confidence-correctness ceiling. Our findings chart boundaries for intrinsic URLVR while motivating paths toward scalable alternatives.
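The claimed sharpening of the model's initial distribution can be illustrated on a toy categorical distribution that is repeatedly reweighted by its own confidence; the exponent-based update below is our illustrative stand-in for an intrinsic reward, not the paper's exact mechanism:

```python
import math

def sharpen(p, alpha=0.5):
    """One self-reinforcement step: reweight each outcome by the
    model's own probability (p -> p^(1+alpha), renormalized), mimicking
    confidence-style intrinsic rewards that favor already-likely answers."""
    w = [pi ** (1 + alpha) for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Initial answer distribution: the mode (index 0) may or may not be correct.
hist = [[0.5, 0.3, 0.2]]
for _ in range(20):
    hist.append(sharpen(hist[-1]))
# Entropy falls and mass collapses onto the initial mode -- helpful only
# if that mode was the correct answer, matching the abstract's caveat.
```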
Abstract: Recent advances in generative AI have significantly enhanced the realism of multimodal media manipulation, thereby posing substantial challenges to manipulation detection. Existing manipulation detection and grounding approaches predominantly focus on manipulation type classification under result-oriented supervision, which not only lacks interpretability but also tends to overfit superficial artifacts. In this paper, we argue that generalizable detection requires incorporating explicit forensic reasoning, rather than merely classifying a limited set of manipulation types, which fails to generalize to unseen manipulation patterns. To this end, we propose REFORM, a reasoning-driven framework that shifts learning from outcome fitting to process modeling. REFORM adopts a three-stage curriculum that first induces forensic rationales, then aligns reasoning with final judgments, and finally refines logical consistency via reinforcement learning. To support this paradigm, we introduce ROM, a large-scale dataset with rich reasoning annotations. Extensive experiments show that REFORM establishes new state-of-the-art performance with superior generalization, achieving 81.52% ACC on ROM, 76.65% ACC on DGM4, and 74.9 F1 on MMFakeBench.
Abstract: The transition from symbolic manipulation to science-grade reasoning represents a pivotal frontier for Large Language Models (LLMs), with physics serving as the critical test anchor for binding abstract logic to physical reality. Physics demands that a model maintain physical consistency with the laws governing the universe, a task that fundamentally requires multimodal perception to ground abstract logic in reality. At the Olympiad level, diagrams are often constitutive rather than illustrative, containing essential constraints, such as boundary conditions and spatial symmetries, that are absent from the text. To bridge this visual-logical gap, we introduce P1-VL, a family of open-source vision-language models engineered for advanced scientific reasoning. Our method harmonizes Curriculum Reinforcement Learning, which employs progressive difficulty expansion to stabilize post-training, with Agentic Augmentation, enabling iterative self-verification at inference. Evaluated on HiPhO, a rigorous benchmark of 13 exams from 2024-2025, our flagship P1-VL-235B-A22B becomes the first open-source Vision-Language Model (VLM) to secure 12 gold medals and achieves state-of-the-art performance among open-source models. Our agent-augmented system achieves the No. 2 overall rank globally, trailing only Gemini-3-Pro. Beyond physics, P1-VL demonstrates remarkable scientific reasoning capacity and generalizability, establishing significant leads over base models on STEM benchmarks. By open-sourcing P1-VL, we provide a foundational step toward general-purpose physical intelligence that better aligns visual perception with abstract physical laws for machine scientific discovery.
Abstract: This paper presents an end-to-end deep learning framework for electromagnetically reconfigurable antenna (ERA)-aided user localization with active sensing, where ERAs provide additional electromagnetic reconfigurability to diversify the received measurements and enhance localization informativeness. To balance sensing flexibility and overhead, we adopt a two-timescale design: the digital combiner is updated at each stage, while the ERA patterns are reconfigured at each substage via a spherical-harmonic representation. The proposed mechanism integrates attention-based feature extraction and LSTM-based temporal learning, enabling the system to learn an optimized sensing strategy and progressively refine the user equipment (UE) position estimate from sequential observations. Simulation results show that the proposed approach consistently outperforms conventional digital beamforming-only and single-stage sensing baselines in terms of localization accuracy. These results highlight the effectiveness of ERA-enabled active sensing for user localization in future wireless systems.
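The two-timescale design can be sketched as a nested schedule in which the slow variable (digital combiner) changes once per stage and the fast variable (ERA pattern, i.e. spherical-harmonic coefficients) changes every substage; all names below are illustrative placeholders, since the actual updates are produced by the learned attention/LSTM network:

```python
def two_timescale_schedule(n_stages=3, n_substages=4):
    """Illustrative two-timescale schedule: the combiner W[s] is fixed
    within a stage (slow timescale), while the ERA coefficient vector
    c[s,t] is reconfigured at every substage (fast timescale)."""
    schedule = []
    for s in range(n_stages):
        combiner = f"W[{s}]"            # updated once per stage
        for t in range(n_substages):
            pattern = f"c[{s},{t}]"     # reconfigured every substage
            schedule.append((combiner, pattern))
    return schedule

sched = two_timescale_schedule()
```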
Abstract: Large Language Model (LLM)-powered Automatic Speech Recognition (ASR) systems achieve strong performance with limited resources by linking a frozen speech encoder to a pretrained LLM via a lightweight connector. Prior work trains a separate connector per language, overlooking linguistic relatedness. We propose an efficient and novel connector-sharing strategy based on linguistic family membership, enabling one connector per family, and empirically validate its effectiveness across two multilingual LLMs and two real-world corpora spanning curated and crowd-sourced speech. Our results show that family-based connectors reduce parameter count while improving generalization across domains, offering a practical and scalable strategy for multilingual ASR deployment.
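The family-based sharing rule amounts to keying connectors by language family rather than by language; a minimal registry sketch (the family map, language codes, and Connector class are illustrative placeholders, not the paper's implementation):

```python
# Hypothetical language -> family map; entries are illustrative only.
FAMILY = {"es": "romance", "fr": "romance", "it": "romance",
          "de": "germanic", "nl": "germanic", "hi": "indo-aryan"}

class Connector:
    """Stand-in for the lightweight trainable projection that links the
    frozen speech encoder to the pretrained LLM."""
    def __init__(self, family):
        self.family = family

connectors = {}

def get_connector(lang):
    """Return the single shared connector for lang's family, creating it
    on first use -- one connector per family instead of per language."""
    fam = FAMILY[lang]
    if fam not in connectors:
        connectors[fam] = Connector(fam)
    return connectors[fam]
```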