Abstract: Adaptive impedance matching between antennas and radio-frequency front-end modules is critical for maximizing power transmission efficiency in mobile communication systems. Conventional numerical and analytical methods struggle with a trade-off between accuracy and efficiency, while deep neural network (DNN)-based supervised learning approaches rely heavily on large labeled datasets and lack flexibility in dynamic environments. To address these limitations, this paper proposes a deep reinforcement learning (DRL)-based approach for adaptive impedance matching. First, we model the impedance tuning problem as an optimal control problem and prove the feasibility of solving for the optimal control law via reinforcement learning. We then design a DRL framework tailored to impedance tuning, which employs a compact state representation integrating key frequency characteristics and matching-quality metrics. The framework also incorporates a piecewise reward function that accounts for both matching accuracy and tuning speed. Furthermore, a test-phase exploration mechanism is introduced to enhance tuning stability, effectively reducing trapping in local optima and high-frequency tuning variance. Experimental results demonstrate that the proposed method outperforms conventional heuristic and gradient-based methods in tuning accuracy, efficiency, and stability, making it promising for practical impedance tuning systems.
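As a concrete illustration of the piecewise reward idea, the minimal Python sketch below combines a matching-accuracy term with a per-step speed penalty. The abstract does not give the exact formulation, so the use of the reflection-coefficient magnitude |Gamma| as the matching metric, the threshold, the terminal bonus, and the penalty weight are all illustrative assumptions.

```python
# Minimal sketch of a piecewise reward for DRL-based impedance tuning.
# The constants and the choice of |Gamma| (reflection-coefficient magnitude,
# in [0, 1]) as the matching-quality metric are assumptions for illustration.
GAMMA_MATCHED = 0.1   # assumed |Gamma| threshold for an acceptable match
STEP_PENALTY = 0.05   # assumed per-step cost that rewards fast tuning

def piecewise_reward(gamma_mag: float, prev_gamma_mag: float, step: int) -> float:
    """Reward for one tuning step, given current and previous |Gamma|."""
    if gamma_mag <= GAMMA_MATCHED:
        # Terminal branch: large bonus, discounted by how long tuning took.
        return 10.0 - STEP_PENALTY * step
    # Shaping branch: reward improvement in matching, penalize each extra step.
    return (prev_gamma_mag - gamma_mag) - STEP_PENALTY
```

The two branches capture the trade-off named in the abstract: the terminal branch rewards reaching a matched state quickly, while the shaping branch keeps the agent moving toward better matching on every intermediate step.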
Abstract: Multimodal retrieval systems typically employ Vision Language Models (VLMs) that encode images and text independently into vectors in a shared embedding space. Despite incorporating text encoders, VLMs consistently underperform specialized text models on text-only retrieval tasks. Moreover, introducing additional text encoders increases storage and inference overhead and exacerbates retrieval inefficiencies, especially in multilingual settings. To address these limitations, we propose a multi-task learning framework that unifies feature representations across images, long and short texts, and intent-rich queries. To our knowledge, this is the first work to jointly optimize multilingual image retrieval, text retrieval, and natural language understanding (NLU) tasks within a single framework. Our approach integrates image and text retrieval through a shared text encoder that is enhanced with NLU features to improve intent understanding and retrieval accuracy.
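A minimal sketch of how such a multi-task objective might be wired up is shown below, assuming an InfoNCE-style contrastive loss for both retrieval tasks and a cross-entropy intent loss for NLU. The symmetric InfoNCE form, the temperature, and the loss weights are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def info_nce(q: torch.Tensor, k: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over L2-normalized embedding batches (N x D)."""
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = q @ k.t() / tau
    labels = torch.arange(q.size(0), device=q.device)
    # Average both retrieval directions (query->key and key->query).
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

def multitask_loss(img_emb, cap_emb, query_emb, doc_emb, nlu_logits, nlu_labels,
                   w_img: float = 1.0, w_txt: float = 1.0, w_nlu: float = 0.5):
    """Weighted sum of image retrieval, text retrieval, and NLU losses."""
    loss_img = info_nce(img_emb, cap_emb)               # image <-> caption retrieval
    loss_txt = info_nce(query_emb, doc_emb)             # query <-> document retrieval
    loss_nlu = F.cross_entropy(nlu_logits, nlu_labels)  # intent classification
    return w_img * loss_img + w_txt * loss_txt + w_nlu * loss_nlu
```

The key design point mirrored here is that all text-side embeddings (captions, queries, documents) come from the one shared text encoder, so the NLU gradient shapes the same representation used for both retrieval tasks.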
Abstract: Instruction-based image editing has garnered significant attention due to its direct interaction with users. However, real-world user instructions are immensely diverse, and existing methods often fail to generalize to instructions outside their training domain, limiting their practical application. To address this challenge, we propose Lego-Edit, which leverages the generalization capability of a Multi-modal Large Language Model (MLLM) to orchestrate a suite of model-level editing tools. Lego-Edit incorporates two key designs: (1) a model-level toolkit comprising diverse models efficiently trained on limited data, together with several image manipulation functions, enabling fine-grained composition of editing actions by the MLLM; and (2) a three-stage progressive reinforcement learning approach that uses feedback on unannotated, open-domain instructions to train the MLLM, equipping it with generalized reasoning capabilities for handling real-world instructions. Experiments demonstrate that Lego-Edit achieves state-of-the-art performance on GEdit-Bench and ImgBench. It exhibits robust reasoning on open-domain instructions and can use newly introduced editing tools without additional fine-tuning. Code is available at https://github.com/xiaomi-research/lego-edit.
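To make the tool-orchestration idea concrete, the following hypothetical sketch shows a tool registry and a plan executor that runs a tool sequence an MLLM might emit. The tool names, plan schema, and stub behaviors are invented for illustration; the actual toolkit lives in the linked repository.

```python
from typing import Any, Callable, Dict, List

# Hypothetical registry mapping tool names to callables the MLLM can invoke.
TOOLS: Dict[str, Callable[..., Any]] = {}

def register_tool(name: str) -> Callable[[Callable], Callable]:
    """Decorator that adds a callable to the toolkit under `name`."""
    def wrap(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("segment")
def segment(ctx: Dict[str, Any], target: str) -> Any:
    # Stand-in for an open-vocabulary segmentation model returning a mask.
    return {"mask_for": target}

@register_tool("inpaint")
def inpaint(ctx: Dict[str, Any], prompt: str) -> Any:
    # Stand-in for an inpainting model; a real one would edit ctx["image"]
    # under the mask produced by an earlier "segment" step.
    _mask = ctx.get("mask")
    return ctx["image"]

def execute_plan(image: Any, plan: List[Dict[str, Any]]) -> Any:
    """Run a tool sequence emitted by the MLLM, threading results through ctx."""
    ctx: Dict[str, Any] = {"image": image}
    for step in plan:
        result = TOOLS[step["tool"]](ctx, **step.get("args", {}))
        ctx[step.get("out", "image")] = result
    return ctx["image"]

# Example plan an MLLM might emit for "remove the dog":
plan = [{"tool": "segment", "args": {"target": "dog"}, "out": "mask"},
        {"tool": "inpaint", "args": {"prompt": "background"}}]
```

Because new tools only need to be registered, not retrained into the policy, this composition style is consistent with the abstract's claim that newly introduced editing tools can be used without additional fine-tuning.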