Abstract: Continuous signal representations are naturally suited to inverse problems such as magnetic resonance imaging (MRI) and computed tomography, because the measurements depend on an underlying physically continuous signal. While classical methods rely on predefined analytical bases such as B-splines, implicit neural representations (INRs) have emerged as a powerful alternative that uses coordinate-based networks to parameterize continuous functions with implicitly defined bases. Despite their empirical success, direct comparisons of their intrinsic representation capabilities with those of conventional models remain limited. This preliminary empirical study compares a positional-encoded INR with a cubic B-spline model for continuous function fitting from sparse random samples, isolating the difference in representation capacity by using only coefficient-domain Tikhonov regularization. Results demonstrate that, under oracle hyperparameter selection, the INR achieves a lower normalized root-mean-squared error, yielding sharper edge transitions and fewer oscillatory artifacts than the oracle-tuned B-spline model. Additionally, we show that a practical bilevel optimization framework for INR hyperparameter selection, based on a measurement-data split, effectively approximates oracle performance. These findings empirically support the superior representation capacity of INRs for sparse data fitting.
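The B-spline baseline described above can be illustrated with a minimal sketch: fitting sparse random samples of a step-like signal with a cubic B-spline whose coefficients are estimated by Tikhonov-regularized least squares. All numbers here (sample count, knot count, regularization weight) are illustrative assumptions, not the paper's actual experimental settings; `BSpline.design_matrix` requires SciPy >= 1.8.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical setup: sparse random samples of a step-like signal with noise.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30))            # 30 sparse random sample locations
y = (x > 0.5).astype(float) + 0.01 * rng.standard_normal(30)

k = 3                                         # cubic B-spline
n_coef = 20                                   # number of basis coefficients
# Open uniform knot vector with repeated boundary knots on [0, 1].
t = np.concatenate([np.zeros(k), np.linspace(0, 1, n_coef - k + 1), np.ones(k)])

# Design matrix A[i, j] = B_j(x_i), evaluated at the sample locations.
A = BSpline.design_matrix(x, t, k).toarray()  # shape (30, n_coef)

# Coefficient-domain Tikhonov regularization:
#   c = argmin_c ||A c - y||^2 + lam * ||c||^2
lam = 1e-3
c = np.linalg.solve(A.T @ A + lam * np.eye(n_coef), A.T @ y)

# Evaluate the fitted continuous representation on a dense grid.
spline = BSpline(t, c, k)
xx = np.linspace(0, 1, 200)
yhat = spline(xx)
```

In the study's comparison, the INR side replaces the fixed B-spline basis with a positional-encoded coordinate network, while the same coefficient-domain (weight-domain) Tikhonov penalty isolates the representation-capacity difference.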
Abstract: Deep learning (DL) methods can reconstruct highly accelerated magnetic resonance imaging (MRI) scans, but they rely on large application-specific training datasets and often generalize poorly to out-of-distribution data. Self-supervised DL algorithms perform scan-specific reconstructions but still require complicated, acquisition-dependent hyperparameter tuning and often offer limited acceleration. This work develops a bilevel-optimized implicit neural representation (INR) approach for scan-specific MRI reconstruction. The method automatically optimizes the hyperparameters for a given acquisition protocol, enabling a tailored reconstruction without training data. The proposed algorithm uses Gaussian process regression to optimize the INR hyperparameters, accommodating various acquisitions. The INR comprises a trainable positional encoder for high-dimensional feature embedding and a small multilayer perceptron for decoding. The bilevel optimization is computationally efficient, requiring only a few minutes per typical 2D Cartesian scan. On scanner hardware, the subsequent scan-specific reconstruction with offline-optimized hyperparameters is completed within seconds and achieves improved image quality compared to previous model-based and self-supervised learning methods.
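The bilevel structure above can be sketched in miniature: an inner reconstruction problem is solved for candidate hyperparameters, and an outer Gaussian-process surrogate over held-out measurement error proposes the next candidate. The inner problem here is a toy ridge fit standing in for INR training, and the split sizes, candidate grid, and acquisition-function choice are all illustrative assumptions, not the paper's method details.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical stand-in: noisy 1D "measurements" of a smooth signal.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 60)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(60)

# Measurement-data split: fit subset for the inner solve, held-out subset
# for the outer hyperparameter objective.
fit_idx = rng.permutation(60)[:45]
val_idx = np.setdiff1d(np.arange(60), fit_idx)

def heldout_error(log_lam):
    # Inner problem: polynomial ridge regression as a placeholder for
    # INR training with regularization weight lam = 10**log_lam.
    A = np.vander(x, 12)
    Af, Av = A[fit_idx], A[val_idx]
    lam = 10.0 ** log_lam
    c = np.linalg.solve(Af.T @ Af + lam * np.eye(12), Af.T @ y[fit_idx])
    return np.sqrt(np.mean((Av @ c - y[val_idx]) ** 2))

# Outer problem: GP regression of held-out error over log10(lam); the next
# candidate minimizes a simple lower-confidence-bound acquisition.
cand = np.linspace(-8, 2, 101)[:, None]
tried = list(np.linspace(-8, 2, 5))            # initial design points
errs = [heldout_error(v) for v in tried]
for _ in range(10):
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(tried)[:, None], errs)
    mu, sd = gp.predict(cand, return_std=True)
    nxt = float(cand[np.argmin(mu - sd)])
    tried.append(nxt)
    errs.append(heldout_error(nxt))

best_log_lam = tried[int(np.argmin(errs))]
```

In deployment, this outer loop runs offline per acquisition protocol; each scan then only pays for a single inner solve with the pre-optimized hyperparameters, which is why the scan-specific reconstruction finishes within seconds.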




Abstract: The realization of universal robots is an ultimate goal of robotics research. A key hurdle to achieving this goal lies in robots' ability to manipulate objects in their unstructured surrounding environments according to different tasks. Learning-based approaches are considered an effective way to address generalization. The impressive performance of foundation models in computer vision and natural language processing suggests that embedding foundation models into manipulation tasks is a viable path toward general manipulation capability. However, we believe that achieving general manipulation capability requires an overarching framework akin to that used in autonomous driving. This framework should encompass multiple functional modules, with different foundation models assuming distinct roles in facilitating general manipulation capability. This survey focuses on the contributions of foundation models to robot learning for manipulation. We propose a comprehensive framework and detail how foundation models can address challenges in each module of the framework. Moreover, we examine current approaches, outline challenges, suggest future research directions, and identify potential risks associated with integrating foundation models into this domain.