Aerial vehicles equipped with manipulators can serve contact-based industrial applications, where fundamental tasks like drilling and grinding often require aerial platforms to handle heavy tools. Industrial environments also frequently involve non-horizontal surfaces. Existing aerial manipulation platforms based on multirotors typically feature a fixed center of mass (CoM) within the rotor-defined area, leading to a considerable moment arm between the end-effector (EE) tip and the CoM for operations on such surfaces. Carrying heavy tools at the EE tip with an extended moment arm can destabilize the system and damage the manipulator's servo actuators. To tackle this issue, we present a novel aerial vehicle tailored to handling heavy tools on non-horizontal surfaces. In this work, we provide the platform's system design, modeling, and control strategies. The platform carries the heavy manipulator within the rotor-defined area during free flight. During interactions, the manipulator shifts toward the work surface, outside the rotor-defined area, displacing the CoM and yielding a significantly shorter moment arm. Furthermore, we propose a method for automatically determining the manipulator position that maximizes the CoM displacement toward the work surface. Our proposed concepts are validated through simulations that closely capture the developed physical prototype of the platform.
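To make the moment-arm argument concrete, the following minimal sketch (not the authors' controller or optimization method) computes the system CoM for a planar two-body model and shows how shifting the manipulator mass toward the work surface shortens the EE-tip-to-CoM moment arm; all masses and positions are hypothetical placeholders.

```python
import numpy as np

def system_com(masses, positions):
    """CoM as the mass-weighted average of component positions (kg, m)."""
    masses = np.asarray(masses, dtype=float)
    positions = np.asarray(positions, dtype=float)
    return (masses[:, None] * positions).sum(axis=0) / masses.sum()

def moment_arm(ee_tip, airframe_mass, manip_mass, d):
    """Distance from the EE tip to the CoM with the manipulator shifted to x = d."""
    com = system_com([airframe_mass, manip_mass],
                     [[0.0, 0.0], [d, 0.0]])  # airframe at origin
    return np.linalg.norm(np.asarray(ee_tip) - com)

# Shifting the manipulator from inside the rotor-defined area (d = 0.0 m)
# toward the work surface (d = 0.4 m) shortens the moment arm.
print(moment_arm([0.6, 0.0], 3.0, 1.5, 0.0))  # long arm during free flight
print(moment_arm([0.6, 0.0], 3.0, 1.5, 0.4))  # shorter arm during contact
```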
As mobile communication systems evolve toward the 6th generation (6G), the Internet of Everything (IoE) is becoming a reality, connecting humans, big data, and intelligent machines to support intelligent decision-making and reshape traditional industries and human life. IoE applications require not only pure communication capability but also high-accuracy, large-scale sensing capability. With the emerging integrated sensing and communication (ISAC) technique, exploiting the mobile communication system's multi-domain resources, multiple network elements, and large-scale infrastructure to realize cooperative sensing is a crucial approach to satisfying the high-accuracy and large-scale sensing requirements of IoE. In this article, we investigate deep cooperation in ISAC systems from three perspectives. In the microscopic perspective, i.e., within a single node, resource-level cooperation improves sensing accuracy by fusing the sensing information carried in time-frequency-space-code multi-domain resources. In the mesoscopic perspective, sensing accuracy is improved through the cooperation of multiple nodes, including Base Stations (BSs), User Equipment (UE), and Reconfigurable Intelligent Surfaces (RISs). In the macroscopic perspective, massive numbers of infrastructures from the same or different operators can perform cooperative sensing to extend sensing coverage and improve sensing continuity. This article aims to provide a deep and comprehensive view of cooperative sensing in ISAC systems to enhance sensing performance and support the applications of IoE.
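As a toy illustration of the mesoscopic (multi-node) idea, the sketch below fuses independent position estimates from several sensing nodes by inverse-variance weighting; the node accuracies and target position are made-up values, and the article's actual fusion schemes are not reproduced here.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance-weighted fusion of per-node position estimates."""
    w = 1.0 / np.asarray(variances, dtype=float)      # per-node weights
    est = np.asarray(estimates, dtype=float)
    fused = (w[:, None] * est).sum(axis=0) / w.sum()  # fused position
    fused_var = 1.0 / w.sum()                          # fused variance
    return fused, fused_var

# Three nodes (e.g., BSs) observe the same target with different accuracies;
# the fused estimate has lower variance than any single node's estimate.
est, var = fuse_estimates([[10.2, 5.1], [9.8, 4.9], [10.5, 5.3]],
                          [0.5, 0.2, 1.0])
print(est, var)
```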
Event cameras record scene dynamics with high temporal resolution, providing rich scene details for monocular depth estimation (MDE) even under low illumination. Existing complementary learning approaches for MDE therefore fuse intensity information from images with scene details from event data for better scene understanding. However, most methods fuse the two modalities directly at the pixel level, ignoring that the attractive complementarity mainly concerns high-level patterns that occupy only a few pixels; for example, event data is likely to complement the contours of scene objects. In this paper, we discretize the scene into a set of high-level patterns to exploit this complementarity and propose a Pattern-based Complementary learning architecture for monocular Depth estimation (PCDepth). Concretely, PCDepth comprises two primary components: a complementary visual representation learning module that discretizes the scene into high-level patterns and integrates complementary patterns across modalities, and a refined depth estimator that reconstructs the scene and predicts depth while maintaining an efficiency-accuracy balance. Through pattern-based complementary learning, PCDepth fully exploits the two modalities and achieves more accurate predictions than existing methods, especially in challenging nighttime scenarios. Extensive experiments on the MVSEC and DSEC datasets verify the effectiveness and superiority of PCDepth. Remarkably, PCDepth improves accuracy by 37.9% over the state of the art in MVSEC nighttime scenarios.
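The following PyTorch sketch illustrates the general idea of pattern-level (token-level) fusion rather than pixel-level fusion; it is not PCDepth's actual architecture, and the learnable pattern queries, gating scheme, and all dimensions are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class PatternFusion(nn.Module):
    """Discretize each modality into pattern tokens, then fuse per pattern."""
    def __init__(self, dim=128, num_patterns=64):
        super().__init__()
        # Learnable queries that pool features into high-level patterns.
        self.queries = nn.Parameter(torch.randn(num_patterns, dim))
        self.attn_img = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.attn_evt = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, img_tokens, evt_tokens):
        # img_tokens, evt_tokens: (B, N, dim) features from the two modalities.
        q = self.queries.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        p_img, _ = self.attn_img(q, img_tokens, img_tokens)  # image patterns
        p_evt, _ = self.attn_evt(q, evt_tokens, evt_tokens)  # event patterns
        # Gated per-pattern fusion: events can dominate e.g. contour patterns.
        g = torch.sigmoid(self.gate(torch.cat([p_img, p_evt], dim=-1)))
        return g * p_img + (1 - g) * p_evt

fusion = PatternFusion()
out = fusion(torch.randn(2, 196, 128), torch.randn(2, 196, 128))
print(out.shape)  # torch.Size([2, 64, 128]) -- one fused token per pattern
```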
In recent years, image editing has advanced remarkably. With increased human control, it is now possible to edit an image in a plethora of ways: from specifying in text what we want to change, to directly dragging the contents of the image in an interactive point-based manner. However, most of the focus has remained on editing single images at a time, and whether and how we can simultaneously edit large batches of images has remained understudied. With the goal of minimizing human supervision in the editing process, this paper presents a novel method for interactive batch image editing using StyleGAN as the medium. Given an edit specified by the user on an example image (e.g., making the face frontal), our method automatically transfers that edit to other test images so that, regardless of their initial state (pose), they all arrive at the same final state (e.g., all facing front). Extensive experiments demonstrate that edits performed with our method match the visual quality of existing single-image editing methods while offering greater visual consistency and saving significant time and human effort.
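To clarify the setting, here is a minimal sketch of the naive baseline: take the latent difference of one user-edited example and add it to a batch of StyleGAN latents. This illustrates the problem setup only; a constant offset ignores each image's initial state, which is precisely what the paper's method addresses. Shapes are hypothetical (W+ space latents).

```python
import torch

def transfer_edit(w_batch, w_example, w_example_edited):
    """Apply one example's edit direction to every latent in the batch."""
    direction = w_example_edited - w_example  # edit direction in W+ space
    return w_batch + direction                # naive constant-offset transfer

# Hypothetical shapes: 8 test images, 18 style layers, 512-dim latents.
w_batch = torch.randn(8, 18, 512)
w_src, w_edit = torch.randn(18, 512), torch.randn(18, 512)
edited_batch = transfer_edit(w_batch, w_src, w_edit)
print(edited_batch.shape)  # torch.Size([8, 18, 512])
```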
Pneumatic soft robots are typically fabricated by molding, a manual process that requires skilled labor. Additive manufacturing has the potential to overcome this limitation and speed up fabrication, but it struggles to produce consistently high-quality prints. We propose a low-cost approach that improves the print quality of desktop fused deposition modeling (FDM) by adding a webcam to the printer to monitor the printing process and to detect and correct defects such as holes or gaps. We demonstrate that our approach improves the air-tightness of printed pneumatic actuators without fine-tuning printing parameters, presenting a new option for robustly fabricating airtight soft robotic actuators.
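As a hedged illustration of webcam-based defect detection (not the paper's pipeline), the OpenCV sketch below flags candidate holes or gaps by thresholding dark regions inside a hypothetical region of interest of a layer snapshot; the file name, ROI, and threshold are assumptions.

```python
import cv2

def detect_defects(frame_bgr, roi, min_area=40):
    """Return bounding boxes of dark blobs (candidate holes) inside the ROI."""
    x, y, w, h = roi                                   # print-bed ROI (pixels)
    gray = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    # Dark pixels within the extruded layer suggest missing material.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

frame = cv2.imread("layer_snapshot.jpg")  # hypothetical webcam capture
defects = detect_defects(frame, roi=(100, 80, 400, 300))
if defects:
    print(f"{len(defects)} defect(s) found; trigger a corrective pass")
```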
The fabrication and assembly of fluidic circuits for soft robots currently rely heavily on manual processes; as the complexity of fluidic circuits increases, manual assembly becomes increasingly arduous, error-prone, and time-consuming. We introduce a software tool that automatically generates printable fluidic networks, together with a library of fluidic logic elements that are easily 3D printed from thermoplastic polyurethanes using fused deposition modeling (FDM) alone. Our software tool and component library allow the development of arbitrary soft digital circuits; we demonstrate a variable-frequency ring oscillator and a full adder. The simplicity of our approach, requiring only FDM printers, democratizes fluidic circuit implementation beyond specialized laboratories. Our software is available on GitHub (https://github.com/roboticmaterialsgroup/FluidLogic).
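A fluidic full adder realizes the same Boolean function as its electronic counterpart. The sketch below is an illustration only (not the design tool): it verifies the truth table the printed pneumatic logic elements must implement.

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry-out)."""
    s = a ^ b ^ cin                          # sum bit (XOR of all inputs)
    cout = (a & b) | (a & cin) | (b & cin)   # carry-out (majority function)
    return s, cout

# Enumerate the full truth table the fluidic circuit must reproduce.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```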
While existing large vision-language multimodal models focus on whole-image understanding, there is a prominent gap in region-specific comprehension. Current approaches that use textual coordinates or spatial encodings often fail to provide a user-friendly interface for visual prompting. To address this challenge, we introduce a novel multimodal model capable of decoding arbitrary visual prompts, allowing users to intuitively mark up images and interact with the model using natural cues like a "red bounding box" or a "pointed arrow". Our simple design directly overlays visual markers onto the RGB image, eliminating the need for complex region encodings, yet achieves state-of-the-art performance on region-understanding tasks such as Visual7W, PointQA, and the Visual Commonsense Reasoning benchmark. Furthermore, we present ViP-Bench, a comprehensive benchmark for assessing models' ability to understand visual prompts across multiple dimensions, enabling future research in this domain. Code, data, and model are publicly available.
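The "overlay markers onto the RGB image" design can be illustrated with a few lines of Pillow: the region prompt is drawn directly on the pixels before the image is passed to the model, rather than encoded as coordinates. The file name, box, and arrow geometry below are hypothetical.

```python
from PIL import Image, ImageDraw

img = Image.open("example.jpg").convert("RGB")  # hypothetical input image
draw = ImageDraw.Draw(img)

# Draw a red bounding box around the region of interest.
draw.rectangle([120, 80, 260, 220], outline=(255, 0, 0), width=4)

# Draw a pointed arrow toward the same region (shaft plus triangular head).
draw.line([60, 300, 130, 230], fill=(255, 0, 0), width=4)
draw.polygon([(130, 230), (112, 238), (122, 248)], fill=(255, 0, 0))

img.save("example_prompted.jpg")  # the model consumes this image as-is
```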
LLaVA-Plus is a general-purpose multimodal assistant that expands the capabilities of large multimodal models. It maintains a skill repository of pre-trained vision and vision-language models and can activate relevant tools based on users' inputs to fulfill real-world tasks. LLaVA-Plus is trained on multimodal instruction-following data to acquire the ability to use tools, covering visual understanding, generation, external knowledge retrieval, and compositions thereof. Empirical results show that LLaVA-Plus outperforms LLaVA on existing capabilities and exhibits new ones. It is distinct in that the image query is directly grounded and actively engaged throughout entire human-AI interaction sessions, significantly improving tool-use performance and enabling new scenarios.
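A skill repository with tool dispatch can be pictured as in the sketch below; this is a generic illustration, not LLaVA-Plus's implementation, and the tool names, placeholder functions, and call format are all assumptions.

```python
# Placeholder vision tools standing in for pre-trained models.
def detect(image, query): ...
def segment(image, query): ...
def retrieve(image, query): ...

# Skill repository: tool name -> callable model.
SKILLS = {"detection": detect, "segmentation": segment, "retrieval": retrieve}

def execute_tool_call(tool_call, image):
    """Dispatch a model-emitted call like {'tool': 'detection', 'query': 'dog'}."""
    skill = SKILLS[tool_call["tool"]]          # look up the requested skill
    return skill(image, tool_call.get("query", ""))
```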
Large multimodal models (LMMs) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response-formatting prompts, we establish stronger baselines that achieve state-of-the-art results across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available training samples and completes full training in roughly one day on a single 8-A100 node. We hope this makes state-of-the-art LMM research more accessible. Code and model will be publicly available.
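The connector change the note describes amounts to replacing a single linear projection with a small MLP between vision features and the LLM embedding space. The sketch below is illustrative: the 1024-dim CLIP-ViT-L feature width is standard, but the LLM hidden size (5120 here, matching a 13B LLaMA) and the exact layer layout are assumptions.

```python
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Two-layer MLP mapping vision patch features to LLM token embeddings."""
    def __init__(self, vision_dim=1024, llm_dim=5120):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features):
        # (B, num_patches, vision_dim) -> (B, num_patches, llm_dim)
        return self.proj(patch_features)

# A 336px image with 14px patches yields a 24x24 = 576-token grid.
tokens = MLPProjector()(torch.randn(1, 576, 1024))
print(tokens.shape)  # torch.Size([1, 576, 5120])
```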
As a promising key technology for 6th-generation (6G) mobile communication systems, integrated sensing and communication (ISAC) aims to make full use of spectrum resources by functionally integrating communication and sensing. Because licensed frequency bands are crowded, ISAC-enabled mobile communication systems regularly operate over non-continuous spectrum bands. However, conventional sensing algorithms over non-continuous spectrum bands suffer from a reduced peak-to-sidelobe ratio (PSR) and degraded anti-noise performance. To address this challenge, we propose a high-precision ISAC signal-processing algorithm based on compressed sensing (CS). By integrating the resource block group (RBG) configuration information of 5th-generation new radio (5G NR) with channel information matrices, we dynamically and accurately obtain power estimation spectra. Moreover, we employ the fast iterative shrinkage-thresholding algorithm (FISTA) to solve the reconstruction problem and use K-fold cross-validation (KCV) to select optimal parameters. Simulation results show that, compared with conventional sensing algorithms, the proposed algorithm achieves lower or even zero sidelobes and strong anti-noise performance.
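FISTA itself is standard; a minimal sketch for the LASSO form min_x 0.5*||Ax - y||^2 + lam*||x||_1 is given below. It illustrates the algorithm the paper employs, but the paper's RBG-based measurement model and KCV-tuned parameters are not reproduced; the toy problem is hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov momentum step
        x, t = x_new, t_new
    return x

# Toy example: recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -0.8, 0.5]
x_hat = fista(A, A @ x_true, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # recovered support: [5, 50, 120]
```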