
Tianrun Chen

xLSTM-UNet can be an Effective 2D & 3D Medical Image Segmentation Backbone with Vision-LSTM (ViL) better than its Mamba Counterpart

Jul 02, 2024

Reasoning3D -- Grounding and Reasoning in 3D: Fine-Grained Zero-Shot Open-Vocabulary 3D Reasoning Part Segmentation via Large Vision-Language Models

May 29, 2024

MaPa: Text-driven Photorealistic Material Painting for 3D Shapes

Apr 26, 2024

IBD: Alleviating Hallucinations in Large Vision-Language Models via Image-Biased Decoding

Feb 28, 2024

RESMatch: Referring Expression Segmentation in a Semi-Supervised Manner

Feb 11, 2024

LLaFS: When Large-Language Models Meet Few-Shot Segmentation

Dec 05, 2023

Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches

Sep 22, 2023

PanopticNeRF-360: Panoramic 3D-to-2D Label Transfer in Urban Scenes

Sep 19, 2023

Learning Gabor Texture Features for Fine-Grained Recognition

Aug 10, 2023

Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields

Jul 24, 2023