
Lin Wang

Revisit Event Generation Model: Self-Supervised Learning of Event-to-Video Reconstruction with Implicit Neural Representations

Jul 26, 2024

Learning Modality-agnostic Representation for Semantic Segmentation from Any Modalities

Jul 16, 2024

Centering the Value of Every Modality: Towards Efficient and Resilient Modality-agnostic Semantic Segmentation

Jul 16, 2024

LaSe-E2V: Towards Language-guided Semantic-Aware Event-to-Video Reconstruction

Jul 08, 2024

Smart Sampling: Helping from Friendly Neighbors for Decentralized Federated Learning

Jul 05, 2024

EIT-1M: One Million EEG-Image-Text Pairs for Human Visual-textual Recognition and More

Jul 02, 2024

CLIP the Divergence: Language-guided Unsupervised Domain Adaptation

Jul 01, 2024

eMoE-Tracker: Environmental MoE-based Transformer for Robust Event-guided Object Tracking

Jun 28, 2024

BiCo-Fusion: Bidirectional Complementary LiDAR-Camera Fusion for Semantic- and Spatial-Aware 3D Object Detection

Jun 27, 2024

Any360D: Towards 360 Depth Anything with Unlabeled 360 Data and Möbius Spatial Augmentation

Jun 19, 2024