
Fei Sun


Generative AI Beyond LLMs: System Implications of Multi-Modal Generation

Dec 22, 2023

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Dec 01, 2023

TEA: Test-time Energy Adaptation

Nov 24, 2023

Robust Recommender System: A Survey and Future Directions

Sep 05, 2023

A Large Language Model Enhanced Conversational Recommender System

Aug 11, 2023

Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts

Jun 08, 2023

PDE+: Enhancing Generalization via PDE with Adaptive Distributional Diffusion

May 25, 2023

Rethinking GNN-based Entity Alignment on Heterogeneous Knowledge Graphs: New Datasets and A New Method

Apr 10, 2023

LegoNet: A Fast and Exact Unlearning Architecture

Oct 28, 2022

MILAN: Masked Image Pretraining on Language Assisted Representation

Aug 15, 2022