
Hongsheng Li

Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models

Feb 22, 2024

Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset

Feb 22, 2024

SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models

Feb 08, 2024

AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning

Feb 01, 2024

Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling

Jan 31, 2024

NODI: Out-Of-Distribution Detection with Noise from Diffusion

Jan 18, 2024

MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer

Jan 18, 2024

Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications

Jan 11, 2024

The two-way knowledge interaction interface between humans and neural networks

Jan 10, 2024

Flowmind2Digital: The First Comprehensive Flowmind Recognition and Conversion Approach

Jan 08, 2024