Xingwu Sun

Mitigating Hallucination in Multimodal Large Language Model via Hallucination-targeted Direct Preference Optimization

Nov 15, 2024

More Expressive Attention with Negative Weights

Nov 14, 2024

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

Nov 05, 2024

Exploring Forgetting in Large Language Model Pre-Training

Oct 22, 2024

Continuous Speech Tokenizer in Text To Speech

Oct 22, 2024

Lossless KV Cache Compression to 2%

Oct 20, 2024

RosePO: Aligning LLM-based Recommenders with Human Values

Oct 16, 2024

Magnifier Prompt: Tackling Multimodal Hallucination via Extremely Simple Instructions

Oct 15, 2024

Language Models "Grok" to Copy

Sep 14, 2024

Negative Sampling in Recommendation: A Survey and Future Directions

Sep 11, 2024