Ang Li

Inertial Confinement Fusion Forecasting via LLMs
Jul 15, 2024

SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
Jul 01, 2024

Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents
Jul 01, 2024

On Scaling Up 3D Gaussian Splatting Training
Jun 26, 2024

Forget but Recall: Incremental Latent Rectification in Continual Learning
Jun 25, 2024

What Matters in Transformers? Not All Attention is Needed
Jun 22, 2024

PID: Prompt-Independent Data Protection Against Latent Diffusion Models
Jun 14, 2024

Demystifying the Compression of Mixture-of-Experts Through a Unified Framework
Jun 04, 2024

A New Solution for MU-MISO Symbol-Level Precoding: Extrapolation and Deep Unfolding
May 26, 2024

Causality in the Can: Diet Coke's Impact on Fatness
May 17, 2024