
Ling Liu

Language-Vision Planner and Executor for Text-to-Visual Reasoning
Jun 09, 2025

Residual Cross-Attention Transformer-Based Multi-User CSI Feedback with Deep Joint Source-Channel Coding
May 26, 2025

MolLangBench: A Comprehensive Benchmark for Language-Prompted Molecular Structure Recognition, Editing, and Generation
May 21, 2025

Adverseness vs. Equilibrium: Exploring Graph Adversarial Resilience through Dynamic Equilibrium
May 20, 2025

Multi-Agent Reinforcement Learning with Focal Diversity Optimization
Feb 06, 2025

Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation
Jan 29, 2025

From Intention To Implementation: Automating Biomedical Research via LLMs
Dec 12, 2024

$H^3$Fusion: Helpful, Harmless, Honest Fusion of Aligned LLMs
Nov 26, 2024

Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models
Oct 11, 2024

LLM-TOPLA: Efficient LLM Ensemble by Maximising Diversity
Oct 04, 2024