Ronghua Li

Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?

May 19, 2025

Rethinking Graph Out-Of-Distribution Generalization: A Learnable Random Walk Perspective

May 09, 2025

Rethinking Graph Structure Learning in the Era of LLMs

Mar 27, 2025

Alignment-Aware Model Extraction Attacks on Large Language Models

Sep 04, 2024