Tianyu Pang

Imperceptible Jailbreaking against Large Language Models

Oct 06, 2025

Language Models Can Learn from Verbal Feedback Without Scalar Rewards

Sep 26, 2025

Variational Reasoning for Language Models

Sep 26, 2025

Why LLM Safety Guardrails Collapse After Fine-tuning: A Similarity Analysis Between Alignment and Fine-tuning Datasets

Jun 05, 2025

Fostering Video Reasoning via Next-Event Prediction

May 28, 2025

Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment

May 27, 2025

Reinforcing General Reasoning without Verifiers

May 27, 2025

Lifelong Safety Alignment for Language Models

May 26, 2025

QuickVideo: Real-Time Long Video Understanding with System Algorithm Co-Design

May 22, 2025

BanditSpec: Adaptive Speculative Decoding via Bandit Algorithms

May 21, 2025