
Shenghua Liu

A1: Steep Test-time Scaling Law via Environment Augmented Generation

Apr 20, 2025

Innate Reasoning is Not Enough: In-Context Learning Enhances Reasoning Large Language Models with Less Overthinking

Mar 25, 2025

Parameters vs. Context: Fine-Grained Control of Knowledge Reliance in Language Models

Mar 20, 2025

Context-DPO: Aligning Language Models for Context-Faithfulness

Dec 18, 2024

HiddenGuard: Fine-Grained Safe Generation with Specialized Representation Router

Oct 03, 2024

StruEdit: Structured Outputs Enable the Fast and Accurate Knowledge Editing for Large Language Models

Sep 16, 2024

Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities

Jun 18, 2024

"Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak

Jun 17, 2024

Decoding by Contrasting Knowledge: Enhancing LLMs' Confidence on Edited Facts

May 21, 2024

Is Factuality Decoding a Free Lunch for LLMs? Evaluation on Knowledge Editing Benchmark

Mar 30, 2024