
Sinong Wang


Boosting LLM Reasoning via Spontaneous Self-Correction

Jun 07, 2025

High Accuracy, Less Talk (HALT): Reliable LLMs through Capability-Aligned Finetuning

Jun 04, 2025

Reinforcement Learning from User Feedback

May 20, 2025

Learning Auxiliary Tasks Improves Reference-Free Hallucination Detection in Open-Domain Long-Form Generation

May 18, 2025

Think Smarter not Harder: Adaptive Reasoning with Inference Aware Optimization

Jan 31, 2025

Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback

Jan 18, 2025

Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment

Jan 16, 2025

Improving Model Factuality with Fine-grained Critique-based Evaluator

Oct 24, 2024

Multi-IF: Benchmarking LLMs on Multi-Turn and Multilingual Instructions Following

Oct 21, 2024

Preference Optimization with Multi-Sample Comparisons

Oct 16, 2024