
Zhanke Zhou

From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium

Jun 09, 2025

From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information?

Jun 09, 2025

SATBench: Benchmarking LLMs' Logical Reasoning via Automated Puzzle Generation from SAT Formulas

May 20, 2025

Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models

Mar 28, 2025

Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond

Feb 26, 2025

Noisy Test-Time Adaptation in Vision-Language Models

Feb 20, 2025

Eliciting Causal Abilities in Large Language Models for Reasoning Tasks

Dec 19, 2024

Physics Reasoner: Knowledge-Augmented Reasoning for Solving Physics Problems with Large Language Models

Dec 18, 2024

Model Inversion Attacks: A Survey of Approaches and Countermeasures

Nov 15, 2024

Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?

Oct 31, 2024