
Junxian He

On the Universal Truthfulness Hyperplane Inside LLMs

Jul 11, 2024

Belief Revision: The Adaptability of Large Language Models Reasoning

Jun 28, 2024

IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce

Jun 14, 2024

Compression Represents Intelligence Linearly

Apr 15, 2024

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation

Mar 12, 2024

Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models

Feb 05, 2024

AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents

Jan 24, 2024

GeoGalactica: A Scientific Large Language Model in Geoscience

Dec 31, 2023

A Survey of Reasoning with Foundation Models

Dec 26, 2023

What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning

Dec 25, 2023