
Liangming Pan

Perils of Self-Feedback: Self-Bias Amplifies in Large Language Models

Feb 18, 2024

Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation

Feb 05, 2024

Tweets to Citations: Unveiling the Impact of Social Media Influencers on AI Research Visibility

Jan 24, 2024

Efficient Online Data Mixing For Language Model Pre-Training

Dec 05, 2023

Factcheck-GPT: End-to-End Fine-Grained Document-Level Fact-Checking and Correction of LLM Output

Nov 16, 2023

A Survey on Detection of LLMs-Generated Content

Oct 24, 2023

MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models

Oct 19, 2023

QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking

Oct 11, 2023

Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution

Oct 09, 2023

FOLLOWUPQG: Towards Information-Seeking Follow-up Question Generation

Sep 19, 2023