Ning Bian

Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models?

Feb 22, 2024
Ning Bian, Xianpei Han, Hongyu Lin, Yaojie Lu, Ben He, Le Sun

A Drop of Ink may Make a Million Think: The Spread of False Information in Large Language Models

May 08, 2023
Ning Bian, Peilin Liu, Xianpei Han, Hongyu Lin, Yaojie Lu, Ben He, Le Sun

ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models

Mar 29, 2023
Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, Ben He

Bridging the Gap between Language Model and Reading Comprehension: Unsupervised MRC via Self-Supervision

Jul 19, 2021
Ning Bian, Xianpei Han, Bo Chen, Hongyu Lin, Ben He, Le Sun

Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation

Jan 05, 2021
Ning Bian, Xianpei Han, Bo Chen, Le Sun

From Bag of Sentences to Document: Distantly Supervised Relation Extraction via Machine Reading Comprehension

Dec 09, 2020
Lingyong Yan, Xianpei Han, Le Sun, Fangchao Liu, Ning Bian
