Mrinmaya Sachan

Book2Dial: Generating Teacher-Student Interactions from Textbooks for Cost-Effective Development of Educational Chatbots

Mar 05, 2024

Scaling the Authoring of AutoTutors with Large Language Models

Feb 27, 2024

Calibrating Large Language Models with Sample Consistency

Feb 21, 2024

Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals

Feb 18, 2024

AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators

Feb 16, 2024

Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners?

Jan 31, 2024

CLadder: A Benchmark to Assess Causal Reasoning Capabilities of Language Models

Dec 07, 2023

RELIC: Investigating Large Language Model Responses using Self-Consistency

Nov 28, 2023

Navigating the Ocean of Biases: Political Bias Attribution in Language Models via Causal Structures

Nov 15, 2023

The ART of LLM Refinement: Ask, Refine, and Trust

Nov 14, 2023