
Shimei Pan

Columbia University

You've Changed: Detecting Modification of Black-Box Large Language Models

Apr 14, 2025

LLM-based Corroborating and Refuting Evidence Retrieval for Scientific Claim Verification

Mar 11, 2025

GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models

Jun 20, 2024

RAGged Edges: The Double-Edged Sword of Retrieval-Augmented Chatbots

Mar 13, 2024

Teach me with a Whisper: Enhancing Large Language Models for Analyzing Spoken Transcripts using Speech Embeddings

Nov 13, 2023

Trapping LLM Hallucinations Using Tagged Context Prompts

Jun 09, 2023

The Role of Interactive Visualization in Explaining (Large) NLP Models: from Data to Inference

Jan 11, 2023

Fair Inference for Discrete Latent Variable Models

Sep 15, 2022

Tell Me Something That Will Help Me Trust You: A Survey of Trust Calibration in Human-Agent Interaction

May 06, 2022

Bias: Friend or Foe? User Acceptance of Gender Stereotypes in Automated Career Recommendations

Jun 13, 2021