
Eric Xing

Carnegie Mellon University

What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions

May 22, 2024

Efficient Test-Time Adaptation of Vision-Language Models

Mar 27, 2024

FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization

Mar 11, 2024

Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems

Feb 27, 2024

Confidence Matters: Revisiting Intrinsic Self-Correction Capabilities of Large Language Models

Feb 27, 2024

Counterfactual Generation with Identifiability Guarantees

Feb 23, 2024

ALISON: Fast and Effective Stylometric Authorship Obfuscation

Feb 01, 2024

TrustLLM: Trustworthiness in Large Language Models

Jan 25, 2024

Learning to Prompt Segment Anything Models

Jan 09, 2024

A Study on the Calibration of In-context Learning

Dec 11, 2023