
Xing Xie

Microsoft Research Asia

PromptBench: A Unified Library for Evaluation of Large Language Models

Dec 13, 2023

Control Risk for Potential Misuse of Artificial Intelligence in Science

Dec 11, 2023

CDEval: A Benchmark for Measuring the Cultural Dimensions of Large Language Models

Nov 28, 2023

RecExplainer: Aligning Large Language Models for Recommendation Model Interpretability

Nov 18, 2023

Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations

Nov 16, 2023

Value FULCRA: Mapping Large Language Models to the Multidimensional Spectrum of Basic Human Values

Nov 15, 2023

Denevil: Towards Deciphering and Navigating the Ethical Values of Large Language Models via Instruction Learning

Oct 30, 2023

CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents

Oct 26, 2023

Unpacking the Ethical Value Alignment in Big Models

Oct 26, 2023

Evaluating General-Purpose AI with Psychometrics

Oct 25, 2023