
Minlie Huang


Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering

May 23, 2024

Weak-to-Strong Extrapolation Expedites Alignment

Apr 25, 2024

360°REA: Towards A Reusable Experience Accumulation with 360° Assessment for Multi-Agent System

Apr 08, 2024

ChatGLM-RLHF: Practices of Aligning Large Language Models with Human Feedback

Apr 03, 2024

Towards Optimal Learning of Language Models

Mar 03, 2024

LLM-based Privacy Data Augmentation Guided by Knowledge Distillation with a Distribution Tutor for Medical Text Classification

Feb 26, 2024

ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors

Feb 26, 2024

From Noise to Clarity: Unraveling the Adversarial Suffix of Large Language Model Attacks via Translation of Text Embeddings

Feb 25, 2024

ToMBench: Benchmarking Theory of Mind in Large Language Models

Feb 23, 2024

EmoBench: Evaluating the Emotional Intelligence of Large Language Models

Feb 19, 2024