Pei Ke

Benchmarking Complex Instruction-Following with Multiple Constraints Composition

Jul 04, 2024

Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks

Jul 03, 2024

AutoDetect: Towards a Unified Framework for Automated Weakness Detection in Large Language Models

Jun 24, 2024

Learning Task Decomposition to Assist Humans in Competitive Programming

Jun 07, 2024

Perception of Knowledge Boundary for Large Language Models through Semi-open-ended Question Answering

May 23, 2024

ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors

Feb 26, 2024

Towards Efficient and Exact Optimization of Language Model Alignment

Feb 02, 2024

AlignBench: Benchmarking Chinese Alignment of Large Language Models

Dec 05, 2023

CritiqueLLM: Scaling LLM-as-Critic for Effective and Explainable Evaluation of Large Language Model Generation

Nov 30, 2023

Unveiling the Implicit Toxicity in Large Language Models

Nov 29, 2023