
Enyu Zhou

What's Wrong with Your Code Generated by Large Language Models? An Extensive Study

Jul 08, 2024

SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance

Jun 26, 2024

Aligning Large Language Models from Self-Reference AI Feedback with one General Principle

Jun 17, 2024

MetaRM: Shifted Distributions Alignment via Meta-Learning

May 01, 2024

StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback

Feb 05, 2024

Secrets of RLHF in Large Language Models Part II: Reward Modeling

Jan 12, 2024

LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment

Dec 18, 2023

RealBehavior: A Framework for Faithfully Characterizing Foundation Models' Human-like Behavior Mechanisms

Oct 17, 2023

The Rise and Potential of Large Language Model Based Agents: A Survey

Sep 19, 2023

Global Matching with Overlapping Attention for Optical Flow Estimation

Mar 21, 2022