Robert Dadashi

WARP: On the Benefits of Weight Averaged Rewarded Policies

Jun 24, 2024

RecurrentGemma: Moving Past Transformers for Efficient Open Language Models

Apr 11, 2024

Gemma: Open Models Based on Gemini Research and Technology

Mar 13, 2024

WARM: On the Benefits of Weight Averaged Reward Models

Jan 22, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Offline Reinforcement Learning with On-Policy Q-Function Regularization

Jul 25, 2023

Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback

May 31, 2023

Get Back Here: Robust Imitation by Return-to-Distribution Planning

May 02, 2023

Learning Energy Networks with Generalized Fenchel-Young Losses

May 19, 2022

Continuous Control with Action Quantization from Demonstrations

Oct 19, 2021