Cristian Canton Ferrer

Fairness-Aware Meta-Learning via Nash Bargaining
Jun 11, 2024

Towards Red Teaming in Multimodal and Multilingual Translation
Jan 29, 2024

On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms
Oct 31, 2023

VPA: Fully Test-Time Visual Prompt Adaptation
Sep 26, 2023

Code Llama: Open Foundation Models for Code
Aug 25, 2023

Llama 2: Open Foundation and Fine-Tuned Chat Models
Jul 19, 2023

Data-Driven but Privacy-Conscious: Pedestrian Dataset De-identification via Full-Body Person Synthesis
Jun 22, 2023

The Casual Conversations v2 Dataset
Mar 08, 2023

A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others
Dec 09, 2022

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness
Nov 10, 2022