Federico Bianchi

TextGrad: Automatic "Differentiation" via Text

Jun 11, 2024

Large Language Models are Vulnerable to Bait-and-Switch Attacks for Generating Harmful Content

Feb 21, 2024

How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis

Feb 08, 2024

Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions

Sep 25, 2023

Vehicle-to-Grid and ancillary services: a profitability analysis under uncertainty

Sep 20, 2023

XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models

Aug 02, 2023

E Pluribus Unum: Guidelines on Multi-Objective Evaluation of Recommender Systems

Apr 20, 2023

EvalRS 2023. Well-Rounded Recommender Systems For Real-World Deployments

Apr 19, 2023

Beyond Digital "Echo Chambers": The Role of Viewpoint Diversity in Political Discussion

Dec 18, 2022

SocioProbe: What, When, and Where Language Models Learn about Sociodemographics

Nov 08, 2022