
Roi Reichart

The Colorful Future of LLMs: Evaluating and Improving LLMs as Emotional Supporters for Queer Youth

Feb 19, 2024
Shir Lissak, Nitay Calderon, Geva Shenkman, Yaakov Ophir, Eyal Fruchter, Anat Brunstein Klomek, Roi Reichart

Systematic Biases in LLM Simulations of Debates

Feb 06, 2024
Amir Taubenfeld, Yaniv Dover, Roi Reichart, Ariel Goldstein

Can Large Language Models Replace Economic Choice Prediction Labs?

Feb 01, 2024
Eilam Shapira, Omer Madmon, Roi Reichart, Moshe Tennenholtz

Decoding Stumpers: Large Language Models vs. Human Problem-Solvers

Oct 25, 2023
Alon Goldstein, Miriam Havin, Roi Reichart, Ariel Goldstein

The Temporal Structure of Language Processing in the Human Brain Corresponds to The Layered Hierarchy of Deep Language Models

Oct 11, 2023
Ariel Goldstein, Eric Ham, Mariano Schain, Samuel Nastase, Zaid Zada, Avigail Dabush, Bobbi Aubrey, Harshvardhan Gazula, Amir Feder, Werner K Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Roi Reichart, Michael Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Omer Levy, Uri Hasson

Navigating Cultural Chasms: Exploring and Unlocking the Cultural POV of Text-To-Image Models

Oct 03, 2023
Mor Ventura, Eyal Ben-David, Anna Korhonen, Roi Reichart

Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals

Oct 01, 2023
Yair Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart

Measuring the Robustness of Natural Language Processing Models to Domain Shifts

May 31, 2023
Nitay Calderon, Naveh Porat, Eyal Ben-David, Zorik Gekhman, Nadav Oved, Roi Reichart

Human Choice Prediction in Language-based Non-Cooperative Games: Simulation-based Off-Policy Evaluation

May 23, 2023
Eilam Shapira, Reut Apel, Moshe Tennenholtz, Roi Reichart
