Hamid Palangi

Improving Black-box Robustness with In-Context Rewriting

Feb 15, 2024
Kyle O'Brien, Nathan Ng, Isha Puri, Jorge Mendez, Hamid Palangi, Yoon Kim, Marzyeh Ghassemi, Thomas Hartvigsen


Exploring Group and Symmetry Principles in Large Language Models

Feb 09, 2024
Shima Imani, Hamid Palangi


A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia

Dec 04, 2023
Giovanni Monea, Maxime Peyrard, Martin Josifoski, Vishrav Chaudhary, Jason Eisner, Emre Kıcıman, Hamid Palangi, Barun Patra, Robert West


Orca 2: Teaching Small Language Models How to Reason

Nov 21, 2023
Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, Ahmed Awadallah


A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications

Oct 26, 2023
Ahmed Magooda, Alec Helyar, Kyle Jackson, David Sullivan, Chad Atalla, Emily Sheng, Dan Vann, Richard Edgar, Hamid Palangi, Roman Lutz, Hongliang Kong, Vincent Yun, Eslam Kamal, Federico Zarfati, Hanna Wallach, Sarah Bird, Mei Chen


Diversity of Thought Improves Reasoning Abilities of Large Language Models

Oct 11, 2023
Ranjita Naik, Varun Chandrasekaran, Mert Yuksekgonul, Hamid Palangi, Besmira Nushi


Teaching Language Models to Hallucinate Less with Synthetic Tasks

Oct 10, 2023
Erik Jones, Hamid Palangi, Clarisse Simões, Varun Chandrasekaran, Subhabrata Mukherjee, Arindam Mitra, Ahmed Awadallah, Ece Kamar


Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models

Sep 26, 2023
Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi
