Hamid Palangi

Evaluating Cognitive Maps and Planning in Large Language Models with CogEval

Sep 25, 2023
Ida Momennejad, Hosein Hasanbeig, Felipe Vieira, Hiteshi Sharma, Robert Osazuwa Ness, Nebojsa Jojic, Hamid Palangi, Jonathan Larson

Improving the Reusability of Pre-trained Language Models in Real-world Applications

Aug 08, 2023
Somayeh Ghanbarzadeh, Hamid Palangi, Yan Huang, Radames Cruz Moreno, Hamed Khanpour

Improving Pre-trained Language Models' Generalization

Aug 06, 2023
Somayeh Ghanbarzadeh, Hamid Palangi, Yan Huang, Radames Cruz Moreno, Hamed Khanpour

Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models

Jul 20, 2023
Somayeh Ghanbarzadeh, Yan Huang, Hamid Palangi, Radames Cruz Moreno, Hamed Khanpour

Orca: Progressive Learning from Complex Explanation Traces of GPT-4

Jun 05, 2023
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, Ahmed Awadallah

Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning

Apr 08, 2023
Yu Yang, Besmira Nushi, Hamid Palangi, Baharan Mirzasoleiman

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Mar 27, 2023
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang

An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models

Jan 22, 2023
Saghar Hosseini, Hamid Palangi, Ahmed Hassan Awadallah
