Ahmed Awadallah

Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences

Apr 04, 2024
Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, Tengyang Xie

Researchy Questions: A Dataset of Multi-Perspective, Decompositional Questions for LLM Web Agents

Feb 27, 2024
Corby Rosset, Ho-Lam Chung, Guanghui Qin, Ethan C. Chau, Zhuo Feng, Ahmed Awadallah, Jennifer Neville, Nikhil Rao

Towards better Human-Agent Alignment: Assessing Task Utility in LLM-Powered Applications

Feb 22, 2024
Negar Arabzadeh, Julia Kiseleva, Qingyun Wu, Chi Wang, Ahmed Awadallah, Victor Dibia, Adam Fourney, Charles Clarke

Orca-Math: Unlocking the potential of SLMs in Grade School Math

Feb 16, 2024
Arindam Mitra, Hamed Khanpour, Corby Rosset, Ahmed Awadallah

Axiomatic Preference Modeling for Longform Question Answering

Dec 02, 2023
Corby Rosset, Guoqing Zheng, Victor Dibia, Ahmed Awadallah, Paul Bennett

Orca 2: Teaching Small Language Models How to Reason

Nov 21, 2023
Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, Ahmed Awadallah

Teaching Language Models to Hallucinate Less with Synthetic Tasks

Oct 10, 2023
Erik Jones, Hamid Palangi, Clarisse Simões, Varun Chandrasekaran, Subhabrata Mukherjee, Arindam Mitra, Ahmed Awadallah, Ece Kamar

SkipDecode: Autoregressive Skip Decoding with Batching and Caching for Efficient LLM Inference

Jul 05, 2023
Luciano Del Corro, Allie Del Giorno, Sahaj Agarwal, Bin Yu, Ahmed Awadallah, Subhabrata Mukherjee
