Haidar Khan

When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards

Feb 01, 2024
Norah Alzahrani, Hisham Abdullah Alyahya, Yazeed Alnumay, Sultan Alrashed, Shaykhah Alsubaie, Yusef Almushaykeh, Faisal Mirza, Nouf Alotaibi, Nora Altwairesh, Areeb Alowisheq, M Saiful Bari, Haidar Khan

Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning

May 19, 2023
Mustafa Safa Ozdayi, Charith Peris, Jack FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta

Low-Resource Compositional Semantic Parsing with Concept Pretraining

Jan 30, 2023
Subendhu Rongali, Mukund Sridhar, Haidar Khan, Konstantine Arkoudas, Wael Hamza, Andrew McCallum

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

Aug 03, 2022
Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Prem Natarajan

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022
Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakrishnan, Thomas Gueudre, Dilek Hakkani-Tur, Wael Hamza, Jonathan Hueser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, Alessandro Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith Peris, Chandana Satya Prakash, Stephen Rawls, Andy Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Harakere Sridhar, Liz Tan, Fabian Triefenbach, Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, Prem Natarajan

Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models

Mar 05, 2022
Weiqi Sun, Haidar Khan, Nicolas Guenon des Mesnards, Melanie Rubino, Konstantine Arkoudas

RescoreBERT: Discriminative Speech Recognition Rescoring with BERT

Feb 07, 2022
Liyan Xu, Yile Gu, Jari Kolehmainen, Haidar Khan, Ankur Gandhe, Ariya Rastrow, Andreas Stolcke, Ivan Bulyko

Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models

Jul 08, 2021
Daniel Park, Haidar Khan, Azer Khan, Alex Gittens, Bülent Yener

Using multiple ASR hypotheses to boost i18n NLU performance

Dec 14, 2020
Charith Peris, Gokmen Oz, Khadige Abboud, Venkata Sai Varada, Prashan Wanigasekara, Haidar Khan
