Ameet Deshpande

Toxicity in ChatGPT: Analyzing Persona-assigned Language Models

Apr 11, 2023
Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan

MUX-PLMs: Pre-training Language Models with Data Multiplexing

Feb 24, 2023
Vishvak Murahari, Ameet Deshpande, Carlos E. Jimenez, Izhak Shafran, Mingqiu Wang, Yuan Cao, Karthik Narasimhan

SemSup-XC: Semantic Supervision for Zero and Few-shot Extreme Classification

Jan 26, 2023
Pranjal Aggarwal, Ameet Deshpande, Karthik Narasimhan

SPARTAN: Sparse Hierarchical Memory for Parameter-Efficient Transformers

Nov 29, 2022
Ameet Deshpande, Md Arafat Sultan, Anthony Ferritto, Ashwin Kalyan, Karthik Narasimhan, Avirup Sil

ALIGN-MLM: Word Embedding Alignment is Crucial for Multilingual Pre-training

Nov 15, 2022
Henry Tang, Ameet Deshpande, Karthik Narasimhan

Semantic Supervision: Enabling Generalization over Output Spaces

Mar 15, 2022
Austin W. Hanjie, Ameet Deshpande, Karthik Narasimhan

When is BERT Multilingual? Isolating Crucial Ingredients for Cross-lingual Transfer

Nov 05, 2021
Ameet Deshpande, Partha Talukdar, Karthik Narasimhan

Guiding Attention for Self-Supervised Learning with Transformers

Oct 06, 2020
Ameet Deshpande, Karthik Narasimhan
