Satwik Bhattamishra

MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations

Oct 18, 2023
Arkil Patel, Satwik Bhattamishra, Siva Reddy, Dzmitry Bahdanau

Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions

Oct 04, 2023
Satwik Bhattamishra, Arkil Patel, Phil Blunsom, Varun Kanade

Structural Transfer Learning in NL-to-Bash Semantic Parsers

Jul 31, 2023
Kyle Duffy, Satwik Bhattamishra, Phil Blunsom

DynaQuant: Compressing Deep Learning Training Checkpoints via Dynamic Quantization

Jun 20, 2023
Amey Agrawal, Sameer Reddy, Satwik Bhattamishra, Venkata Prabhakara Sarath Nookala, Vidushi Vashishth, Kexin Rong, Alexey Tumanov

Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions

Nov 22, 2022
Satwik Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom

Revisiting the Compositional Generalization Abilities of Neural Sequence Models

Mar 14, 2022
Arkil Patel, Satwik Bhattamishra, Phil Blunsom, Navin Goyal

Are NLP Models really able to Solve Simple Math Word Problems?

Mar 12, 2021
Arkil Patel, Satwik Bhattamishra, Navin Goyal

On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages

Nov 08, 2020
Satwik Bhattamishra, Kabir Ahuja, Navin Goyal

On the Ability and Limitations of Transformers to Recognize Formal Languages

Oct 08, 2020
Satwik Bhattamishra, Kabir Ahuja, Navin Goyal

On the Ability of Self-Attention Networks to Recognize Counter Languages

Sep 23, 2020
Satwik Bhattamishra, Kabir Ahuja, Navin Goyal
