
Varun Kanade

Separations in the Representational Capabilities of Transformers and Recurrent Architectures

Jun 13, 2024

Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions

Oct 04, 2023

Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions

Nov 22, 2022

When are Local Queries Useful for Robust Learning?

Oct 12, 2022

Partial Matrix Completion

Aug 25, 2022

Beyond Impossibility: Balancing Sufficiency, Separation and Accuracy

May 24, 2022

Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks

May 12, 2022

Exponential Tail Local Rademacher Complexity Risk Bounds Without the Bernstein Condition

Feb 23, 2022

Towards optimally abstaining from prediction

May 28, 2021

Efficient Learning with Arbitrary Covariate Shift

Feb 15, 2021