Sham Kakade

Anti-Concentrated Confidence Bonuses for Scalable Exploration

Oct 21, 2021
Jordan T. Ash, Cyril Zhang, Surbhi Goel, Akshay Krishnamurthy, Sham Kakade

Inductive Biases and Variable Creation in Self-Attention Mechanisms

Oct 19, 2021
Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Cyril Zhang

Sparsity in Partially Controllable Linear Systems

Oct 12, 2021
Yonathan Efroni, Sham Kakade, Akshay Krishnamurthy, Cyril Zhang

Koopman Spectrum Nonlinear Regulator and Provably Efficient Online Learning

Jun 30, 2021
Motoya Ohnishi, Isao Ishikawa, Kendall Lowrey, Masahiro Ikeda, Sham Kakade, Yoshinobu Kawahara

Gone Fishing: Neural Active Learning with Fisher Embeddings

Jun 17, 2021
Jordan T. Ash, Surbhi Goel, Akshay Krishnamurthy, Sham Kakade

LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes

Jun 02, 2021
Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi

Robust and Differentially Private Mean Estimation

Feb 18, 2021
Xiyang Liu, Weihao Kong, Sham Kakade, Sewoong Oh

How Important is the Train-Validation Split in Meta-Learning?

Oct 12, 2020
Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham Kakade, Huan Wang, Caiming Xiong

PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning

Aug 13, 2020
Alekh Agarwal, Mikael Henaff, Sham Kakade, Wen Sun
