Sahand Negahban

Tree-Projected Gradient Descent for Estimating Gradient-Sparse Parameters on Graphs
May 31, 2020
Sheng Xu, Zhou Fan, Sahand Negahban

Alternating Linear Bandits for Online Matrix-Factorization Recommendation
Oct 22, 2018
Hamid Dadkhahi, Sahand Negahban

Deep supervised feature selection using Stochastic Gates
Oct 09, 2018
Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, Yuval Kluger

Super-resolution estimation of cyclic arrival rates
Jun 05, 2018
Ningyuan Chen, Donald K. K. Lee, Sahand Negahban

Minimax Estimation of Bandable Precision Matrices
Oct 19, 2017
Addison Hu, Sahand Negahban

Restricted Strong Convexity Implies Weak Submodularity
Oct 12, 2017
Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand Negahban

Learning from Comparisons and Choices
Apr 24, 2017
Sahand Negahban, Sewoong Oh, Kiran K. Thekumparampil, Jiaming Xu

Scalable Greedy Feature Selection via Weak Submodularity
Mar 08, 2017
Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban, Joydeep Ghosh

On Approximation Guarantees for Greedy Low Rank Optimization
Mar 08, 2017
Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban

Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization
Jan 16, 2016
Uri Shaham, Yutaro Yamada, Sahand Negahban