Mehrdad Mahdavi

Learning Distributionally Robust Models at Scale via Composite Optimization

Mar 17, 2022
Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Amin Karbasi

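The entry above concerns distributionally robust optimization (DRO). As a point of reference only, and not the composite-optimization method of the paper, the sketch below shows a generic group-DRO-style update: group weights on a simplex are raised multiplicatively on high-loss groups, then the model takes a gradient step on the re-weighted loss. The function name, the linear least-squares model, and the step sizes are illustrative assumptions.

```python
import numpy as np

def group_dro_step(w, groups, eta_w=0.1, eta_q=0.5, q=None):
    """One generic group-DRO update: re-weight groups by their loss, then take a
    gradient step on the weighted objective. `groups` is a list of (X, y) arrays
    for a linear least-squares model. Illustrative sketch, not the paper's algorithm."""
    if q is None:
        q = np.ones(len(groups)) / len(groups)   # start from uniform group weights
    losses, grads = [], []
    for X, y in groups:
        resid = X @ w - y
        losses.append(0.5 * np.mean(resid ** 2))
        grads.append(X.T @ resid / len(y))
    losses = np.array(losses)
    # exponentiated-gradient ascent on the group weights (emphasize worst-case groups)
    q = q * np.exp(eta_q * losses)
    q = q / q.sum()
    # descent step on the q-weighted loss
    g = sum(qi * gi for qi, gi in zip(q, grads))
    return w - eta_w * g, q
```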

Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks

Dec 07, 2021
Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Mahmut T. Kandemir, Anand Sivasubramaniam

Dynamic Graph Representation Learning via Graph Transformer Networks

Nov 19, 2021
Weilin Cong, Yanhong Wu, Yuandong Tian, Mengting Gu, Yinglong Xia, Mehrdad Mahdavi, Chun-cheng Jason Chen

On Provable Benefits of Depth in Training Graph Convolutional Networks

Oct 28, 2021
Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi

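For context on the entry above, the sketch below is a generic multi-layer graph convolutional network forward pass with symmetrically normalized propagation (the standard Kipf-Welling formulation), showing what "depth" means operationally. It is not the construction or analysis from the paper; the function name and tensor shapes are illustrative assumptions.

```python
import numpy as np

def gcn_forward(A, X, weights):
    """Generic L-layer GCN forward pass with symmetric normalization.
    A: (n, n) adjacency matrix, X: (n, d) node features, weights: list of weight
    matrices, one per layer. Illustrative only, not the paper's construction."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt            # normalized propagation matrix
    H = X
    for l, W in enumerate(weights):
        H = S @ H @ W                              # propagate and transform
        if l < len(weights) - 1:                   # ReLU on all but the last layer
            H = np.maximum(H, 0.0)
    return H
```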

Meta-learning with an Adaptive Task Scheduler

Oct 26, 2021
Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn

Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

Jul 22, 2021
Yuyang Deng, Mehrdad Mahdavi

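The entry above studies local SGD. As a generic illustration of the scheme itself (independent local SGD runs followed by periodic model averaging), and not of the paper's overparameterized-network analysis, here is a minimal sketch on a linear least-squares model; the function name, worker data layout, and hyperparameters are assumptions made for the example.

```python
import numpy as np

def local_sgd(workers, w0, lr=0.1, local_steps=5, rounds=20):
    """Generic local SGD: each worker runs `local_steps` SGD steps on its own
    data, then all local models are averaged. `workers` is a list of (X, y)
    pairs; the model is linear least squares. Illustrative sketch only."""
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for X, y in workers:
            w_local = w.copy()
            for _ in range(local_steps):
                i = np.random.randint(len(y))        # sample one local example
                g = (X[i] @ w_local - y[i]) * X[i]   # stochastic gradient
                w_local -= lr * g
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)            # periodic model averaging
    return w
```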

Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing

Apr 04, 2021
Mohammad Mahdi Kamani, Rana Forsati, James Z. Wang, Mehrdad Mahdavi

On the Importance of Sampling in Learning Graph Convolutional Networks

Mar 03, 2021
Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi

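For the entry above, which concerns sampling-based training of graph convolutional networks, the sketch below illustrates plain node-wise neighbor sampling: each target node aggregates features from a small random subset of its neighbors rather than the full neighborhood. This is a generic illustration, not the estimator analyzed in the paper; the function name and the `fanout` parameter are assumptions.

```python
import numpy as np

def sample_neighborhood_average(A, X, nodes, fanout=5, rng=None):
    """For each target node, average the features of at most `fanout` uniformly
    sampled neighbors; the sample mean estimates the full-neighborhood mean.
    Generic illustration of sampling-based aggregation, not the paper's method."""
    rng = rng or np.random.default_rng(0)
    out = np.zeros((len(nodes), X.shape[1]))
    for k, v in enumerate(nodes):
        neigh = np.nonzero(A[v])[0]
        if len(neigh) == 0:
            out[k] = X[v]                            # isolated node keeps its own features
            continue
        picked = rng.choice(neigh, size=min(fanout, len(neigh)), replace=False)
        out[k] = X[picked].mean(axis=0)
    return out
```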

Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency

Feb 25, 2021
Yuyang Deng, Mehrdad Mahdavi

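The entry above analyzes local stochastic gradient descent ascent (SGDA) for distributed minimax problems. As a hedged sketch of the generic scheme only (simultaneous local descent/ascent steps followed by periodic averaging of both variables), and not the paper's algorithm or convergence analysis, the example below uses a simple bilinear-quadratic saddle-point objective; all names and step sizes are illustrative.

```python
import numpy as np

def local_sgda(workers, x0, y0, lr_x=0.05, lr_y=0.05, local_steps=5, rounds=20):
    """Generic local SGDA for min_x max_y sum_i f_i(x, y): each worker runs
    simultaneous descent/ascent steps on its local objective, then primal and
    dual variables are averaged. Each worker holds (A_i, b_i) defining
    f_i(x, y) = y^T (A_i x - b_i) - 0.5 * ||y||^2. Illustrative sketch only."""
    x, y = x0.copy(), y0.copy()
    for _ in range(rounds):
        xs, ys = [], []
        for A, b in workers:
            x_l, y_l = x.copy(), y.copy()
            for _ in range(local_steps):
                gx = A.T @ y_l                 # gradient of f_i w.r.t. x
                gy = (A @ x_l - b) - y_l       # gradient of f_i w.r.t. y
                x_l -= lr_x * gx               # descent on the primal variable
                y_l += lr_y * gy               # ascent on the dual variable
            xs.append(x_l)
            ys.append(y_l)
        x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)   # periodic averaging of both
    return x, y
```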