Mehrdad Mahdavi

Michigan State University

Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection

Feb 17, 2023

Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems

Oct 17, 2022

Learning Distributionally Robust Models at Scale via Composite Optimization

Mar 17, 2022

Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks

Dec 07, 2021

Dynamic Graph Representation Learning via Graph Transformer Networks

Nov 19, 2021

On Provable Benefits of Depth in Training Graph Convolutional Networks

Oct 28, 2021

Meta-learning with an Adaptive Task Scheduler

Oct 26, 2021

Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

Jul 22, 2021

Pareto Efficient Fairness in Supervised Learning: From Extraction to Tracing

Apr 04, 2021

On the Importance of Sampling in Learning Graph Convolutional Networks

Mar 03, 2021