Mehrdad Mahdavi

Michigan State University

Mixture Weight Estimation and Model Prediction in Multi-source Multi-target Domain Adaptation

Sep 19, 2023

On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space

Feb 23, 2023

Do We Really Need Complicated Model Architectures For Temporal Networks?

Feb 22, 2023

Efficiently Forgetting What You Have Learned in Graph Representation Learning via Projection

Feb 17, 2023

Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems

Oct 17, 2022

Learning Distributionally Robust Models at Scale via Composite Optimization

Mar 17, 2022

Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks

Dec 07, 2021

Dynamic Graph Representation Learning via Graph Transformer Networks

Nov 19, 2021

On Provable Benefits of Depth in Training Graph Convolutional Networks

Oct 28, 2021

Meta-learning with an Adaptive Task Scheduler

Oct 26, 2021