Stefanie Jegelka

Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA

The Heterophilic Graph Learning Handbook: Benchmarks, Models, Theoretical Analysis, Applications and Challenges

Jul 12, 2024

A Universal Class of Sharpness-Aware Minimization Algorithms

Jun 06, 2024

The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof

May 30, 2024

A Canonization Perspective on Invariant and Equivariant Learning

May 29, 2024

On the Role of Attention Masks and LayerNorm in Transformers

May 29, 2024

A Theoretical Understanding of Self-Correction through In-context Alignment

May 28, 2024

In-Context Symmetries: Self-Supervised Learning through Contextual World Models

May 28, 2024

Future Directions in Foundations of Graph Machine Learning

Feb 03, 2024

On the hardness of learning under symmetries

Jan 03, 2024

Expressive Sign Equivariant Networks for Spectral Geometric Learning

Dec 04, 2023