Andreas Loukas

SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators

Apr 04, 2022

SQALER: Scaling Question Answering by Decoupling Multi-Hop and Logical Reasoning

Oct 27, 2021

Partition and Code: learning how to compress graphs

Jul 05, 2021

What training reveals about neural network complexity

Jun 08, 2021

Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth

Mar 05, 2021

Building powerful and equivariant graph neural networks with structural message-passing

Jul 11, 2020

Multi-Head Attention: Collaborate Instead of Concatenate

Jun 29, 2020

Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs

Jun 29, 2020

How hard is graph isomorphism for graph neural networks?

May 13, 2020

On the Relationship between Self-Attention and Convolutional Layers

Nov 08, 2019