Yani Ioannou

Meta-GCN: A Dynamically Weighted Loss Minimization Method for Dealing with the Data Imbalance in Graph Neural Networks

Jun 24, 2024

Dynamic Sparse Training with Structured Sparsity

May 03, 2023

Bounding generalization error with input compression: An empirical study with infinite-width networks

Jul 19, 2022

Monitoring Shortcut Learning using Mutual Information

Jun 27, 2022

Measuring Neural Net Robustness with Constraints

Jun 16, 2017

Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups

Nov 30, 2016

Refining Architectures of Deep Convolutional Neural Networks

Apr 22, 2016

Decision Forests, Convolutional Networks and the Models in-Between

Mar 03, 2016

Training CNNs with Low-Rank Filters for Efficient Image Classification

Feb 07, 2016

Difference of Normals as a Multi-Scale Operator in Unorganized Point Clouds

Sep 08, 2012