
Nathan Hubens


A Recipe for Efficient SBIR Models: Combining Relative Triplet Loss with Batch Normalization and Knowledge Distillation

May 30, 2023
Omar Seddati, Nathan Hubens, Stéphane Dupont, Thierry Dutoit


Induced Feature Selection by Structured Pruning

Mar 20, 2023
Nathan Hubens, Victor Delvigne, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia


FasterAI: A Lightweight Library for Creating Sparse Neural Networks

Jul 03, 2022
Nathan Hubens


Improve Convolutional Neural Network Pruning by Maximizing Filter Variety

Mar 11, 2022
Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia


Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models

Jan 24, 2022
Antoine Maiorca, Nathan Hubens, Sohaib Laraba, Thierry Dutoit


Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity

Jan 11, 2022
Victor Delvigne, Noé Tits, Luca La Fisca, Nathan Hubens, Antoine Maiorca, Hazem Wannous, Thierry Dutoit, Jean-Philippe Vandeborre


An Experimental Study of the Impact of Pre-training on the Pruning of a Convolutional Neural Network

Dec 15, 2021
Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia


One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget

Jul 05, 2021
Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia


Modulated Self-attention Convolutional Network for VQA

Oct 31, 2019
Jean-Benoit Delbrouck, Antoine Maiorca, Nathan Hubens, Stéphane Dupont
