
Adrian Bulat

SAIC_Cambridge-HuPBA-FBK Submission to the EPIC-Kitchens-100 Action Recognition Challenge 2021
Oct 06, 2021

Space-time Mixing Attention for Video Transformer
Jun 11, 2021

Bit-Mixer: Mixed-precision networks with runtime bit-width selection
Mar 31, 2021

Pre-training strategies and datasets for facial representation learning
Mar 30, 2021

Improving memory banks for unsupervised learning with large mini-batch, consistency and hard negative mining
Feb 08, 2021

Semi-supervised Facial Action Unit Intensity Estimation with Contrastive Learning
Nov 04, 2020

High-Capacity Expert Binary Networks
Oct 07, 2020

A Transfer Learning approach to Heatmap Regression for Action Unit intensity estimation
Apr 14, 2020

Training Binary Neural Networks with Real-to-Binary Convolutions
Mar 25, 2020

Knowledge distillation via adaptive instance normalization
Mar 09, 2020