Neil Houlsby

Dual PatchNorm
Feb 06, 2023
Manoj Kumar, Mostafa Dehghani, Neil Houlsby

Adaptive Computation with Elastic Input Sequence
Jan 30, 2023
Fuzhao Xue, Valerii Likhosherstov, Anurag Arnab, Neil Houlsby, Mostafa Dehghani, Yang You

Massively Scaling Heteroscedastic Classifiers
Jan 30, 2023
Mark Collier, Rodolphe Jenatton, Basil Mustafa, Neil Houlsby, Jesse Berent, Effrosyni Kokiopoulou

Image-and-Language Understanding from Pixels Only
Dec 15, 2022
Michael Tschannen, Basil Mustafa, Neil Houlsby

Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints
Dec 09, 2022
Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, Neil Houlsby

Location-Aware Self-Supervised Transformers
Dec 05, 2022
Mathilde Caron, Neil Houlsby, Cordelia Schmid

Transcending Scaling Laws with 0.1% Extra Compute
Oct 20, 2022
Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, Denny Zhou, Donald Metzler, Slav Petrov, Neil Houlsby, Quoc V. Le, Mostafa Dehghani

PaLI: A Jointly-Scaled Multilingual Language-Image Model
Sep 16, 2022
Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut

Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
Jun 06, 2022
Basil Mustafa, Carlos Riquelme, Joan Puigcerver, Rodolphe Jenatton, Neil Houlsby