Rodolphe Jenatton

Pi-DUAL: Using Privileged Information to Distinguish Clean from Noisy Labels

Oct 10, 2023
Ke Wang, Guillermo Ortiz-Jimenez, Rodolphe Jenatton, Mark Collier, Efi Kokiopoulou, Pascal Frossard

Three Towers: Flexible Contrastive Learning with Pretrained Image Models

May 29, 2023
Jannik Kossen, Mark Collier, Basil Mustafa, Xiao Wang, Xiaohua Zhai, Lucas Beyer, Andreas Steiner, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou

When does Privileged Information Explain Away Label Noise?

Mar 03, 2023
Guillermo Ortiz-Jimenez, Mark Collier, Anant Nawalgaria, Alexander D'Amour, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou

Scaling Vision Transformers to 22 Billion Parameters

Feb 10, 2023
Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, Rodolphe Jenatton, Lucas Beyer, Michael Tschannen, Anurag Arnab, Xiao Wang, Carlos Riquelme, Matthias Minderer, Joan Puigcerver, Utku Evci, Manoj Kumar, Sjoerd van Steenkiste, Gamaleldin F. Elsayed, Aravindh Mahendran, Fisher Yu, Avital Oliver, Fantine Huot, Jasmijn Bastings, Mark Patrick Collier, Alexey Gritsenko, Vighnesh Birodkar, Cristina Vasconcelos, Yi Tay, Thomas Mensink, Alexander Kolesnikov, Filip Pavetić, Dustin Tran, Thomas Kipf, Mario Lučić, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, Neil Houlsby

Massively Scaling Heteroscedastic Classifiers

Jan 30, 2023
Mark Collier, Rodolphe Jenatton, Basil Mustafa, Neil Houlsby, Jesse Berent, Effrosyni Kokiopoulou

On the Adversarial Robustness of Mixture of Experts

Oct 19, 2022
Joan Puigcerver, Rodolphe Jenatton, Carlos Riquelme, Pranjal Awasthi, Srinadh Bhojanapalli

Plex: Towards Reliability using Pretrained Large Model Extensions

Jul 15, 2022
Dustin Tran, Jeremiah Liu, Michael W. Dusenberry, Du Phan, Mark Collier, Jie Ren, Kehang Han, Zi Wang, Zelda Mariet, Huiyi Hu, Neil Band, Tim G. J. Rudner, Karan Singhal, Zachary Nado, Joost van Amersfoort, Andreas Kirsch, Rodolphe Jenatton, Nithum Thain, Honglin Yuan, Kelly Buchanan, Kevin Murphy, D. Sculley, Yarin Gal, Zoubin Ghahramani, Jasper Snoek, Balaji Lakshminarayanan

Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts

Jun 06, 2022
Basil Mustafa, Carlos Riquelme, Joan Puigcerver, Rodolphe Jenatton, Neil Houlsby

Transfer and Marginalize: Explaining Away Label Noise with Privileged Information

Feb 18, 2022
Mark Collier, Rodolphe Jenatton, Efi Kokiopoulou, Jesse Berent
