Enrico Palumbo


Towards Graph Foundation Models for Personalization

Mar 12, 2024
Andreas Damianou, Francesco Fabbri, Paul Gigioli, Marco De Nadai, Alice Wang, Enrico Palumbo, Mounia Lalmas

Figures 1–4 for Towards Graph Foundation Models for Personalization

Improving Content Retrievability in Search with Controllable Query Generation

Mar 21, 2023
Gustavo Penha, Enrico Palumbo, Maryam Aziz, Alice Wang, Hugues Bouchard

Figures 1–4 for Improving Content Retrievability in Search with Controllable Query Generation

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022
Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakrishnan, Thomas Gueudre, Dilek Hakkani-Tur, Wael Hamza, Jonathan Hueser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, Alessandro Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith Peris, Chandana Satya Prakash, Stephen Rawls, Andy Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Harakere Sridhar, Liz Tan, Fabian Triefenbach, Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, Prem Natarajan

Figures 1–4 for Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems