Luca Di Liello

Structural Self-Supervised Objectives for Transformers

Sep 15, 2023
Luca Di Liello


Context-Aware Transformer Pre-Training for Answer Sentence Selection

May 24, 2023
Luca Di Liello, Siddhant Garg, Alessandro Moschitti

Figures 1–4 for Context-Aware Transformer Pre-Training for Answer Sentence Selection

Effective Pre-Training Objectives for Transformer-based Autoencoders

Oct 24, 2022
Luca Di Liello, Matteo Gabburo, Alessandro Moschitti

Figures 1–4 for Effective Pre-Training Objectives for Transformer-based Autoencoders

Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection

May 20, 2022
Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti

Figures 1–4 for Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection

Paragraph-based Transformer Pre-training for Multi-Sentence Inference

May 02, 2022
Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti

Figures 1–4 for Paragraph-based Transformer Pre-training for Multi-Sentence Inference

Efficient pre-training objectives for Transformers

Apr 20, 2021
Luca Di Liello, Matteo Gabburo, Alessandro Moschitti

Figures 1–4 for Efficient pre-training objectives for Transformers

Efficient Generation of Structured Objects with Constrained Adversarial Networks

Jul 26, 2020
Luca Di Liello, Pierfrancesco Ardino, Jacopo Gobbi, Paolo Morettin, Stefano Teso, Andrea Passerini

Figures 1–4 for Efficient Generation of Structured Objects with Constrained Adversarial Networks