Giuseppe Carenini

University of British Columbia

Human Interpretation and Exploitation of Self-attention Patterns in Transformers: A Case Study in Extractive Summarization

Dec 10, 2021
Raymond Li, Wen Xiao, Lanjun Wang, Giuseppe Carenini

PRIMER: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization

Oct 16, 2021
Wen Xiao, Iz Beltagy, Giuseppe Carenini, Arman Cohan

T3-Vis: a visual analytic framework for Training and fine-Tuning Transformers in NLP

Aug 31, 2021
Raymond Li, Wen Xiao, Lanjun Wang, Hyeju Jang, Giuseppe Carenini

ConVIScope: Visual Analytics for Exploring Patient Conversations

Aug 30, 2021
Raymond Li, Enamul Hoque, Giuseppe Carenini, Richard Lester, Raymond Chau

Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring

Jun 12, 2021
Linzi Xing, Giuseppe Carenini

W-RST: Towards a Weighted RST-style Discourse Framework

Jun 04, 2021
Patrick Huber, Wen Xiao, Giuseppe Carenini

Demoting the Lead Bias in News Summarization via Alternating Adversarial Learning

May 29, 2021
Linzi Xing, Wen Xiao, Giuseppe Carenini

Predicting Discourse Trees from Transformer-based Neural Summarizers

Apr 14, 2021
Wen Xiao, Patrick Huber, Giuseppe Carenini

Unsupervised Learning of Discourse Structures using a Tree Autoencoder

Dec 17, 2020
Patrick Huber, Giuseppe Carenini

Do We Really Need That Many Parameters In Transformer For Extractive Summarization? Discourse Can Help!

Dec 03, 2020
Wen Xiao, Patrick Huber, Giuseppe Carenini
