Giuseppe Carenini

University of British Columbia

Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models

Apr 08, 2022

Predicting Above-Sentence Discourse Structure using Distant Supervision from Topic Segmentation

Dec 12, 2021

Human Interpretation and Exploitation of Self-attention Patterns in Transformers: A Case Study in Extractive Summarization

Dec 10, 2021

PRIMER: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization

Oct 16, 2021

T3-Vis: a visual analytic framework for Training and fine-Tuning Transformers in NLP

Aug 31, 2021

ConVIScope: Visual Analytics for Exploring Patient Conversations

Aug 30, 2021

Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring

Jun 12, 2021

W-RST: Towards a Weighted RST-style Discourse Framework

Jun 04, 2021

Demoting the Lead Bias in News Summarization via Alternating Adversarial Learning

May 29, 2021

Predicting Discourse Trees from Transformer-based Neural Summarizers

Apr 14, 2021