Wen Xiao

University of British Columbia

Safety Alignment in NLP Tasks: Weakly Aligned Summarization as an In-Context Attack

Dec 12, 2023
Yu Fu, Yufei Li, Wen Xiao, Cong Liu, Yue Dong

Visual Analytics for Generative Transformer Models

Nov 21, 2023
Raymond Li, Ruixin Yang, Wen Xiao, Ahmed AbuRaed, Gabriel Murray, Giuseppe Carenini

ChatGPT-steered Editing Instructor for Customization of Abstractive Summarization

May 04, 2023
Wen Xiao, Yujia Xie, Giuseppe Carenini, Pengcheng He

Discourse Structure Extraction from Pre-Trained and Fine-Tuned Language Models in Dialogues

Feb 12, 2023
Chuyuan Li, Patrick Huber, Wen Xiao, Maxime Amblard, Chloé Braud, Giuseppe Carenini

Attend to the Right Context: A Plug-and-Play Module for Content-Controllable Summarization

Dec 21, 2022
Wen Xiao, Lesly Miculicich, Yang Liu, Pengcheng He, Giuseppe Carenini

Entity-based SpanCopy for Abstractive Summarization to Improve the Factual Consistency

Sep 07, 2022
Wen Xiao, Giuseppe Carenini

SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds

Jan 12, 2022
Qingyong Hu, Bo Yang, Sheikh Khalid, Wen Xiao, Niki Trigoni, Andrew Markham

Human Interpretation and Exploitation of Self-attention Patterns in Transformers: A Case Study in Extractive Summarization

Dec 10, 2021
Raymond Li, Wen Xiao, Lanjun Wang, Giuseppe Carenini

PRIMER: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization

Oct 16, 2021
Wen Xiao, Iz Beltagy, Giuseppe Carenini, Arman Cohan

T3-Vis: a visual analytic framework for Training and fine-Tuning Transformers in NLP

Aug 31, 2021
Raymond Li, Wen Xiao, Lanjun Wang, Hyeju Jang, Giuseppe Carenini
