Zhiyuan Liu

Decoder-Only or Encoder-Decoder? Interpreting Language Model as a Regularized Encoder-Decoder

Apr 08, 2023
Zihao Fu, Wai Lam, Qian Yu, Anthony Man-Cho So, Shengding Hu, Zhiyuan Liu, Nigel Collier


Human Emotion Knowledge Representation Emerges in Large Language Model and Supports Discrete Emotion Inference

Feb 21, 2023
Ming Li, Yusheng Su, Hsiu-Yuan Huang, Jiali Cheng, Xin Hu, Xinmiao Zhang, Huadong Wang, Yujia Qin, Xiaozhi Wang, Zhiyuan Liu, Dan Zhang


READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises

Feb 14, 2023
Chenglei Si, Zhengyan Zhang, Yingfa Chen, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun


Semi-supervised Large-scale Fiber Detection in Material Images with Synthetic Data

Feb 10, 2023
Lan Fu, Zhiyuan Liu, Jinlong Li, Jeff Simmons, Hongkai Yu, Song Wang


Decoder Tuning: Efficient Language Understanding as Decoding

Dec 16, 2022
Ganqu Cui, Wentao Li, Ning Ding, Longtao Huang, Zhiyuan Liu, Maosong Sun


Mul-GAD: a semi-supervised graph anomaly detection framework via aggregating multi-view information

Dec 11, 2022
Zhiyuan Liu, Chunjie Cao, Jingzhang Sun


Visually Grounded Commonsense Knowledge Acquisition

Nov 22, 2022
Yuan Yao, Tianyu Yu, Ao Zhang, Mengdi Li, Ruobing Xie, Cornelius Weber, Zhiyuan Liu, Haitao Zheng, Stefan Wermter, Tat-Seng Chua, Maosong Sun


Finding Skill Neurons in Pre-trained Transformer-based Language Models

Nov 14, 2022
Xiaozhi Wang, Kaiyue Wen, Zhengyan Zhang, Lei Hou, Zhiyuan Liu, Juanzi Li


MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

Nov 14, 2022
Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou


FPT: Improving Prompt Tuning Efficiency via Progressive Training

Nov 13, 2022
Yufei Huang, Yujia Qin, Huadong Wang, Yichun Yin, Maosong Sun, Zhiyuan Liu, Qun Liu
