Pei Zhou

School of Optoelectronic Science and Engineering and Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006, China; Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province and Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, Suzhou 215006, China; Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China

Commonsense-Focused Dialogues for Response Generation: An Empirical Study

Sep 21, 2021
Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur

An RF-source-free microwave photonic radar with an optically injected semiconductor laser for high-resolution detection and imaging

Jun 11, 2021
Pei Zhou, Rengheng Zhang, Nianqiang Li, Zhidong Jiang, Shilong Pan

Go Beyond Plain Fine-tuning: Improving Pretrained Models for Social Commonsense

May 12, 2021
Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur

Incorporating Commonsense Knowledge Graph in Pretrained Models for Social Commonsense Tasks

May 12, 2021
Ting-Yun Chang, Yang Liu, Karthik Gopalakrishnan, Behnam Hedayatnia, Pei Zhou, Dilek Hakkani-Tur

Probing Causal Common Sense in Dialogue Response Generation

Apr 21, 2021
Pei Zhou, Pegah Jandaghi, Bill Yuchen Lin, Justin Cho, Jay Pujara, Xiang Ren

Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

Mar 21, 2021
Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan

Can BERT Reason? Logically Equivalent Probes for Evaluating the Inference Capabilities of Language Models

May 02, 2020
Pei Zhou, Rahul Khanna, Bill Yuchen Lin, Daniel Ho, Xiang Ren, Jay Pujara

CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning

Nov 09, 2019
Bill Yuchen Lin, Ming Shen, Yu Xing, Pei Zhou, Xiang Ren

Retrofitting Contextualized Word Embeddings with Paraphrases

Sep 12, 2019
Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang