Jieyu Zhao

TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning

Jun 22, 2023
Ruijie Zheng, Xiyao Wang, Yanchao Sun, Shuang Ma, Jieyu Zhao, Huazhe Xu, Hal Daumé III, Furong Huang

Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN

Nov 16, 2022
Anaelia Ovalle, Sunipa Dev, Jieyu Zhao, Majid Sarrafzadeh, Kai-Wei Chang

Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers

Oct 28, 2022
Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, Kai-Wei Chang

SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models

Oct 13, 2022
Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger

DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation

May 25, 2022
Jingnong Qu, Liunian Harold Li, Jieyu Zhao, Sunipa Dev, Kai-Wei Chang

What do Bias Measures Measure?

Aug 07, 2021
Sunipa Dev, Emily Sheng, Jieyu Zhao, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Nanyun Peng, Kai-Wei Chang

Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?

Jun 02, 2021
Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Kai-Wei Chang

Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation

Apr 12, 2021
Chong Zhang, Jieyu Zhao, Huan Zhang, Kai-Wei Chang, Cho-Jui Hsieh

LOGAN: Local Group Bias Detection by Clustering

Oct 06, 2020
Jieyu Zhao, Kai-Wei Chang

Fairness-Aware Explainable Recommendation over Knowledge Graphs

Jun 28, 2020
Zuohui Fu, Yikun Xian, Ruoyuan Gao, Jieyu Zhao, Qiaoying Huang, Yingqiang Ge, Shuyuan Xu, Shijie Geng, Chirag Shah, Yongfeng Zhang, Gerard de Melo
