Shaoliang Nie

On the Equivalence of Graph Convolution and Mixup

Sep 29, 2023
Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu, Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian Khabsa, Qifan Wang, Xia Hu

Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

May 11, 2023
Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren

AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning

Oct 12, 2022
Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, Shaoliang Nie

FRAME: Evaluating Simulatability Metrics for Free-Text Rationales

Jul 02, 2022
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren

ER-TEST: Evaluating Explanation Regularization Methods for NLP Models

May 25, 2022
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren

Detection, Disambiguation, Re-ranking: Autoregressive Entity Linking as a Multi-Task Problem

Apr 12, 2022
Khalil Mrini, Shaoliang Nie, Jiatao Gu, Sinong Wang, Maziar Sanjabi, Hamed Firooz

BARACK: Partially Supervised Group Robustness With Guarantees

Dec 31, 2021
Nimit Sohoni, Maziar Sanjabi, Nicolas Ballas, Aditya Grover, Shaoliang Nie, Hamed Firooz, Christopher Ré

UniREx: A Unified Learning Framework for Language Model Rationale Extraction

Dec 16, 2021
Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz

Modality-specific Distillation

Jan 06, 2021
Woojeong Jin, Maziar Sanjabi, Shaoliang Nie, Liang Tan, Xiang Ren, Hamed Firooz

High Resolution Face Completion with Multiple Controllable Attributes via Fully End-to-End Progressive Generative Adversarial Networks

Jan 23, 2018
Zeyuan Chen, Shaoliang Nie, Tianfu Wu, Christopher G. Healey
