
Hanjie Chen


Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

Mar 23, 2022
Hanjie Chen, Yangfeng Ji

Explaining Prediction Uncertainty of Pre-trained Language Models by Detecting Uncertain Words in Inputs

Jan 11, 2022
Hanjie Chen, Yangfeng Ji

Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing

Aug 11, 2021
Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi

Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks

Apr 13, 2021
Hanjie Chen, Song Feng, Jatin Ganhotra, Hui Wan, Chulaka Gunasekara, Sachindra Joshi, Yangfeng Ji

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers

Oct 01, 2020
Hanjie Chen, Yangfeng Ji

Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection

Apr 04, 2020
Hanjie Chen, Guangtao Zheng, Yangfeng Ji

Improving the Explainability of Neural Sentiment Classifiers via Data Augmentation

Oct 09, 2019
Hanjie Chen, Yangfeng Ji
