Wenqing Chen

LLM-Guided Multi-View Hypergraph Learning for Human-Centric Explainable Recommendation

Jan 16, 2024

Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding

Oct 18, 2023

Accurate Use of Label Dependency in Multi-Label Text Classification Through the Lens of Causality

Oct 11, 2023

Unlock the Potential of Counterfactually-Augmented Data in Out-Of-Distribution Generalization

Oct 10, 2023

Improving the Out-Of-Distribution Generalization Capability of Language Models: Counterfactually-Augmented Data is not Enough

Feb 18, 2023

MaxGNR: A Dynamic Weight Strategy via Maximizing Gradient-to-Noise Ratio for Multi-Task Learning

Feb 18, 2023

Dependent Multi-Task Learning with Causal Intervention for Image Captioning

May 18, 2021

Disentangled Makeup Transfer with Generative Adversarial Network

Jul 02, 2019

DS-VIO: Robust and Efficient Stereo Visual Inertial Odometry based on Dual Stage EKF

May 02, 2019

Show, Attend and Translate: Unpaired Multi-Domain Image-to-Image Translation with Visual Attention

Nov 19, 2018