MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies

May 26, 2023
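As the title states, MixCE trains autoregressive language models on a mixture of the forward cross-entropy (the standard maximum-likelihood loss) and a reverse cross-entropy. The sketch below is only an illustration of what such a mixture can look like: the mixing weight η, the data distribution P, and the model distribution Q_θ are notation chosen here for exposition, and the paper's exact formulation (in particular how it approximates the reverse term, which is not directly computable from samples) may differ.

```latex
% Illustrative notation (not taken from the paper): P is the data
% distribution, Q_theta the model distribution, and eta in [0,1] a
% mixing weight. The forward term is the usual maximum-likelihood
% objective; the reverse term swaps the roles of the two distributions
% and must be approximated in practice, since P is only observed
% through samples.
\[
  \mathcal{L}_{\text{MixCE}}(\theta)
  \;=\;
  \eta \,\underbrace{\mathbb{E}_{x \sim P}\!\bigl[-\log Q_\theta(x)\bigr]}_{\text{forward cross-entropy}}
  \;+\;
  (1-\eta)\,\underbrace{\mathbb{E}_{x \sim Q_\theta}\!\bigl[-\log P(x)\bigr]}_{\text{reverse cross-entropy}}
\]
```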

View paper on arXiv