
Winnie Xu

KTO: Model Alignment as Prospect Theoretic Optimization

Feb 02, 2024

Neural Functional Transformers

May 22, 2023

Deep Latent State Space Models for Time-Series Generation

Dec 24, 2022

Language Model Cascades

Jul 28, 2022

Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt

Jun 16, 2022

Multi-Game Decision Transformers

May 30, 2022

Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations

Apr 15, 2022

NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections

Feb 02, 2022

Noisy Feature Mixup

Oct 05, 2021

Prioritized training on points that are learnable, worth learning, and not yet learned

Jul 06, 2021