Stanislav Fort

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Apr 12, 2022

Adversarial vulnerability of powerful near out-of-distribution detection
Jan 18, 2022

How many degrees of freedom do we need to train deep networks: a loss landscape perspective
Jul 13, 2021

A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
Jun 16, 2021

Exploring the Limits of Out-of-Distribution Detection
Jun 06, 2021

Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error
May 27, 2021

Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes
Apr 23, 2021

Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel
Oct 28, 2020

Training independent subnetworks for robust prediction
Oct 13, 2020

The Break-Even Point on Optimization Trajectories of Deep Neural Networks
Feb 21, 2020