Kyunghyun Cho

Protein Discovery with Discrete Walk-Jump Sampling

Jun 08, 2023
Nathan C. Frey, Daniel Berenberg, Karina Zadorozhny, Joseph Kleinhenz, Julien Lafrance-Vanasse, Isidro Hotzel, Yan Wu, Stephen Ra, Richard Bonneau, Kyunghyun Cho, Andreas Loukas, Vladimir Gligorijevic, Saeed Saremi

BOtied: Multi-objective Bayesian optimization with tied multivariate ranks

Jun 01, 2023
Ji Won Park, Nataša Tagasovska, Michael Maser, Stephen Ra, Kyunghyun Cho

Protein Design with Guided Discrete Diffusion

May 31, 2023
Nate Gruver, Samuel Stanton, Nathan C. Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, Andrew Gordon Wilson

Two Failures of Self-Consistency in the Multi-Step Reasoning of LLMs

May 23, 2023
Angelica Chen, Jason Phang, Alicia Parrish, Vishakh Padmakumar, Chen Zhao, Samuel R. Bowman, Kyunghyun Cho

Towards Understanding and Improving GFlowNet Training

May 11, 2023
Max W. Shen, Emmanuel Bengio, Ehsan Hajiramezanali, Andreas Loukas, Kyunghyun Cho, Tommaso Biancalani

A Comparison of Semi-Supervised Learning Techniques for Streaming ASR at Scale

Apr 19, 2023
Cal Peyser, Michael Picheny, Kyunghyun Cho, Rohit Prabhavalkar, Ronny Huang, Tara Sainath

Training Language Models with Language Feedback at Scale

Apr 09, 2023
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez

Improving Code Generation by Training with Natural Language Feedback

Mar 28, 2023
Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez

Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy

Feb 08, 2023
Cheolhyoung Lee, Kyunghyun Cho
