
Clemens Winter


SPT-NRTL: A physics-guided machine learning model to predict thermodynamically consistent activity coefficients

Sep 09, 2022
Benedikt Winter, Clemens Winter, Timm Esper, Johannes Schilling, André Bardow

A smile is all you need: Predicting limiting activity coefficients from SMILES with natural language processing

Jun 15, 2022
Benedikt Winter, Clemens Winter, Johannes Schilling, André Bardow

Evaluating Large Language Models Trained on Code

Jul 14, 2021
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba

A Generalizable Approach to Learning Optimizers

Jun 07, 2021
Diogo Almeida, Clemens Winter, Jie Tang, Wojciech Zaremba

Language Models are Few-Shot Learners

Jun 05, 2020
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
