Christian Szegedy

Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization

Mar 26, 2024
Jin Peng Zhou, Charles Staats, Wenda Li, Christian Szegedy, Kilian Q. Weinberger, Yuhuai Wu

Magnushammer: A Transformer-based Approach to Premise Selection

Mar 08, 2023
Maciej Mikuła, Szymon Antoniak, Szymon Tworkowski, Albert Qiaochu Jiang, Jin Peng Zhou, Christian Szegedy, Łukasz Kuciński, Piotr Miłoś, Yuhuai Wu

Autoformalization with Large Language Models

May 25, 2022
Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy

Memorizing Transformers

Mar 16, 2022
Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy

Hierarchical Transformers Are More Efficient Language Models

Oct 26, 2021
Piotr Nawrot, Szymon Tworkowski, Michał Tyrolski, Łukasz Kaiser, Yuhuai Wu, Christian Szegedy, Henryk Michalewski

LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning

Jan 15, 2021
Yuhuai Wu, Markus Rabe, Wenda Li, Jimmy Ba, Roger Grosse, Christian Szegedy

Language Modeling for Formal Mathematics

Jun 10, 2020
Markus N. Rabe, Dennis Lee, Kshitij Bansal, Christian Szegedy

Mathematical Reasoning in Latent Space

Sep 26, 2019
Dennis Lee, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Kshitij Bansal

Learning to Reason in Large Theories without Imitation

May 25, 2019
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy
