Szymon Tworkowski

Analysing The Impact of Sequence Composition on Language Model Pre-Training

Feb 21, 2024

Structured Packing in LLM Training Improves Long Context Utilization

Jan 02, 2024

Explaining Competitive-Level Programming Solutions using LLMs

Jul 11, 2023

Focused Transformer: Contrastive Training for Context Scaling

Jul 06, 2023

Magnushammer: A Transformer-based Approach to Premise Selection

Mar 08, 2023

Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers

May 22, 2022

Hierarchical Transformers Are More Efficient Language Models

Oct 26, 2021