Matthias Bethge

Reflecting on the State of Rehearsal-free Continual Learning with Pretrained Models
Jun 13, 2024

Identifying latent state transition in non-linear dynamical systems
Jun 06, 2024

The Entropy Enigma: Success and Failure of Entropy Minimization
May 08, 2024

Wu's Method can Boost Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry
Apr 11, 2024

No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance
Apr 08, 2024

Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress
Feb 29, 2024

Investigating Continual Pretraining in Large Language Models: Insights and Implications
Feb 27, 2024

Disentangled Continual Learning: Separating Memory Edits from Model Updates
Dec 27, 2023

Have we built machines that think like people?
Nov 27, 2023

Continual Learning: Applications and the Road Forward
Nov 21, 2023