Shai Shalev-Shwartz

Hebrew University

FormulaOne: Measuring the Depth of Algorithmic Reasoning Beyond Competitive Programming

Jul 17, 2025

Artificial Expert Intelligence through PAC-reasoning

Dec 03, 2024

Untangling Lariats: Subgradient Following of Variationally Penalized Objectives

May 07, 2024

Jamba: A Hybrid Transformer-Mamba Language Model

Mar 28, 2024

Managing AI Risks in an Era of Rapid Progress

Oct 26, 2023

SubTuning: Efficient Finetuning for Multi-Task Learning

Feb 14, 2023

MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning

May 01, 2022

Standing on the Shoulders of Giant Frozen Language Models

Apr 21, 2022

Knowledge Distillation: Bad Models Can Be Good Role Models

Mar 28, 2022

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks

Jan 31, 2021