Ido Galil

When Should LLMs Be Less Specific? Selective Abstraction for Reliable Long-Form Text Generation

Feb 13, 2026

Extending Puzzle for Mixture-of-Experts Reasoning Models with Application to GPT-OSS Acceleration

Feb 12, 2026

NVIDIA Nemotron 3: Efficient and Open Intelligence

Dec 24, 2025

Llama-Nemotron: Efficient Reasoning Models

May 02, 2025

FFN Fusion: Rethinking Sequential Computation in Large Language Models

Mar 24, 2025

No Data, No Optimization: A Lightweight Method To Disrupt Neural Networks With Sign-Flips

Feb 11, 2025

Padding Tone: A Mechanistic Analysis of Padding Tokens in T2I Models

Jan 12, 2025

Puzzle: Distillation-Based NAS for Inference-Optimized LLMs

Dec 03, 2024

Hierarchical Selective Classification

May 19, 2024

What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 ImageNet Classifiers

Feb 23, 2023