Tijmen Blankevoort

SpinQuant: LLM quantization with learned rotations
May 28, 2024

Bitune: Bidirectional Instruction-Tuning
May 23, 2024

Think Big, Generate Quick: LLM-to-SLM for Fast Autoregressive Decoding
Feb 26, 2024

InterroGate: Learning to Share, Specialize, and Prune Representations for Multi-task Learning
Feb 26, 2024

GPTVQ: The Blessing of Dimensionality for LLM Quantization
Feb 23, 2024

The LLM Surgeon
Dec 28, 2023

VeRA: Vector-based Random Matrix Adaptation
Oct 17, 2023

Scalarization for Multi-Task and Multi-Domain Learning at Scale
Oct 13, 2023

Efficient Neural PDE-Solvers using Quantization Aware Training
Aug 14, 2023

QBitOpt: Fast and Accurate Bitwidth Reallocation during Training
Jul 10, 2023