Marzieh Tahaei

QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning

Feb 16, 2024

Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)

Sep 16, 2023

SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks

Sep 01, 2023

On the Transferability of Whisper-based Representations for "In-the-Wild" Cross-Task Downstream Speech Applications

May 23, 2023

KronA: Parameter Efficient Tuning with Kronecker Adapter

Dec 20, 2022

Kronecker Decomposition for GPT Compression

Oct 15, 2021

FoCL: Feature-Oriented Continual Learning for Generative Models

Mar 09, 2020