
Urmish Thakker

Efficiently Adapting Pretrained Language Models To New Languages

Nov 09, 2023

Training Large Language Models Efficiently with Sparsity and Dataflow

Apr 11, 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts

Feb 02, 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization

Oct 15, 2021

MLPerf Tiny Benchmark

Jun 28, 2021

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices

Feb 14, 2021

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers

Oct 25, 2020

Rank and run-time aware compression of NLP Applications

Oct 06, 2020

Benchmarking TinyML Systems: Challenges and Direction

Mar 10, 2020