Vikas Chandra

Enhance audio generation controllability through representation similarity regularization

Sep 15, 2023

Stack-and-Delay: a new codebook pattern for music generation

Sep 15, 2023

TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression for On-Device ASR Models

Sep 05, 2023

Revisiting Sample Size Determination in Natural Language Understanding

Jul 01, 2023

Mixture-of-Supernets: Improving Weight-Sharing Supernet Training with Architecture-Routed Mixture-of-Experts

Jun 08, 2023

LLM-QAT: Data-Free Quantization Aware Training for Large Language Models

May 29, 2023

PathFusion: Path-consistent Lidar-Camera Deep Feature Fusion

Dec 12, 2022

SDRM3: A Dynamic Scheduler for Dynamic Real-time Multi-model ML Workloads

Dec 07, 2022

Fast Point Cloud Generation with Straight Flows

Dec 04, 2022

XRBench: An Extended Reality (XR) Machine Learning Benchmark Suite for the Metaverse

Nov 16, 2022