Mohammed Nowaz Rabbani Chowdhury

Evaluating Fine-Tuned LLM Model For Medical Transcription With Small Low-Resource Languages Validated Dataset

Mar 25, 2026

Robust Heterogeneous Analog-Digital Computing for Mixture-of-Experts Models with Theoretical Generalization Guarantees

Mar 03, 2026

A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts

May 28, 2024

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

Jun 07, 2023