Sourav Dutta

AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification

Nov 01, 2023
Yongxin Huang, Kexin Wang, Sourav Dutta, Raj Nath Patel, Goran Glavaš, Iryna Gurevych

Gradient Sparsification For Masked Fine-Tuning of Transformers

Jul 19, 2023
James O'Neill, Sourav Dutta

Attention over pre-trained Sentence Embeddings for Long Document Classification

Jul 18, 2023
Amine Abdaoui, Sourav Dutta

AI-assisted Improved Service Provisioning for Low-latency XR over 5G NR

Jul 18, 2023
Moyukh Laha, Dibbendu Roy, Sourav Dutta, Goutam Das

Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models

Jul 12, 2023
James O'Neill, Sourav Dutta

AX-MABSA: A Framework for Extremely Weakly Supervised Multi-label Aspect Based Sentiment Analysis

Nov 07, 2022
Sabyasachi Kamila, Walid Magdy, Sourav Dutta, MingXue Wang

ACO based Adaptive RBFN Control for Robot Manipulators

Aug 19, 2022
Sheheeda Manakkadu, Sourav Dutta

Aligned Weight Regularizers for Pruning Pretrained Neural Networks

Apr 05, 2022
James O'Neill, Sourav Dutta, Haytham Assem

Deep Neural Compression Via Concurrent Pruning and Self-Distillation

Sep 30, 2021
James O'Neill, Sourav Dutta, Haytham Assem

EdinSaar@WMT21: North-Germanic Low-Resource Multilingual NMT

Sep 29, 2021
Svetlana Tchistiakova, Jesujoba Alabi, Koel Dutta Chowdhury, Sourav Dutta, Dana Ruiter
