"Information": models, code, and papers

M^2UNet: MetaFormer Multi-scale Upsampling Network for Polyp Segmentation

Jun 14, 2023
Quoc-Huy Trinh, Nhat-Tan Bui, Trong-Hieu Nguyen Mau, Minh-Van Nguyen, Hai-Minh Phan, Minh-Triet Tran, Hai-Dang Nguyen

A Weighted Autoencoder-Based Approach to Downlink NOMA Constellation Design

Jun 23, 2023
Vukan Ninkovic, Dejan Vukobratovic, Adriano Pastore, Carles Anton-Haro

Prompt-based Extraction of Social Determinants of Health Using Few-shot Learning

Jun 12, 2023
Giridhar Kaushik Ramachandran, Yujuan Fu, Bin Han, Kevin Lybarger, Nicholas J Dobbins, Özlem Uzuner, Meliha Yetisgen

EriBERTa: A Bilingual Pre-Trained Language Model for Clinical Natural Language Processing

Jun 12, 2023
Iker de la Iglesia, Aitziber Atutxa, Koldo Gojenola, Ander Barrena

A Weakly Supervised Approach to Emotion-change Prediction and Improved Mood Inference

Jun 12, 2023
Soujanya Narayana, Ibrahim Radwan, Ravikiran Parameshwara, Iman Abbasnejad, Akshay Asthana, Ramanathan Subramanian, Roland Goecke

Data-Free Quantization via Mixed-Precision Compensation without Fine-Tuning

Jul 02, 2023
Jun Chen, Shipeng Bai, Tianxin Huang, Mengmeng Wang, Guanzhong Tian, Yong Liu

What Makes ImageNet Look Unlike LAION

Jun 27, 2023
Ali Shirali, Moritz Hardt

Cross-Language Speech Emotion Recognition Using Multimodal Dual Attention Transformers

Jun 27, 2023
Syed Aun Muhammad Zaidi, Siddique Latif, Junaid Qadir

Rethinking Closed-loop Training for Autonomous Driving

Jun 27, 2023
Chris Zhang, Runsheng Guo, Wenyuan Zeng, Yuwen Xiong, Binbin Dai, Rui Hu, Mengye Ren, Raquel Urtasun

Effective resistance in metric spaces

Jun 27, 2023
Robi Bhattacharjee, Alexander Cloninger, Yoav Freund, Andreas Oslandsbotn
