"Text": models, code, and papers
Towards Coding Social Science Datasets with Language Models

Jun 03, 2023
Christopher Michael Rytting, Taylor Sorensen, Lisa Argyle, Ethan Busby, Nancy Fulda, Joshua Gubler, David Wingate

(4 figures)

TreeMAN: Tree-enhanced Multimodal Attention Network for ICD Coding

May 29, 2023
Zichen Liu, Xuyuan Liu, Yanlong Wen, Guoqing Zhao, Fen Xia, Xiaojie Yuan

(4 figures)

Extrinsic Factors Affecting the Accuracy of Biomedical NER

May 29, 2023
Zhiyi Li, Shengjie Zhang, Yujie Song, Jungyeul Park

(2 figures)

Diversifying Joint Vision-Language Tokenization Learning

Jun 06, 2023
Vardaan Pahuja, AJ Piergiovanni, Anelia Angelova

(4 figures)

An Analysis of Reader Engagement in Literary Fiction through Eye Tracking and Linguistic Features

Jun 06, 2023
Rose Neis, Karin de Langis, Zae Myung Kim, Dongyeop Kang

(4 figures)

Referred by Multi-Modality: A Unified Temporal Transformer for Video Object Segmentation

May 25, 2023
Shilin Yan, Renrui Zhang, Ziyu Guo, Wenchao Chen, Wei Zhang, Hongyang Li, Yu Qiao, Zhongjiang He, Peng Gao

(4 figures)

Fine-Grained Product Classification on Leaflet Advertisements

May 05, 2023
Daniel Ladwig, Bianca Lamm, Janis Keuper

(4 figures)

RISCLIP: Referring Image Segmentation Framework using CLIP

Jun 14, 2023
Seoyeon Kim, Minguk Kang, Jaesik Park

(4 figures)

World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models

Jun 14, 2023
Ziqiao Ma, Jiayi Pan, Joyce Chai

(4 figures)

Zambezi Voice: A Multilingual Speech Corpus for Zambian Languages

Jun 13, 2023
Claytone Sikasote, Kalinda Siaminwe, Stanly Mwape, Bangiwe Zulu, Mofya Phiri, Martin Phiri, David Zulu, Mayumbo Nyirenda, Antonios Anastasopoulos

(4 figures)