"Text": models, code, and papers
What Do Patients Say About Their Disease Symptoms? Deep Multilabel Text Classification With Human-in-the-Loop Curation for Automatic Labeling of Patient Self Reports of Problems

May 08, 2023
Lakshmi Arbatti, Abhishek Hosamath, Vikram Ramanarayanan, Ira Shoulson

Improving visual image reconstruction from human brain activity using latent diffusion models via multiple decoded inputs

Jun 20, 2023
Yu Takagi, Shinji Nishimoto

Exploring Diverse In-Context Configurations for Image Captioning

May 26, 2023
Xu Yang, Yongliang Wu, Mingzhuo Yang, Haokun Chen, Xin Geng

DesCo: Learning Object Recognition with Rich Language Descriptions

Jun 24, 2023
Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang

X&Fuse: Fusing Visual Information in Text-to-Image Generation

Mar 02, 2023
Yuval Kirstain, Omer Levy, Adam Polyak

ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue

May 23, 2023
Haoqin Tu, Yitong Li, Fei Mi, Zhongliang Yang

Towards Zero-shot Relation Extraction in Web Mining: A Multimodal Approach with Relative XML Path

May 23, 2023
Zilong Wang, Jingbo Shang

Enhancing Vision-Language Pre-Training with Jointly Learned Questioner and Dense Captioner

May 19, 2023
Zikang Liu, Sihan Chen, Longteng Guo, Handong Li, Xingjian He, Jing Liu

Learning Universal Policies via Text-Guided Video Generation

Feb 02, 2023
Yilun Du, Mengjiao Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Joshua B. Tenenbaum, Dale Schuurmans, Pieter Abbeel

How learners produce data from text in classifying clickbait

Jan 28, 2023
Nicholas J. Horton, Jie Chao, Phebe Palmer, William Finzer