"Text": models, code, and papers

Cognitive Computing to Optimize IT Services

Dec 28, 2021
Abbas Raza Ali

Jurassic is (almost) All You Need: Few-Shot Meaning-to-Text Generation for Open-Domain Dialogue

Oct 15, 2021
Lena Reed, Cecilia Li, Angela Ramirez, Liren Wu, Marilyn Walker

Text Detection and Recognition in the Wild: A Review

Jun 08, 2020
Zobeir Raisi, Mohamed A. Naiel, Paul Fieguth, Steven Wardell, John Zelek

Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)

Apr 06, 2022
Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, Yongfeng Zhang

Table Pretraining: A Survey on Model Architectures, Pretraining Objectives, and Downstream Tasks

Jan 24, 2022
Haoyu Dong, Zhoujun Cheng, Xinyi He, Mengyu Zhou, Anda Zhou, Fan Zhou, Ao Liu, Shi Han, Dongmei Zhang

Consensus-Aware Visual-Semantic Embedding for Image-Text Matching

Jul 17, 2020
Haoran Wang, Ying Zhang, Zhong Ji, Yanwei Pang, Lin Ma

The Tree Loss: Improving Generalization with Many Classes

Apr 16, 2022
Yujie Wang, Mike Izbicki

DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages

May 24, 2022
Gabriele Sarti, Arianna Bisazza, Ana Guerberof Arenas, Antonio Toral

ETMS@IITKGP at SemEval-2022 Task 10: Structured Sentiment Analysis Using A Generative Approach

May 01, 2022
Raghav R, Adarsh Vemali, Rajdeep Mukherjee

Does the Order of Training Samples Matter? Improving Neural Data-to-Text Generation with Curriculum Learning

Feb 06, 2021
Ernie Chang, Hui-Syuan Yeh, Vera Demberg
