"Text": models, code, and papers
eVAE: Evolutionary Variational Autoencoder

Jan 01, 2023
Zhangkai Wu, Longbing Cao, Lei Qi

Inference of Media Bias and Content Quality Using Natural-Language Processing

Dec 01, 2022
Zehan Chao, Denali Molitor, Deanna Needell, Mason A. Porter

Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages

May 02, 2022
Felix Wu, Kwangyoun Kim, Shinji Watanabe, Kyu Han, Ryan McDonald, Kilian Q. Weinberger, Yoav Artzi

A Health Focused Text Classification Tool (HFTCT)

Oct 23, 2022
Baadr Suleman M Alwheepy, Leandros Maglaras, Nick Ayres

Deep Bidirectional Language-Knowledge Graph Pretraining

Oct 19, 2022
Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D Manning, Percy Liang, Jure Leskovec

Training Integer-Only Deep Recurrent Neural Networks

Dec 22, 2022
Vahid Partovi Nia, Eyyüb Sari, Vanessa Courville, Masoud Asgharian

HMM-based data augmentation for E2E systems for building conversational speech synthesis systems

Dec 22, 2022
Ishika Gupta, Anusha Prakash, Jom Kuriakose, Hema A. Murthy

Text and Code Embeddings by Contrastive Pre-Training

Jan 24, 2022
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng

Improving the Robustness of Summarization Models by Detecting and Removing Input Noise

Dec 20, 2022
Kundan Krishna, Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J. Liu

A Survey of Pretrained Language Models Based Text Generation

Feb 02, 2022
Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
