Text Infilling

Text infilling is the task of predicting missing spans of text that are consistent with the preceding and subsequent text. It generalizes the cloze task, which historically refers to infilling individual words.
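As an illustration, infilling is often framed by rearranging a document into prefix, middle, and suffix segments joined by sentinel tokens (the fill-in-the-middle format): the model conditions on the prefix and suffix and must generate the missing middle. The sketch below uses hypothetical sentinel strings, since each model defines its own vocabulary.

```python
def make_fim_example(text: str, span_start: int, span_end: int) -> tuple[str, str]:
    """Turn a document into a fill-in-the-middle training pair.

    The span [span_start:span_end) is removed and becomes the target;
    the model sees prefix + suffix and must predict the missing middle.
    The <PRE>/<SUF>/<MID> sentinels are illustrative placeholders,
    not any particular model's special tokens.
    """
    prefix = text[:span_start]
    middle = text[span_start:span_end]
    suffix = text[span_end:]
    prompt = f"<PRE>{prefix}<SUF>{suffix}<MID>"
    return prompt, middle


prompt, target = make_fim_example("The quick brown fox jumps over the lazy dog.", 10, 19)
# The prompt asks the model to restore "brown fox" given the surrounding text.
```

A span of a single word recovers the classic cloze setting; variable-length spans give the general infilling task described above.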

UniVoice: Unifying Autoregressive ASR and Flow-Matching based TTS with Large Language Models

Oct 06, 2025

DAIEN-TTS: Disentangled Audio Infilling for Environment-Aware Text-to-Speech Synthesis

Sep 18, 2025

Flexible-length Text Infilling for Discrete Diffusion Models

Jun 16, 2025

ClozeMath: Improving Mathematical Reasoning in Language Models by Learning to Fill Equations

Jun 04, 2025

LaViDa: A Large Diffusion Language Model for Multimodal Understanding

May 22, 2025

Insertion Language Models: Sequence Generation with Arbitrary-Position Insertions

May 09, 2025

EFIM: Efficient Serving of LLMs for Infilling Tasks with Improved KV Cache Reuse

May 29, 2025

Re-identification of De-identified Documents with Autoregressive Infilling

May 19, 2025

Enhancing Spoken Discourse Modeling in Language Models Using Gestural Cues

Mar 05, 2025

Exploring Next Token Prediction in Theory of Mind (ToM) Tasks: Comparative Experiments with GPT-2 and LLaMA-2 AI Models

Apr 22, 2025