Nanyun Peng

Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding

Sep 05, 2024

REFFLY: Melody-Constrained Lyrics Editing Model

Aug 30, 2024

ARMADA: Attribute-Based Multimodal Data Augmentation

Aug 19, 2024

Unlocking Exocentric Video-Language Data for Egocentric Video Representation Learning

Aug 07, 2024

QUDSELECT: Selective Decoding for Questions Under Discussion Parsing

Aug 02, 2024

Are Large Language Models Capable of Generating Human-Level Narratives?

Jul 18, 2024

Evaluating Human Alignment and Model Faithfulness of LLM Rationale

Jun 28, 2024

LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning

Jun 20, 2024

Synchronous Faithfulness Monitoring for Trustworthy Retrieval-Augmented Generation

Jun 19, 2024

Adaptable Logical Control for Large Language Models

Jun 19, 2024