
Samihan Dani

Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems

May 03, 2023
Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, Mark Riedl

[Figures 1–4]

Generative Artificial Intelligence systems have been developed for image, code, story, and game generation with the goal of facilitating human creativity. Recent work on neural generative systems has emphasized one particular means of interacting with AI systems: the user provides a specification, usually in the form of prompts, and the AI system generates the content. However, there are other configurations of human and AI coordination, such as co-creativity (CC), in which both human and AI systems can contribute to content creation, and mixed-initiative (MI), in which both human and AI systems can initiate content changes. In this paper, we define a hypothetical human-AI configuration design space consisting of different means for humans and AI systems to communicate creative intent to each other. We conduct a human participant study with 185 participants to understand how users want to interact with differently configured MI-CC systems. We find that MI-CC systems with more extensive coverage of the design space are rated higher than, or on par with, less extensive systems on a variety of creative and goal-completion metrics, demonstrating that wider coverage of the design space can improve user experience and achievement when using the system. Preferences vary greatly between expertise groups, suggesting the development of adaptive, personalized MI-CC systems. Participants also identified new design space dimensions, including scrutability -- the ability to poke and prod at models -- and explainability.

* Accepted by ICCC'23 

Guiding Neural Story Generation with Reader Models

Dec 16, 2021
Xiangyu Peng, Kaige Xie, Amal Alabdulkarim, Harshith Kayam, Samihan Dani, Mark O. Riedl

[Figures 1–4]

Automated storytelling has long captured the attention of researchers because of the ubiquity of narratives in everyday life. However, it is challenging to maintain coherence and stay on topic toward a specific ending when generating narratives with neural language models. In this paper, we introduce Story generation with Reader Models (StoRM), a framework in which a reader model is used to reason about how the story should progress. A reader model infers what a human reader believes about the concepts, entities, and relations of the fictional story world. We show how an explicit reader model, represented as a knowledge graph, affords story coherence and provides controllability in the form of achieving a given story-world state goal. Experiments show that our model produces significantly more coherent and on-topic stories, outperforming baselines on dimensions including plot plausibility and staying on topic. Our system also outperforms outline-guided story generation baselines at composing given concepts without a prescribed ordering.
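The abstract describes the reader model as an explicit knowledge graph of the reader's beliefs about the story world, with controllability framed as reaching a goal world state. The paper's actual implementation is not reproduced here; the following is a minimal illustrative sketch (all class and method names are hypothetical, not from the StoRM codebase) of tracking entity–relation triples and checking whether a goal state has been reached:

```python
# Minimal illustrative sketch of a knowledge-graph reader model.
# All names are hypothetical, not taken from the StoRM implementation.

class ReaderModel:
    """Tracks what a reader would believe about the story world
    as a set of (subject, relation, object) triples."""

    def __init__(self):
        self.beliefs = set()

    def update(self, triples):
        """Add facts inferred from newly generated story text."""
        self.beliefs.update(triples)

    def satisfies(self, goal_triples):
        """True if every triple in the goal world state is believed."""
        return set(goal_triples) <= self.beliefs


model = ReaderModel()
model.update([("knight", "holds", "sword"), ("dragon", "in", "cave")])
model.update([("knight", "in", "cave")])

goal = [("knight", "in", "cave"), ("dragon", "in", "cave")]
print(model.satisfies(goal))  # True: the goal world state is reached
```

In this framing, candidate story continuations could be scored by how much closer their inferred triples bring the belief set to the goal state, which is one way an explicit world model can steer generation toward a given ending.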
