Mark Riedl

An Ontology of Co-Creative AI Systems

Oct 11, 2023
Zhiyu Lin, Mark Riedl

The term co-creativity has been used to describe a wide variety of human-AI assemblages in which human and AI are both involved in a creative endeavor. To assist with disambiguating research efforts, we present an ontology of co-creative systems, focusing on how responsibilities are divided between the human and the AI system and on the information exchanged between them. We extend Lubart's original ontology of creativity support tools with three new categories emphasizing artificial intelligence: computer-as-subcontractor, computer-as-critic, and computer-as-teammate, some of which have sub-categorizations.
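
Concretely, such an ontology could be encoded as a small data structure. The sketch below is illustrative only: the three new role names come from the abstract, the four original Lubart categories are as commonly cited from Lubart (2005), and the profile fields and example values are invented for the demonstration.

```python
# Illustrative encoding of the ontology; not taken from the paper.
from dataclasses import dataclass
from enum import Enum, auto

class CoCreativeRole(Enum):
    # Lubart's original creativity-support categories (Lubart, 2005).
    NANNY = auto()
    PEN_PAL = auto()
    COACH = auto()
    COLLEAGUE = auto()
    # The three AI-focused categories this ontology adds.
    SUBCONTRACTOR = auto()
    CRITIC = auto()
    TEAMMATE = auto()

@dataclass
class SystemProfile:
    """Placement of a co-creative system along the two organizing axes
    named in the abstract: responsibility division and information flow."""
    name: str
    role: CoCreativeRole
    human_responsibilities: list[str]
    ai_responsibilities: list[str]
    information_exchanged: list[str]  # e.g. prompts, sketches, critiques

# Hypothetical example classification.
profile = SystemProfile(
    name="prompt-based image generator",
    role=CoCreativeRole.SUBCONTRACTOR,
    human_responsibilities=["goal setting", "prompting", "curation"],
    ai_responsibilities=["content generation"],
    information_exchanged=["text prompt", "candidate images"],
)
print(profile.role.name)  # SUBCONTRACTOR
```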

* Submitted to NeurIPS Workshop on ML for Creativity and Design 2023 

A Controllable Co-Creative Agent for Game System Design

Aug 04, 2023
Rohan Agarwal, Zhiyu Lin, Mark Riedl

Many advances have been made in procedural content generation for games, and, combined with mixed-initiative co-creativity, these have the potential to greatly benefit human designers. However, co-creative systems for game generation are typically limited to specific genres, rules, or games, limiting the creativity of the designer. We seek to model games abstractly enough to apply to any genre, focusing on designing game systems and mechanics, and to create a controllable, co-creative agent that can collaborate on these designs. We present a model of games using state-machine-like components and resource flows, a set of controllable metrics, a design evaluator that simulates playthroughs with these metrics, and an evolutionary design balancer and generator. We find that this system can both express a wide range of games and be controlled by humans, supporting future co-creative applications.
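
As a rough illustration of the kind of model the abstract describes, here is a toy sketch, not the paper's implementation: components are resource pools, flows move resources each tick, a simulated playthrough yields a pacing metric, and a mutate-and-select loop balances the design toward a target. All names, rates, and the hill-climbing balancer are invented for the example.

```python
# Toy resource-flow game model with a simulated-playthrough evaluator
# and a simple evolutionary balancer; illustrative assumptions only.
import random
from dataclasses import dataclass, field

@dataclass
class Flow:
    src: str
    dst: str
    rate: float  # resources moved per tick

@dataclass
class Game:
    pools: dict[str, float]
    flows: list[Flow] = field(default_factory=list)

def simulate(game: Game, ticks: int = 100) -> int:
    """Return the tick at which the 'player' pool empties (a pacing metric)."""
    pools = dict(game.pools)
    for t in range(ticks):
        for f in game.flows:
            amt = min(f.rate, pools[f.src])
            pools[f.src] -= amt
            pools[f.dst] += amt
        if pools["player"] <= 0:
            return t
    return ticks

def balance(game: Game, target: int, steps: int = 200) -> Game:
    """Hill-climb flow rates so the pacing metric approaches `target`."""
    best, best_err = game, abs(simulate(game) - target)
    for _ in range(steps):
        cand = Game(dict(best.pools),
                    [Flow(f.src, f.dst, max(0.0, f.rate + random.gauss(0, 0.2)))
                     for f in best.flows])
        err = abs(simulate(cand) - target)
        if err < best_err:
            best, best_err = cand, err
    return best

g = Game({"player": 10.0, "enemy": 0.0}, [Flow("player", "enemy", 0.5)])
print(simulate(balance(g, target=50)))  # pacing metric near 50 after balancing
```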

* Thesis 

Thespian: Multi-Character Text Role-Playing Game Agents

Aug 03, 2023
Christopher Cui, Xiangyu Peng, Mark Riedl

Text-adventure games and text role-playing games are grand challenges for reinforcement learning game-playing agents. Text role-playing games are open-ended environments where an agent must faithfully play a particular character. We consider the distinction between characters and actors, where an actor agent has the ability to play multiple characters. We present a framework we call a thespian agent that can learn to emulate multiple characters, along with a soft prompt that can be used to direct it as to which character to play at any time. We further describe an attention mechanism that allows the agent to learn new characters based on previously learned characters in a few-shot fashion. We show that our agent outperforms the state-of-the-art agent framework in multi-character learning and few-shot learning.
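
The two mechanisms named in the abstract, per-character soft prompts and attention over learned characters for few-shot adaptation, might look schematically like the PyTorch sketch below. The tensor shapes, the mean-pooled keys, and the query vector are illustrative assumptions, not the paper's architecture.

```python
# Schematic sketch of soft-prompt character conditioning and
# attention-based few-shot character composition; assumptions only.
import torch
import torch.nn.functional as F

d_model, prompt_len, n_chars = 64, 8, 4

# One learned soft prompt per known character: (n_chars, prompt_len, d_model).
char_prompts = torch.randn(n_chars, prompt_len, d_model, requires_grad=True)

def prompt_for_new_character(query: torch.Tensor) -> torch.Tensor:
    """Few-shot character = attention over existing character prompts.

    `query` (d_model,) is a learned vector describing the new character;
    only this small vector needs training, not a full new prompt.
    """
    keys = char_prompts.mean(dim=1)                           # (n_chars, d_model)
    weights = F.softmax(keys @ query / d_model**0.5, dim=0)   # (n_chars,)
    return torch.einsum("c,cld->ld", weights, char_prompts)   # (prompt_len, d_model)

def condition(input_embeds: torch.Tensor, char_idx: int) -> torch.Tensor:
    """Prepend a character's soft prompt to the input token embeddings."""
    return torch.cat([char_prompts[char_idx], input_embeds], dim=0)

new_prompt = prompt_for_new_character(torch.randn(d_model))
print(new_prompt.shape)  # torch.Size([8, 64])
```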

* 11 pages 

Ambient Adventures: Teaching ChatGPT on Developing Complex Stories

Aug 03, 2023
Zexin Chen, Eric Zhou, Kenneth Eaton, Xiangyu Peng, Mark Riedl

Imaginative play is an area of creativity that could allow robots to engage with the world around them in a much more personified way. Imaginary play can be seen as taking real objects and locations and using them as imaginary objects and locations in virtual scenarios. We adopt the story-generation capability of large language models (LLMs), guided by human-written prompts, to obtain the stories used for imaginary play. The generated stories are then simplified and mapped into action sequences that guide the agent through the imaginary play. To evaluate whether the agent can successfully finish the imaginary play, we also design a text adventure game that simulates a house as the playground for the agent to interact with.
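
A loose sketch of the pipeline the abstract outlines: `generate_story` and the one-line "simplification" are stand-ins for the LLM calls (the paper uses ChatGPT; the actual prompts and mapping are not specified here), and the toy `HouseGame` class is invented for illustration.

```python
# Illustrative story -> action-sequence -> text-game pipeline; all
# functions are placeholders for the paper's LLM-based components.
def generate_story(prompt: str) -> str:
    # Placeholder for an LLM call with a human-written prompt.
    return "take the broom. go to the kitchen. sweep the floor."

def story_to_actions(story: str) -> list[tuple[str, str]]:
    """Map simplified story sentences to (verb, object) agent actions."""
    actions = []
    for sent in story.split("."):
        words = sent.split()
        if len(words) >= 2:
            actions.append((words[0], words[-1]))
    return actions

class HouseGame:
    """Toy text-adventure 'house' used to check the agent finishes the play."""
    def __init__(self):
        self.log = []

    def step(self, verb: str, obj: str) -> None:
        self.log.append(f"You {verb} the {obj}.")

game = HouseGame()
for verb, obj in story_to_actions(generate_story("pretend to clean the house")):
    game.step(verb, obj)
print("\n".join(game.log))
```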


Dialogue Shaping: Empowering Agents through NPC Interaction

Jul 28, 2023
Wei Zhou, Xiangyu Peng, Mark Riedl

One major challenge in reinforcement learning (RL) is the large number of steps an RL agent needs to converge during training and learn the optimal policy, especially in text-based game environments where the action space is extensive. However, non-player characters (NPCs) sometimes hold key information about the game, which can potentially help to train RL agents faster. Thus, this paper explores how to interact and converse with NPC agents to obtain that key information using large language models (LLMs), and how to incorporate this information to speed up the RL agent's training using knowledge graphs (KGs) and Story Shaping.
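
One way to picture that loop, with stubbed-out LLM calls: the NPC conversation yields key facts as knowledge-graph triples, and those triples shape the RL agent's reward. The triple format, the example fact, and the 0.5 bonus are arbitrary choices for the sketch, not the paper's values.

```python
# Conceptual sketch: LLM-driven NPC chat -> KG triples -> shaped reward.
def ask_npc(question: str) -> str:
    # Placeholder for an LLM-backed NPC conversation.
    return "The golden key opens the iron gate."

def extract_triples(utterance: str) -> set[tuple[str, str, str]]:
    # Stand-in for LLM-based triple extraction from the reply.
    return {("golden key", "opens", "iron gate")}

knowledge_graph: set[tuple[str, str, str]] = set()
knowledge_graph |= extract_triples(ask_npc("How do I get past the gate?"))

def shaped_reward(env_reward: float, action: str) -> float:
    """Add an intrinsic bonus when the action uses a known key fact."""
    bonus = 0.5 if any(h in action and t in action
                       for h, _, t in knowledge_graph) else 0.0
    return env_reward + bonus

print(shaped_reward(0.0, "unlock iron gate with golden key"))  # 0.5
```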


Improving Language Models with Advantage-based Offline Policy Gradients

May 24, 2023
Ashutosh Baheti, Ximing Lu, Faeze Brahman, Ronan Le Bras, Maarten Sap, Mark Riedl

Improving language model generations according to some user-defined quality or style constraints is challenging. Typical approaches include learning on additional human-written data, filtering "low-quality" data using heuristics, and/or using reinforcement learning with human feedback (RLHF). However, filtering can remove valuable training signals, whereas data collection and RLHF constantly require additional human-written or LM exploration data, which can be costly to obtain. A natural question to ask is: can we leverage RL to optimize LM utility on existing crowd-sourced and internet data? To this end, we present Left-over Lunch RL (LoL-RL), a simple training algorithm that uses offline policy gradients for learning language generation tasks as a 1-step RL game. LoL-RL can finetune LMs to optimize arbitrary classifier-based or human-defined utility functions on any sequence-to-sequence data. Experiments with five different language generation tasks, using models of varying sizes and multiple rewards, show that models trained with LoL-RL can consistently outperform the best supervised learning models. We also release our experimental code: https://github.com/abaheti95/LoL-RL
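
In spirit, the 1-step offline update might look like the sketch below: each cached sequence is treated as a single-step episode whose return is a utility score, and the LM is pushed toward high-advantage sequences. The advantage-weighted log-likelihood form follows the abstract and title, but the exact objective and baseline are assumptions here; see the released code for the real algorithm.

```python
# Minimal advantage-based offline policy-gradient sketch; assumptions only.
import torch

def lol_rl_style_loss(logprobs: torch.Tensor,
                      rewards: torch.Tensor,
                      baseline: torch.Tensor) -> torch.Tensor:
    """logprobs: sum of token log-probs per cached sequence under the LM.
    rewards: utility of each sequence (classifier- or human-defined).
    baseline: value estimate turning rewards into advantages."""
    advantage = rewards - baseline
    # Offline policy gradient: no fresh sampling, the dataset stays fixed.
    return -(advantage.detach() * logprobs).mean()

logprobs = torch.tensor([-12.0, -9.5, -15.2], requires_grad=True)
rewards = torch.tensor([0.9, 0.4, 0.1])
loss = lol_rl_style_loss(logprobs, rewards, baseline=rewards.mean())
loss.backward()  # gradients favor sequences with above-baseline utility
print(loss.item())
```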


Few-Shot Dialogue Summarization via Skeleton-Assisted Prompt Transfer

May 20, 2023
Kaige Xie, Tong Yu, Haoliang Wang, Junda Wu, Handong Zhao, Ruiyi Zhang, Kanak Mahadik, Ani Nenkova, Mark Riedl

In real-world scenarios, labeled samples for dialogue summarization are usually limited (i.e., few-shot) due to the high cost of annotating high-quality dialogue summaries. To learn efficiently from few-shot samples, previous works have utilized massive annotated data from other downstream tasks and then performed prompt transfer in prompt tuning to enable cross-task knowledge transfer. However, existing general-purpose prompt transfer techniques lack consideration for dialogue-specific information. In this paper, we focus on improving the prompt transfer from dialogue state tracking to dialogue summarization and propose Skeleton-Assisted Prompt Transfer (SAPT), which leverages skeleton generation as extra supervision that functions as a medium connecting the distinct source and target tasks, resulting in the model better consuming dialogue state information. To automatically extract dialogue skeletons as supervised training data for skeleton generation, we design a novel approach with perturbation-based probes requiring neither annotation effort nor domain knowledge. Training the model on such skeletons can also help preserve model capability during prompt transfer. Our method significantly outperforms existing baselines. In-depth analyses demonstrate the effectiveness of our method in facilitating cross-task knowledge transfer in few-shot dialogue summarization.
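
A guess at what a perturbation-based probe could look like in miniature: `summary_score` stands in for querying a summarization model, and the drop-one-token perturbation is an assumption, since the abstract does not specify the probe. Tokens whose removal most changes the score are kept as the skeleton.

```python
# Illustrative perturbation-based probe for skeleton extraction;
# the scoring function and perturbation scheme are assumed, not the paper's.
def summary_score(dialogue_tokens: list[str]) -> float:
    # Placeholder: a real probe would query a summarization model.
    keywords = {"refund", "order", "late"}
    return sum(tok in keywords for tok in dialogue_tokens) / max(len(dialogue_tokens), 1)

def extract_skeleton(tokens: list[str], top_k: int = 3) -> list[str]:
    """Rank tokens by how much masking them perturbs the model's score."""
    base = summary_score(tokens)
    impact = {}
    for i, tok in enumerate(tokens):
        perturbed = tokens[:i] + tokens[i + 1:]  # drop one token
        impact[tok] = abs(base - summary_score(perturbed))
    ranked = sorted(impact, key=impact.get, reverse=True)
    return ranked[:top_k]

dialogue = "my order is late and i want a refund please".split()
print(extract_skeleton(dialogue))  # keyword-like tokens rank highest
```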


Why Don't You Do Something About It? Outlining Connections between AI Explanations and User Actions

May 10, 2023
Gennie Mansi, Mark Riedl

A core assumption of explainable AI systems is that explanations change what users know, thereby enabling them to act within their complex socio-technical environments. Despite the centrality of action, explanations are often organized and evaluated based on technical aspects. Prior work varies widely in the connections it traces between information provided in explanations and resulting user actions. An important first step in centering action in evaluations is understanding what the XAI community collectively recognizes as the range of information that explanations can present and what actions are associated with them. In this paper, we present our framework, which maps prior work on information presented in explanations and user action, and we discuss the gaps we uncovered about the information presented to users.

* 9 pages, 1 figure 

Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems

May 03, 2023
Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, Mark Riedl

Generative artificial intelligence systems have been developed for image, code, story, and game generation with the goal of facilitating human creativity. Recent work on neural generative systems has emphasized one particular means of interacting with AI systems: the user provides a specification, usually in the form of prompts, and the AI system generates the content. However, there are other configurations of human and AI coordination, such as co-creativity (CC), in which both human and AI systems can contribute to content creation, and mixed-initiative (MI), in which both human and AI systems can initiate content changes. In this paper, we define a hypothetical human-AI configuration design space consisting of different means for humans and AI systems to communicate creative intent to each other. We conduct a human participant study with 185 participants to understand how users want to interact with differently configured MI-CC systems. We find that MI-CC systems with more extensive coverage of the design space are rated higher or on par on a variety of creative and goal-completion metrics, demonstrating that wider coverage of the design space can improve user experience and achievement when using the system; that preference varies greatly between expertise groups, suggesting the development of adaptive, personalized MI-CC systems; and that participants identified new design space dimensions, including scrutability -- the ability to poke and prod at models -- and explainability.

* Accepted by ICCC'23 

Story Shaping: Teaching Agents Human-like Behavior with Stories

Jan 24, 2023
Xiangyu Peng, Christopher Cui, Wei Zhou, Renee Jia, Mark Riedl

Reward design for reinforcement learning agents can be difficult in situations where one not only wants the agent to achieve some effect in the world but also cares about how that effect is achieved. For example, we might wish for an agent to adhere to a tacit understanding of commonsense, align itself with a preference for how to behave for purposes of safety, or take on a particular role in an interactive game. Storytelling is a mode for communicating tacit procedural knowledge. We introduce a technique, Story Shaping, in which a reinforcement learning agent infers tacit knowledge from an exemplar story of how to accomplish a task and intrinsically rewards itself for performing actions that make its current environment adhere to that of the inferred story world. Specifically, Story Shaping infers a knowledge graph representation of the world state from observations, and also infers a knowledge graph from the exemplar story. An intrinsic reward is generated based on the similarity between the agent's inferred world state graph and the inferred story world graph. We conducted experiments in text-based games that require commonsense reasoning and in shaping the behaviors of agents as virtual game characters.
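
The intrinsic reward is straightforward to sketch: both the agent's inferred world state and the exemplar story are sets of knowledge-graph triples, and reward follows their overlap. Jaccard similarity and the example triples are assumed stand-ins for whatever graph similarity and representation the paper actually uses.

```python
# Minimal sketch of the graph-similarity intrinsic reward; the similarity
# measure (Jaccard) and example triples are illustrative assumptions.
def kg_similarity(state_kg: set[tuple[str, str, str]],
                  story_kg: set[tuple[str, str, str]]) -> float:
    if not state_kg and not story_kg:
        return 0.0
    return len(state_kg & story_kg) / len(state_kg | story_kg)

story_kg = {("knight", "has", "sword"), ("dragon", "is", "slain")}
state_kg = {("knight", "has", "sword")}

# Intrinsic reward rises as the agent's inferred world graph converges
# on the story world graph.
intrinsic_reward = kg_similarity(state_kg, story_kg)
print(intrinsic_reward)  # 0.5
```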
