Dialogue-based Role Playing Games (RPGs) require powerful storytelling. Their narratives may take years to write and typically involve a large creative team. In this work, we demonstrate the potential of large generative text models to assist this process. \textbf{GRIM}, a prototype \textbf{GR}aph-based \textbf{I}nteractive narrative visualization system for ga\textbf{M}es, generates a rich narrative graph with branching storylines that match a high-level narrative description and constraints provided by the designer. Game designers can interactively edit the graph: \textbf{GRIM} automatically generates new sub-graphs that fit the edits within the original narrative and constraints. We illustrate the use of \textbf{GRIM} in conjunction with GPT-4, generating branching narratives for four well-known stories under different contextual constraints.
Here, we report a case study implementation of reinforcement learning (RL) to automate operations in the scanning transmission electron microscopy (STEM) workflow. To do so, we design a virtual, prototypical RL environment in which to develop and test a network that autonomously aligns the electron beam without prior knowledge. Using this simulator, we evaluate the impact of environment design and algorithm hyperparameters on alignment accuracy and learning convergence, showing robust convergence across a wide hyperparameter space. Additionally, we deploy a successful model on the microscope to validate the approach and demonstrate the value of designing appropriate virtual environments. Consistent with the simulated results, the on-microscope RL model converges to the goal alignment after minimal training. Overall, these results highlight that by taking advantage of RL, microscope operations can be automated without the need for extensive algorithm design, taking another step towards augmenting electron microscopy with machine learning methods.