Abstract: Embodied visual planning aims to enable manipulation tasks by imagining how a scene evolves toward a desired goal and using the imagined trajectories to guide actions. Video diffusion models, through their image-to-video generation capability, provide a promising foundation for such visual imagination. However, existing approaches are largely forward-predictive: they generate trajectories conditioned only on the initial observation, without explicit goal modeling, which often leads to spatial drift and goal misalignment. To address these challenges, we propose Envision, a diffusion-based framework for visual planning in embodied agents. By explicitly constraining generation with a goal image, our method enforces physical plausibility and goal consistency throughout the generated trajectory. Envision operates in two stages. First, a Goal Imagery Model identifies task-relevant regions, applies region-aware cross-attention between the scene and the instruction, and synthesizes a coherent goal image that captures the desired outcome. Then, an Env-Goal Video Model, built on a first-and-last-frame-conditioned video diffusion model (FL2V), interpolates between the initial observation and the goal image, producing smooth, physically plausible video trajectories that connect the start and goal states. Experiments on object manipulation and image editing benchmarks show that Envision achieves superior goal alignment, spatial consistency, and object preservation compared to baselines. The resulting visual plans can directly support downstream robotic planning and control, providing reliable guidance for embodied agents.
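To make the first stage concrete, the following is a minimal PyTorch sketch of a region-aware cross-attention step of the kind the Goal Imagery Model is described as performing. The class name, tensor shapes, and the gated-residual masking scheme are illustrative assumptions for exposition, not the paper's actual implementation.

import torch
import torch.nn as nn

class RegionAwareCrossAttention(nn.Module):
    """Hypothetical sketch: scene patches attend to instruction tokens,
    with updates gated so only task-relevant regions are modified."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, scene_tokens, text_tokens, region_mask):
        # scene_tokens: (B, N, D) patch embeddings of the observation
        # text_tokens:  (B, T, D) instruction embeddings
        # region_mask:  (B, N) bool, True where a patch is task-relevant
        attended, _ = self.attn(self.norm(scene_tokens), text_tokens, text_tokens)
        gate = region_mask.unsqueeze(-1).float()  # (B, N, 1)
        # Gated residual: task-relevant patches are edited toward the
        # instruction; the rest of the scene passes through unchanged,
        # which is one way to encourage object and background preservation.
        return scene_tokens + gate * attended

if __name__ == "__main__":
    B, N, T, D = 2, 196, 12, 256
    layer = RegionAwareCrossAttention(D)
    scene = torch.randn(B, N, D)
    text = torch.randn(B, T, D)
    mask = torch.zeros(B, N, dtype=torch.bool)
    mask[:, :32] = True  # pretend the first 32 patches cover the target object
    print(layer(scene, text, mask).shape)  # torch.Size([2, 196, 256])

In the second stage, the FL2V backbone would then receive the current observation as the first frame and the synthesized goal image as the last frame, so the diffusion process only has to fill in the intermediate motion between the two anchored states.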




Abstract: Perceptions of gender are a significant aspect of human-human interaction, and gender has wide-reaching social implications for robots deployed in contexts where they are expected to interact with humans. This work explored two flexible modalities for communicating gender in robots, voice and appearance, and studied their individual and combined influences on a robot's perceived gender. We evaluated the perception of a robot's gender through three video-based studies. First, we conducted a study (n=65) on the gender perception of robot voices, varying speaker identity and pitch. Second, we conducted a study (n=93) on the gender perception of robot clothing designed for two different tasks. Finally, building on the results of the first two studies, we completed a large integrative video-based study (n=273) involving two human-robot interaction tasks. We found that voice and clothing can each be used to reliably establish a robot's perceived gender, and that combining the two modalities can have different effects on that perception. Taken together, these results inform the design of robot voices and clothing as individual and interacting components of perceived robot gender.