Abstract: 3D generative AI enables rapid and accessible creation of 3D models from text or image inputs. However, translating these outputs into physical objects remains a challenge due to the constraints of the physical world. Recent studies have focused on improving the capability of 3D generative AI to produce fabricable outputs, with 3D printing as the main fabrication method. This workshop paper calls for a broader perspective, considering how fabrication methods align with the capabilities of 3D generative AI. As a case study, we present a novel system that uses discrete robotic assembly and 3D generative AI to produce physical objects. Through this work, we identify five key aspects to consider in a physical making process based on the capabilities of 3D generative AI. 1) Fabrication Constraints: Current text-to-3D models can generate a wide range of 3D designs, requiring fabrication methods that can adapt to the variability of generative AI outputs. 2) Time: While generative AI can produce a 3D model in seconds, fabricating the corresponding physical object can take hours or even days. Faster production could enable a tighter iterative design loop between humans and AI in the making process. 3) Sustainability: Although text-to-3D models can generate thousands of models in the digital world, extending this capability to the physical world would be resource-intensive, unsustainable, and irresponsible. 4) Functionality: Unlike purely digital outputs from 3D generative AI models, physical objects must withstand use, and the fabrication method plays a crucial role in their usability. 5) Accessibility: While generative AI simplifies 3D model creation, the need for fabrication equipment can limit participation, making AI-assisted creation less inclusive. These five key aspects provide a framework for assessing how well a physical making process aligns with the capabilities of 3D generative AI and with broader real-world values.
Abstract: Assembling lattices from discrete building blocks enables the composition of large, heterogeneous, and easily reconfigurable objects with desirable mass-to-stiffness ratios. This type of building system may also be referred to as a digital material, as it is composed of discrete, error-correcting components. Researchers have demonstrated various active structures and even robotic systems that exploit the reconfigurable, mass-efficient properties of discrete lattice structures. However, the existing literature has predominantly relied on open-loop control strategies, limiting the performance of the presented systems. In this paper, we present a novel approach to feedback control of digital lattice structures that leverages real-time measurements of the system dynamics. We introduce an actuated voxel, a novel means of actuating lattice structures. Our control method is based on the Extended Dynamic Mode Decomposition (EDMD) algorithm in conjunction with the Linear Quadratic Regulator (LQR) and Koopman Model Predictive Control. The key advantage of our approach is that it is purely data-driven, requiring no prior knowledge of the system's structure. We illustrate the developed method in real experiments with a custom-built flexible lattice beam, showing its ability to accomplish various tasks even with minimal sensing and actuation resources. In particular, we address two problems: stabilization together with disturbance attenuation, and reference tracking.
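The abstract names its algorithmic ingredients (EDMD, LQR, Koopman MPC) but gives no implementation details. As a rough illustration only, the following minimal Python sketch shows how an EDMD model with control inputs can be fit from snapshot data by least squares and paired with an LQR gain computed on the lifted state. The dictionary of observables, the toy dynamics, and all function names are our own assumptions, not the authors' code.

```python
import numpy as np

def lift(x):
    """Dictionary of observables: the state, its squares, and a constant.
    (Assumed dictionary; the paper does not specify one.)"""
    x = np.atleast_2d(x)
    return np.hstack([x, x**2, np.ones((x.shape[0], 1))])

def edmd_with_inputs(X, Xn, U):
    """Fit a lifted linear model  z_{k+1} ~ A z_k + B u_k  by least squares.

    X, Xn : (T, n) snapshot pairs (states at steps k and k+1)
    U     : (T, m) inputs applied at step k
    """
    Z, Zn = lift(X), lift(Xn)
    W = np.hstack([Z, U])                         # regressors [z_k, u_k]
    AB = np.linalg.lstsq(W, Zn, rcond=None)[0].T  # solve Zn = W [A B]^T
    nz = Z.shape[1]
    return AB[:, :nz], AB[:, nz:]                 # A (nz x nz), B (nz x m)

def lqr_gain(A, B, Q, R, horizon=200):
    """Finite-horizon LQR gain via a backward Riccati recursion."""
    P = Q.copy()
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy usage: snapshot pairs from a lightly damped 2-state linear system.
rng = np.random.default_rng(0)
X, U = rng.normal(size=(500, 2)), rng.normal(size=(500, 1))
Xn = X @ np.array([[0.99, 0.1], [-0.1, 0.98]]).T + U @ np.array([[0.0, 0.1]])

A, B = edmd_with_inputs(X, Xn, U)
K = lqr_gain(A, B, np.eye(A.shape[0]), np.eye(B.shape[1]))
u = -K @ lift(X[:1]).ravel()  # state feedback acting on the lifted state
```

The same fitted (A, B) pair could equally serve as the prediction model inside a Koopman MPC loop; only the gain computation would be replaced by a constrained optimization at each step.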
Abstract: We present a system that transforms speech into physical objects by combining 3D generative artificial intelligence (AI) with robotic assembly. The system leverages natural language input to make design and manufacturing more accessible, enabling individuals without expertise in 3D modeling or robotic programming to create physical objects. We propose discrete robotic assembly of lattice-based voxel components to address the challenges of using generative AI outputs in physical production, such as design variability, fabrication speed, structural integrity, and material waste. The system interprets speech to generate a 3D object, discretizes it into voxel components, computes an optimized assembly sequence, and generates a robotic toolpath. We demonstrate the results through the assembly of various objects, ranging from chairs to shelves, each prompted via speech and realized within 5 minutes using a 6-axis robotic arm.
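The abstract describes a pipeline (speech, 3D model, voxel discretization, assembly sequence, toolpath) without implementation details. The Python sketch below illustrates only the geometric middle steps, under our own simplifying assumptions: a hand-written implicit occupancy function stands in for the generated 3D model, and a naive bottom-up ordering stands in for the paper's optimized assembly sequence. All function names are hypothetical.

```python
import numpy as np

def discretize(inside, bounds, pitch):
    """Sample an implicit shape on a regular grid of voxel centers.

    inside : f(x, y, z) -> bool, occupancy test for the target shape
    bounds : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in meters
    pitch  : voxel edge length in meters
    """
    axes = [np.arange(lo + pitch / 2, hi, pitch) for lo, hi in bounds]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    mask = np.vectorize(inside)(xs, ys, zs)
    return np.argwhere(mask)  # integer voxel indices (i, j, k)

def assembly_sequence(voxels):
    """Naive bottom-up, row-major ordering: each voxel is placed after the
    one directly beneath it, so gravity-supported assembly works for shapes
    without internal overhangs. (Stand-in for the optimized sequence.)"""
    return sorted(map(tuple, voxels), key=lambda v: (v[2], v[1], v[0]))

def toolpath(sequence, pitch, origin=(0.0, 0.0, 0.0)):
    """Convert voxel indices to Cartesian place targets for the robot,
    assuming the grid origin coincides with the build-plate origin."""
    o = np.asarray(origin)
    return [o + (np.asarray(v) + 0.5) * pitch for v in sequence]

# Toy usage: a stool (a seat slab on four corner legs), 40 mm voxel pitch.
def stool(x, y, z):
    in_plan = 0 <= x < 0.4 and 0 <= y < 0.4
    seat = 0.32 <= z < 0.40
    leg = (x < 0.08 or x >= 0.32) and (y < 0.08 or y >= 0.32) and z < 0.32
    return in_plan and (seat or leg)

voxels = discretize(stool, ((0, 0.4), (0, 0.4), (0, 0.4)), pitch=0.04)
targets = toolpath(assembly_sequence(voxels), pitch=0.04)
print(f"{len(voxels)} voxels; first place target at {targets[0]}")
```

In the actual system, each target would additionally carry an approach direction and gripper orientation; those details depend on the end effector and are omitted here.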