Articulate That Object Part (ATOP): 3D Part Articulation from Text and via Motion Personalization

Simon Fraser University, Vancouver, Canada


Given a textured mesh in rest state (left), the part to articulate (highlighted), and a text prompt describing the part motion as input, we generate multi-view motion frames (middle) of plausible part articulations specific to the input object using a personalized motion diffusion model. We then transfer the motion from the generated frames to the 3D mesh (right) to obtain articulation parameters. Note that the input meshes are the top retrieval results from Objaverse for the "Trash can", "Cabinet", and "Lamp" categories, i.e., they are not cherry-picked, showcasing the generalization ability of our approach.

Abstract


We present ATOP (Articulate That Object Part), a novel few-shot method based on motion personalization to articulate a static 3D object with respect to a part and its motion as prescribed in a text prompt. Given the scarcity of available datasets with motion attribute annotations, existing methods struggle to generalize well on this task. In our work, the text input allows us to tap into the power of modern-day diffusion models to generate plausible motion samples for the right object category and part. In turn, the input 3D object provides "image prompting" to personalize the generated motion to that very object. Our method starts with few-shot finetuning that injects articulation awareness into current diffusion models and learns a unique motion identifier associated with the target object part. The finetuning is applied to a pre-trained diffusion model for controllable multi-view motion generation, using a small collection of reference motion frames that demonstrate the appropriate part motion. The resulting motion model can then be employed to realize plausible motion of the input 3D object from multiple views. Finally, we transfer the personalized motion to the 3D space of the object via differentiable rendering, optimizing part articulation parameters with a score distillation sampling loss.
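
To make the final step concrete, below is a minimal sketch of how part articulation parameters (a hinge axis, a pivot origin, and per-frame angles) could be optimized with gradient descent, in the spirit of the differentiable rendering stage described above. The helper rotate_part, the tensor shapes, and the placeholder loss are illustrative assumptions, not the exact implementation; in the actual pipeline each rotated frame is rendered with a differentiable renderer and scored against the personalized motion diffusion model through the score distillation sampling loss.

```python
import torch
import torch.nn.functional as F

def rotate_part(verts, axis, origin, angle):
    """Rotate part vertices (N, 3) about a unit `axis` through `origin` by
    `angle` using Rodrigues' formula; the operation stays differentiable."""
    k = F.normalize(axis, dim=0)
    v = verts - origin
    cos, sin = torch.cos(angle), torch.sin(angle)
    v_rot = v * cos + torch.cross(k.expand_as(v), v, dim=-1) * sin \
            + k * (v @ k).unsqueeze(-1) * (1.0 - cos)
    return v_rot + origin

# Articulation parameters to optimize: hinge axis, pivot origin, per-frame angles.
axis = torch.randn(3, requires_grad=True)
origin = torch.zeros(3, requires_grad=True)
angles = torch.linspace(0.0, 1.0, steps=8).requires_grad_()

part_verts_rest = torch.rand(1000, 3)  # stand-in for the segmented part's vertices

optimizer = torch.optim.Adam([axis, origin, angles], lr=1e-2)
for step in range(500):
    frames = [rotate_part(part_verts_rest, axis, origin, a) for a in angles]
    # In the actual method, each frame is rendered with a differentiable renderer
    # and scored by the personalized motion model via SDS; a dummy loss keeps
    # this sketch self-contained and runnable.
    loss = sum(f.pow(2).mean() for f in frames)  # placeholder for the SDS objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```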


Inference Pipeline: Given a textured mesh as input, we first segment the mesh using open-vocabulary mesh segmentation methods such as PartSLIP or PartSTAD, then render multi-view images and masks of the mesh and pass them through a controllable multi-view motion model, which hallucinates part motion for the given camera poses and is personalized to the target input shape. Different background colors indicate different views. Once these motion samples are obtained, we transfer the motion to 3D space by directly optimizing for the motion axis and origin.
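
As one concrete piece of this pipeline, the snippet below sketches how a ring of camera poses around the object might be set up for the multi-view rendering step. The number of views, radius, and elevation are illustrative assumptions rather than the exact settings used; the resulting view matrices would then be handed to a (differentiable) rasterizer to produce the images and masks.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 world-to-camera (view) matrix looking from `eye` toward `target`."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def ring_cameras(n_views=4, radius=2.5, elevation_deg=20.0):
    """Place cameras evenly on a ring around the object at a fixed elevation;
    each pose corresponds to one rendered view (background color in the figure)."""
    elev = np.deg2rad(elevation_deg)
    poses = []
    for azim in np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False):
        eye = radius * np.array([np.cos(azim) * np.cos(elev),
                                 np.sin(elev),
                                 np.sin(azim) * np.cos(elev)])
        poses.append(look_at(eye))
    return poses

camera_poses = ring_cameras()  # e.g., pass to the renderer for images and masks
```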