Effectively manipulating articulated objects in household scenarios is a crucial step toward achieving general embodied artificial intelligence. Mainstream research in 3D vision has primarily approached manipulation through depth perception and pose detection. However, in real-world environments, these methods often struggle with imperfect depth perception, such as on transparent lids and reflective handles, and they generally lack the diversity of part-based interactions required for flexible and adaptable manipulation. To address these challenges, we introduce a large-scale part-centric dataset for articulated object manipulation that features both photo-realistic material randomization and detailed annotations of part-oriented, scene-level actionable interaction poses. We evaluate the effectiveness of our dataset by integrating it with several state-of-the-art methods for depth estimation and interaction pose prediction. Additionally, we propose a novel modular framework that delivers superior and robust performance for generalizable articulated object manipulation. Extensive experiments demonstrate that our dataset significantly improves depth perception and actionable interaction pose prediction in both simulated and real-world scenarios.
We introduce a large-scale part-centric dataset for material-agnostic articulated object manipulation. It covers 19 common household articulated categories, totaling 918 object instances, 240K photo-realistic renderings, and 8 billion scene-level actionable interaction poses. GAPartManip enables robust zero-shot sim-to-real transfer for articulated object manipulation tasks.
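To make the dataset description concrete, below is a minimal sketch of how one scene-level sample might be organized and loaded. The directory layout, file names, and field names (e.g. meta.json, actionable_poses, part_category) are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical loader for one scene-level sample from a part-centric
# manipulation dataset such as GAPartManip (layout and keys are assumed).
from dataclasses import dataclass
from pathlib import Path
import json

import numpy as np


@dataclass
class ManipSample:
    rgb_path: Path            # photo-realistic, material-randomized rendering
    depth_path: Path          # rendered depth map for the scene
    part_mask_path: Path      # per-pixel segmentation of actionable parts
    grasp_poses: np.ndarray   # (N, 7) actionable poses: xyz + quaternion
    part_category: str        # e.g. "handle", "lid", "knob"


def load_sample(sample_dir: Path) -> ManipSample:
    """Load one sample from a hypothetical on-disk layout."""
    meta = json.loads((sample_dir / "meta.json").read_text())
    return ManipSample(
        rgb_path=sample_dir / "rgb.png",
        depth_path=sample_dir / "depth.npy",
        part_mask_path=sample_dir / "part_mask.png",
        grasp_poses=np.asarray(meta["actionable_poses"], dtype=np.float32),
        part_category=meta["part_category"],
    )
```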
Framework overview. Given IR images and a raw depth map, the depth reconstruction module first recovers the depth. The pose prediction module then generates a 7-DoF actionable pose and a 3-DoF motion direction from the reconstructed depth. Finally, the local planner module executes the action.
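The sketch below illustrates how the three modules could be wired together. The interfaces (DepthReconstructor, PosePredictor, LocalPlanner) and their method names are placeholders chosen for clarity, not the released implementation.

```python
# Minimal sketch of the three-stage modular pipeline:
# depth recovery -> actionable pose prediction -> action execution.
from __future__ import annotations

from typing import Protocol

import numpy as np


class DepthReconstructor(Protocol):
    def refine(self, ir_left: np.ndarray, ir_right: np.ndarray,
               raw_depth: np.ndarray) -> np.ndarray: ...


class PosePredictor(Protocol):
    def predict(self, depth: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Return a 7-DoF actionable pose (xyz + quaternion) and a
        3-DoF motion direction for the articulated part."""
        ...


class LocalPlanner(Protocol):
    def execute(self, pose: np.ndarray, motion_dir: np.ndarray) -> bool: ...


def run_pipeline(depth_net: DepthReconstructor, pose_net: PosePredictor,
                 planner: LocalPlanner, ir_left: np.ndarray,
                 ir_right: np.ndarray, raw_depth: np.ndarray) -> bool:
    """Run one manipulation attempt end to end."""
    refined_depth = depth_net.refine(ir_left, ir_right, raw_depth)
    pose, motion_dir = pose_net.predict(refined_depth)
    return planner.execute(pose, motion_dir)
```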
Qualitative Results for Depth Estimation in the Real World. Compared to RAFT-Stereo, our refined depth is more robust on transparent and translucent lids and on small handles. Zoom in to better observe small parts such as handles and knobs.
Qualitative comparison of actionable pose prediction on synthetic data.
Qualitative Results for Real-World Manipulation. The top-15 scored actionable poses are displayed, with the red gripper indicating the top-1 pose.
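For reference, a small sketch of how such a ranking could be produced: given predicted poses with confidence scores, keep the highest-scored candidates for display and take the single best one for execution. The (N, 7) pose layout and the score array are assumptions.

```python
# Hypothetical top-k selection over scored actionable poses.
import numpy as np


def select_top_poses(poses: np.ndarray, scores: np.ndarray, k: int = 15):
    """Return the k highest-scored poses (k, 7) and the top-1 pose (7,)."""
    order = np.argsort(scores)[::-1]   # indices sorted by descending score
    top_k = poses[order[:k]]           # candidates shown in the figure
    best = poses[order[0]]             # top-1 pose (red gripper)
    return top_k, best
```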
@article{cui2024gapartmanip,
  title={GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation},
  author={Cui, Wenbo and Zhao, Chengyang and Wei, Songlin and Zhang, Jiazhao and Geng, Haoran and Chen, Yaran and Wang, He},
  journal={arXiv preprint arXiv:2411.18276},
  year={2024}
}