We study the task of language-conditioned pick and place in clutter, where a robot must grasp a target object in open clutter and move it to a specified place. Some approaches learn end-to-end policies on top of features from vision foundation models, which requires large datasets. Others compose foundation models in a zero-shot setting, but suffer from cascading errors. Moreover, both lines of work primarily leverage vision and language foundation models while paying less attention to action priors. In this paper, we aim to develop an effective policy by integrating foundation priors from vision, language, and action. We propose A$^2$, an action prior alignment method that aligns unconditioned action priors with 3D vision-language priors by learning a single attention layer. This alignment formulation enables our policy to train with less data while preserving zero-shot generalization. We show that a shared policy for both pick and place actions improves performance on each task, and we introduce a policy adaptation scheme to accommodate the multi-modal nature of actions. Extensive experiments in simulation and the real world show that our policy achieves higher task success rates with fewer steps for both pick and place tasks in clutter, generalizing effectively to unseen objects and language instructions.
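As a rough illustration of the alignment idea, the sketch below shows how a single cross-attention layer can score action candidates against 3D vision-language features. The module names, dimensions, and scoring head are our own assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ActionPriorAlignment(nn.Module):
    """Minimal sketch: one cross-attention layer that scores action
    candidates against 3D vision-language features. Dimensions and the
    scoring head are illustrative assumptions, not the paper's spec."""

    def __init__(self, act_dim=64, feat_dim=512, embed_dim=256, num_heads=4):
        super().__init__()
        self.q_proj = nn.Linear(act_dim, embed_dim)    # embed action candidates
        self.kv_proj = nn.Linear(feat_dim, embed_dim)  # embed 3D vision-language features
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.score = nn.Linear(embed_dim, 1)           # per-candidate alignment score

    def forward(self, action_candidates, vl_features):
        # action_candidates: (B, N, act_dim) unconditioned action priors
        # vl_features:       (B, M, feat_dim) per-point vision-language features
        q = self.q_proj(action_candidates)
        kv = self.kv_proj(vl_features)
        attended, _ = self.attn(q, kv, kv)             # candidates attend to the scene
        return self.score(attended).squeeze(-1)        # (B, N) scores for ranking
```

Selecting the highest-scoring candidate would then yield the action to execute; under this reading, training only the one attention layer is what keeps the data requirement small.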
Overview. Given a language instruction and RGB-D image(s), the vision-language model MaskCLIP extracts dense patch-level features, which are projected into 3D representations: a feature cloud, a similarity cloud, and a point cloud. In parallel, the action foundation model generates action candidates. Based on these foundation priors, our policy performs alignment for action planning.
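To make the projection step concrete, here is a minimal sketch of lifting dense 2D vision-language features and a depth image into the three 3D representations named above. The function name, shapes, and backprojection logic are hypothetical, and we assume the patch features have already been upsampled to image resolution:

```python
import torch
import torch.nn.functional as F

def build_3d_priors(patch_feats, text_feat, depth, intrinsics):
    """Hypothetical sketch of lifting 2D vision-language features into 3D.

    patch_feats: (H, W, C) dense per-pixel features (e.g., from MaskCLIP).
    text_feat:   (C,) CLIP embedding of the language instruction.
    depth:       (H, W) depth image in meters.
    intrinsics:  (3, 3) camera intrinsic matrix.
    Returns the point cloud, feature cloud, and similarity cloud,
    each flattened over pixels with valid depth.
    """
    H, W, C = patch_feats.shape
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]

    # Backproject each pixel into camera coordinates.
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = torch.stack([x, y, z], dim=-1).reshape(-1, 3)   # point cloud

    feats = patch_feats.reshape(-1, C)                       # feature cloud
    # Similarity cloud: per-point cosine similarity to the instruction embedding.
    sims = F.cosine_similarity(feats, text_feat[None, :], dim=-1)

    valid = depth.reshape(-1) > 0                            # drop invalid depth
    return points[valid], feats[valid], sims[valid]
```

On this reading, the similarity cloud localizes the language-referred target in 3D, while the feature and point clouds preserve the full scene context for the alignment policy.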
@article{xu2025a2,
  title={Efficient Alignment of Unconditioned Action Prior for Language-conditioned Pick and Place in Clutter},
  author={Xu, Kechun and Xia, Xunlong and Wang, Kaixuan and Yang, Yifei and Mao, Yunxuan and Deng, Bing and Xiong, Rong and Wang, Yue},
  journal={arXiv preprint arXiv:2503.09423},
  year={2025}
}