Ask-to-Clarify: Resolving Instruction Ambiguity through Multi-turn Dialogue
Abstract
The Ask-to-Clarify framework uses a VLM for collaboration and a diffusion model for action generation, enabling embodied agents to handle ambiguous instructions through multi-turn dialogue and outperform existing VLAs in real-world tasks.
The ultimate goal of embodied agents is to create collaborators that can interact with humans, not mere executors that passively follow instructions. This requires agents to communicate, coordinate, and adapt their actions based on human feedback. Recently, advances in vision-language-action models (VLAs) have offered a path toward this goal. However, most current VLA-based embodied agents operate in a one-way mode: they receive an instruction and execute it without feedback. This approach fails in real-world scenarios where instructions are often ambiguous. In this paper, we address this problem with the Ask-to-Clarify framework. Our framework first resolves ambiguous instructions by asking questions in a multi-turn dialogue, then generates low-level actions end-to-end. Specifically, the Ask-to-Clarify framework consists of two components: a VLM for collaboration and a diffusion model for action generation. We also introduce a connection module that generates conditions for the diffusion model from the output of the VLM. This module adjusts the observations according to the instructions to create reliable conditions. We train our framework with a two-stage knowledge-insulation strategy. First, we fine-tune the collaboration component on ambiguity-resolving dialogue data so it can handle ambiguity. Then, we integrate the action component while freezing the collaboration component, which preserves the interaction abilities while the diffusion model is fine-tuned to generate actions. This training strategy ensures our framework first asks questions and then generates actions. During inference, a signal detector functions as a router that lets our framework switch between asking questions and taking actions. We evaluate the Ask-to-Clarify framework on 8 real-world tasks, where it outperforms existing state-of-the-art VLAs. The results suggest that our proposed framework, together with the training strategy, provides a path toward collaborative embodied agents.
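To make the described pipeline concrete, below is a minimal, hypothetical sketch of the ask-or-act control loop: a collaboration VLM either emits a clarification question or signals readiness to act, a connection module builds conditions from the observation and the VLM output, and a diffusion head generates low-level actions. All class names, methods, and the `<act>` token are illustrative assumptions, not the authors' API; the stubs only mimic the routing behavior described in the abstract.

```python
# Hypothetical sketch of the Ask-to-Clarify inference loop (not the authors' code).
from dataclasses import dataclass, field


@dataclass
class VLMOutput:
    text: str                              # clarification question or an action-ready signal
    hidden_state: list = field(default_factory=list)  # features passed to the connection module


class CollaborationVLM:
    """Stub for the collaboration component (a vision-language model)."""
    def respond(self, observation, instruction, dialogue_history) -> VLMOutput:
        # A real implementation would run VLM inference here; this stub just
        # asks a question on the first ambiguous turn and acts afterwards.
        if "that one" in instruction and not dialogue_history:
            return VLMOutput("Which object do you mean, the red cup or the blue cup?")
        return VLMOutput("<act>", [0.0] * 8)  # placeholder action signal + features


class ConnectionModule:
    """Adjusts observation features by the instruction to build diffusion conditions."""
    def build_condition(self, observation, vlm_output: VLMOutput) -> dict:
        return {"obs": observation, "features": vlm_output.hidden_state}


class DiffusionActionHead:
    """Stub for the diffusion-based low-level action generator."""
    def generate(self, condition: dict) -> list:
        return [0.0] * 7  # e.g., a 7-DoF action placeholder


def signal_detector(vlm_output: VLMOutput) -> bool:
    """Routes between asking a question and acting (here: a simple token check)."""
    return vlm_output.text.strip() == "<act>"


def ask_to_clarify_step(observation, instruction, dialogue_history,
                        vlm, connector, action_head) -> dict:
    out = vlm.respond(observation, instruction, dialogue_history)
    if signal_detector(out):
        condition = connector.build_condition(observation, out)
        return {"type": "action", "action": action_head.generate(condition)}
    return {"type": "question", "text": out.text}


if __name__ == "__main__":
    vlm, connector, head = CollaborationVLM(), ConnectionModule(), DiffusionActionHead()
    step = ask_to_clarify_step("rgb_frame", "pick up that one", [], vlm, connector, head)
    print(step)  # first ambiguous turn -> a clarification question rather than an action
```

In this reading, the two-stage knowledge-insulation training would correspond to first fine-tuning only `CollaborationVLM` on ambiguity-resolving dialogues, then freezing it while `ConnectionModule` and `DiffusionActionHead` are trained to produce actions.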
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Do What? Teaching Vision-Language-Action Models to Reject the Impossible (2025)
- villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models (2025)
- CLAW: A Vision-Language-Action Framework for Weight-Aware Robotic Grasping (2025)
- SeqVLA: Sequential Task Execution for Long-Horizon Manipulation with Completion-Aware Vision-Language-Action Model (2025)
- Large VLM-based Vision-Language-Action Models for Robotic Manipulation: A Survey (2025)
- Teaching Language Models To Gather Information Proactively (2025)
- DialNav: Multi-turn Dialog Navigation with a Remote Guide (2025)