Multitask Multimodal Prompted Training for Interactive Embodied Task Completion

Jul 1, 2023
George Pantazopoulos, Malvina Nikandrou, Amit Parekh, Bhathiya Hemanthage, Arash Eshghi, Ioannis Konstas, Verena Rieser, Oliver Lemon, Alessandro Suglia
Abstract
Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation. To tackle these challenges, we propose an Embodied MultiModal Agent (EMMA): a unified encoder-decoder model that reasons over images and trajectories, and casts action prediction as multimodal text generation. By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks. Different from previous modular approaches with independently trained components, we use a single multitask model where each task contributes to goal completion. EMMA performs on par with similar models on several VL benchmarks and sets a new state-of-the-art performance (36.81% success rate) on Dialog-guided Task Completion (DTC), a benchmark for evaluating dialog-guided agents in the Alexa Arena.
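As a rough illustration of what "casting action prediction as text generation" can look like in practice, the sketch below serializes structured agent actions into plain strings that a text decoder could emit and an environment-side parser could convert back into executable commands. The action names, string format, and visual-token placeholder (`frame_token`) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: treating action prediction as text generation by
# round-tripping structured actions through a flat string representation.
# The verbs, format, and visual placeholders are illustrative only.

from dataclasses import dataclass


@dataclass
class Action:
    verb: str          # e.g. "goto", "pickup", "toggle"
    target: str        # object mentioned in the instruction
    frame_token: str   # placeholder tying the object to a visual region


def action_to_text(action: Action) -> str:
    """Render a structured action as a target string for the decoder."""
    return f"{action.verb} {action.target} <{action.frame_token}>"


def text_to_action(generated: str) -> Action:
    """Parse a generated string back into an executable action."""
    verb, target, frame = generated.split()
    return Action(verb=verb, target=target, frame_token=frame.strip("<>"))


# Example: the decoder output "pickup mug <vis_3>" round-trips to an action
# the environment can execute on the object grounded in visual region 3.
predicted = action_to_text(Action("pickup", "mug", "vis_3"))
assert text_to_action(predicted) == Action("pickup", "mug", "vis_3")
print(predicted)  # -> pickup mug <vis_3>
```

Because every task (visual grounding, dialog, action prediction) shares this text interface, a single encoder-decoder can be trained on all of them jointly, which is the transfer property the abstract refers to.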
Type
Publication
In 2023 Conference on Empirical Methods in Natural Language Processing