Towards Automating the AI Operations Lifecycle
Matthew Arnold, Jeffrey Boston, et al.
MLSys 2020
Dialogue State Tracking (DST), a key component of task-oriented conversation systems, represents user intentions by determining the values of pre-defined slots. Existing approaches use hand-crafted templates and additional slot information to prompt large pre-trained language models and elicit slot values from the dialogue history. This requires significant manual effort and domain knowledge to design effective prompts, and it limits generalizability to new domains and tasks. In this work, we propose DiSTRICT, a generalizable in-context tuning approach for DST that retrieves highly relevant training examples for a given dialogue to create prompts automatically, without the need for human input. Additionally, our approach matches or exceeds the performance of prior work while using a much smaller model, an important advantage for real-world deployments that often have limited resource availability. Experiments on the MultiWOZ benchmark datasets show that DiSTRICT outperforms existing approaches in various zero-shot and few-shot settings.
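The retrieval step described above can be illustrated with a minimal sketch: embed the target dialogue, rank training examples by similarity, and prepend the top matches as in-context demonstrations. All names, the cosine-similarity choice, and the toy embeddings here are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of retrieval-driven in-context prompt construction.
# Embeddings, example format, and scoring are illustrative assumptions.
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(dialogue, dialogue_vec, train_examples, k=2):
    """Select the k training examples most similar to the target dialogue
    and prepend them as demonstrations, yielding a prompt with no
    hand-crafted template."""
    ranked = sorted(train_examples,
                    key=lambda ex: cosine(dialogue_vec, ex["vec"]),
                    reverse=True)
    demos = "\n".join(f"{ex['dialogue']} => {ex['state']}"
                      for ex in ranked[:k])
    return f"{demos}\n{dialogue} =>"

# Toy retrieval pool with pre-computed (fabricated) embeddings.
train = [
    {"dialogue": "I need a cheap hotel in the north",
     "state": "hotel-price=cheap; hotel-area=north",
     "vec": [1.0, 0.0, 0.2]},
    {"dialogue": "Book a table for two at an Italian place",
     "state": "restaurant-food=italian; restaurant-people=2",
     "vec": [0.0, 1.0, 0.1]},
]

prompt = build_prompt("Find me a budget hotel", [0.9, 0.1, 0.2], train, k=1)
```

In practice the embeddings would come from a learned retriever rather than fixed vectors; the point is that prompt construction reduces to nearest-neighbor lookup over the training set.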
Shivashankar Subramanian, Ioana Baldini, et al.
IAAI 2020
Gabriele Picco, Lam Thanh Hoang, et al.
EMNLP 2021
Kevin Gu, Eva Tuecke, et al.
ICML 2024