Murat Kocaoglu, Amin Jaber, et al.
NeurIPS 2019
This paper explores the task of interactive image retrieval using natural language queries, where a user progressively provides input queries to refine a set of retrieval results. In particular, our work addresses this problem in the context of complex image scenes containing multiple objects. We propose Drill-down, an effective framework for encoding multiple queries with an efficient, compact state representation that significantly extends current methods for single-round image retrieval. We show that using multiple rounds of natural language queries as input can be surprisingly effective for finding arbitrarily specific images of complex scenes. Furthermore, we find that existing image datasets with textual captions provide an effective form of weak supervision for this task. We compare our method with existing sequential encoding and embedding networks, demonstrating superior performance on two proposed benchmarks: automatic image retrieval in a simulated scenario that uses region captions as queries, and interactive image retrieval using real queries from human evaluators.
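The compact multi-round state representation described in the abstract can be pictured with a small sketch. The following is a hypothetical, simplified illustration, not the authors' implementation: it assumes a GRU query encoder, a fixed number of state slots that each new query is written into, and image scoring by matching every slot against precomputed region features. The class and parameter names are invented for this example.

```python
# Hypothetical sketch of a compact multi-round query state for interactive
# image retrieval (assumed design, not the authors' released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiRoundQueryState(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=300, state_dim=256, num_slots=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, state_dim, batch_first=True)
        self.num_slots = num_slots
        self.state_dim = state_dim

    def init_state(self, batch_size):
        # One vector per slot; all slots start empty (zeros).
        return torch.zeros(batch_size, self.num_slots, self.state_dim)

    def update(self, state, query_tokens):
        # Encode the new query; prefer writing it into an unused slot,
        # otherwise overwrite the most similar existing slot.
        _, h = self.encoder(self.embed(query_tokens))        # (1, B, D)
        q = h.squeeze(0)                                      # (B, D)
        sims = torch.einsum('bkd,bd->bk',
                            F.normalize(state, dim=-1),
                            F.normalize(q, dim=-1))           # (B, K)
        empty = state.abs().sum(-1).eq(0)                     # unused slots win
        sims = sims.masked_fill(empty, float('inf'))
        slot = sims.argmax(dim=-1)                            # (B,)
        new_state = state.clone()
        new_state[torch.arange(state.size(0)), slot] = q
        return new_state

    def score(self, state, region_feats):
        # region_feats: (B, R, D) pre-projected image region features.
        # Each slot matches its best region; per-slot scores are summed.
        sim = torch.einsum('bkd,brd->bkr', state, region_feats)
        return sim.max(dim=-1).values.sum(dim=-1)             # (B,)
```

In the simulated benchmark setting described above, each round's region caption would be tokenized, used to update the state, and the updated state scored against candidate images' region features to re-rank the retrieval results.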
Xiaoxiao Guo, Shiyu Chang, et al.
AAAI 2019
Zhen Zhang, Yijian Xiang, et al.
NeurIPS 2019
Chi Han, Jiayuan Mao, et al.
NeurIPS 2019