Image Manipulation via Neuro-Symbolic Networks
Abstract
We are interested in image manipulation via natural language text, a task that is useful for a range of AI applications but requires complex reasoning over multi-modal spaces. Recent neuro-symbolic approaches have proven quite effective at such tasks, as they offer better modularity, interpretability, and generalizability. One noteworthy approach is NSCL [9], developed for the task of Visual Question Answering (VQA). We extend NSCL to the image manipulation task and propose a solution referred to as NEUROSIM. Unlike previous works, which either require training with supervised data or can handle only very simple reasoning instructions over single-object scenes, NEUROSIM can perform complex multi-hop reasoning over multi-object scenes and requires only weak supervision in the form of annotated data for the VQA task. On the language side, NEUROSIM contains neural modules that parse an instruction into a symbolic program that guides the manipulation. These programs are based on a Domain-Specific Language (DSL) comprising object attributes as well as manipulation operations. On the perceptual side, NEUROSIM contains neural modules that first generate a scene graph of the input image and then modify the scene graph representation in accordance with the parsed instruction. To train these modules, we design novel loss functions that test the correctness of the manipulated object and scene graph representations via query networks trained solely on the VQA dataset. An image decoder is trained to render the final image from the manipulated scene graph representation. The entire NEUROSIM pipeline is trained without any intermediate supervision. Extensive experiments demonstrate that our approach is highly competitive with state-of-the-art supervised baselines.
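
To make the DSL-driven manipulation concrete, the following is a minimal, runnable sketch of how a parsed symbolic program (e.g., Filter followed by Change) might edit a scene graph. All structures and function names here are hypothetical illustrations under our own assumptions, not the authors' actual code or DSL.

    from dataclasses import dataclass

    # Toy scene-graph structures; attributes are plain key-value pairs
    # (e.g., {"shape": "cube", "color": "blue"}). Hypothetical, for
    # illustration only.

    @dataclass
    class Obj:
        attrs: dict

    @dataclass
    class SceneGraph:
        objects: list

    def filter_objects(graph, **conditions):
        # Symbolic Filter: select objects whose attributes match all
        # the given conditions.
        return [o for o in graph.objects
                if all(o.attrs.get(k) == v for k, v in conditions.items())]

    def change_attribute(objects, attr, value):
        # Symbolic Change: edit an attribute of the selected objects
        # in place, mimicking a scene-graph manipulation step.
        for o in objects:
            o.attrs[attr] = value

    # Instruction "make the small cube red" parsed into the program
    # Filter(shape=cube, size=small) -> Change(color=red):
    graph = SceneGraph(objects=[
        Obj({"shape": "cube", "size": "small", "color": "blue"}),
        Obj({"shape": "sphere", "size": "large", "color": "green"}),
    ])
    targets = filter_objects(graph, shape="cube", size="small")
    change_attribute(targets, "color", "red")
    assert graph.objects[0].attrs["color"] == "red"

In the actual system, the Filter and Change steps operate over learned neural object representations rather than symbolic dictionaries, and the edited scene graph is rendered back into an image by the trained decoder.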