AI for Low-Code for AI
Abstract
Low-code programming allows citizen developers to create programs with minimal coding effort, typically via visual (e.g., drag-and-drop) interfaces. At the same time, recent language-model-powered tools such as Copilot and ChatGPT generate programs from natural language instructions. We argue that these modalities are complementary: tools like ChatGPT greatly reduce the need to memorize large APIs but still require their users to read (and modify) textual code, whereas visual tools abstract away most or all textual code but struggle to provide easy access to large APIs. At their intersection, we propose LowCoder, the first low-code tool for developing machine-learning pipelines that supports both a visual programming interface (LowCoder_VP) and a natural language interface (LowCoder_NL). We develop a novel task formulation and benchmark several language models to build LowCoder_NL. We then leverage this tool to provide some of the first insights into whether and how these two modalities help programmers, by conducting a user study in which we ask 20 developers with varying levels of expertise to implement ML pipelines using LowCoder. Overall, we find that LowCoder is especially useful for (i) Discoverability: using LowCoder_NL, participants discovered new operators in 75% of the tasks, compared to just 32.5% using web search and 27.5% scrolling through options, respectively, and (ii) Iterative Composition: 82.5% of tasks were completed successfully, and many initial pipelines were subsequently improved. Qualitative analysis shows that language models helped users discover how to implement constructs when they knew what to do, but failed to support novices who lacked clarity on what they wanted to accomplish. Overall, we demonstrate the benefits of combining the power of language models with visual low-code programming by building LowCoder and conducting a user study.
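To ground what "composing an ML pipeline" means here, the following is a minimal sketch of the kind of pipeline a user might assemble step by step, whether by dragging operators in a visual interface or by describing each step in natural language (e.g., "scale the features, then classify with logistic regression"). The specific operators, dataset, and scikit-learn API are illustrative assumptions, not taken from the paper or from LowCoder's actual operator set.

```python
# Hypothetical example: an ML pipeline of the sort a low-code user might
# compose operator by operator. Each Pipeline step stands in for one
# "operator" the user would discover and add.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),          # preprocessing operator
    ("clf", LogisticRegression(max_iter=200)),  # classifier operator
])
pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
```

Iterative composition then amounts to editing this structure, for example swapping the classifier or inserting a feature-selection operator between the two steps, without rewriting the rest of the pipeline.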