Workshop

Foundations of Agentic Systems Theory

Abstract

As with any complex system, the most interesting and consequential behaviors often arise not from the parts in isolation, but from the patterns of interaction between them. The current development of agentic AI has largely ignored these considerations, focusing instead on designing more (individually) capable agents. Failing to account for these interaction effects as AI agents become more widespread will lead to a significant underestimation of both their capabilities and risks.

There is an extensive body of knowledge on such interaction effects across various fields, but it is not currently clear how applicable existing theoretical tools are to agentic AI systems. Tools from control theory and game/economic theory typically impose strong structural assumptions on both the agents and the overall system (such as the form of objective functions, the state evolution/dynamics, or the degree of rationality) in order to obtain concrete results. Methods from the social sciences, on the other hand, use observations of human behavior, cultural contexts, and social norms to make more measured claims about probable patterns within the complexity and variability of human experience. Agentic AI systems do not cleanly map to either setting: the underlying LLM in an AI agent exhibits neither the rational behavior of idealized control/game/economic agents nor the culturally, emotionally, and evolutionarily shaped behaviors that characterize human agents.

The Foundations of Agentic Systems Theory (FAST) workshop aims to evaluate the degree to which existing theory can describe the behavior of agentic AI systems. Drawing from a variety of fields (notably beyond computer science, including developmental psychology, neuroscience, and social dynamics), FAST will explore whether and how mechanisms of emergent behavior in other systems carry over to systems of LLM-based agents, which properties of the underlying agents (and their LLMs) facilitate these behaviors, and how well we can control or induce desirable system-wide outcomes. We strongly encourage interdisciplinary participation (via both contributions and invited talks), with the ultimate goal of contributing to a fundamentally better understanding of the processes that govern the system-level behavior (and risks) of agentic AI.