Publication
EC 2024
Workshop

Foundation Models and Game Theory

Abstract

Foundation models (FMs), such as large language models (LLMs), are machine learning models that are pre-trained on large datasets and can be efficiently adapted to a variety of downstream tasks. Given the tremendous impact FMs have been making, they could be a key step toward building AI agents that autonomously decide to cooperate or compete with other AI agents and humans to achieve their goals in complex multi-agent environments. In particular, FMs have been pushing the limits of the complexity of the environments in which such AI agents interact via natural language and multimodal data. However, in many cases the strategic reasoning capabilities of such agents have been severely lacking. In parallel, game theory has laid a strong foundation for the strategic reasoning capabilities of AI agents. The goal of this workshop is therefore to bring together researchers working on foundation models, game theory, and their intersection to exchange ideas on how to push the frontiers of both fields toward realizing a better (e.g., more efficient, more sustainable, happier, safer) society with strategic AI agents.

The topics of the workshop include, but are not limited to:

- The creation of strategic LLMs. How to augment existing LLMs with strategic reasoning capabilities based on game theory.
- FMs for game theory. How to use FMs to broaden the applicability of game theory and mechanism design to complex multi-agent systems that cannot be handled with traditional approaches. For example, one may train FMs to solve complex games, or combine natural language capabilities, such as those provided by LLMs, with game theory in settings such as negotiation. FMs may also substantially improve the data efficiency of learning approaches in game theory and mechanism design. Moreover, LLMs may serve as agents in the study of behavioral game theory.
- Game theory for FMs. FMs currently require huge amounts of data and computational resources to train. Game-theoretic approaches may improve the learning process of FMs, reduce their computational or data requirements, and enhance their performance or robustness at inference time.
- Game theory for multi-FM applications. Orchestrating multiple FMs is expected to yield significant benefits in the efficiency and performance of FM-based applications. To this end, game theory and mechanism design may be used, for example, to improve the way applications composed of multiple LLMs are built. Game theory would also give us a better understanding of the consequences of interactions among multiple FMs.
- Game theory for the societal implications of FMs. For example, game-theoretic methods may help us better understand the impact of FMs on society and design mechanisms appropriate for societies that involve both humans and FMs.

This workshop builds on the DIMACS Workshop on Foundation Models, Large Language Models, and Game Theory, held on October 19-20, 2023.
