Dilated Convolution for Time Series Learning
Wang Zhang, Subhro Das, et al.
ICASSP 2025
No-regret learning has a long history of being closely connected to game theory. Recent works have devised uncoupled no-regret learning dynamics that, when adopted by all the players in normal-form games, converge to various equilibrium solutions at a near-optimal rate of $\widetilde{O}(T^{-1})$, a significant improvement over the $O(1/\sqrt{T})$ rate of classic no-regret learners. However, analogous convergence results are scarce in Markov games, a more general setting that lays the foundation for multi-agent reinforcement learning. In this work, we close this gap by showing that the optimistic follow-the-regularized-leader (OFTRL) algorithm, together with appropriate value-update procedures, can find $\widetilde{O}(T^{-1})$-approximate (coarse) correlated equilibria in full-information general-sum Markov games within $T$ iterations. Numerical results are also included to corroborate our theoretical findings.
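To make the dynamics concrete, below is a minimal sketch of uncoupled OFTRL with an entropic regularizer (optimistic Hedge) in a two-player general-sum normal-form game, the stateless special case of the Markov-game setting above. The time-averaged joint play forms an approximate coarse correlated equilibrium whose gap shrinks with $T$. The payoff matrices, step size `eta`, and the helper `oftrl_cce` are illustrative assumptions; the paper's Markov-game value-update procedure is not reproduced here.

```python
import numpy as np

def softmax(z):
    """Stable softmax over a 1-D array."""
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def oftrl_cce(A, B, T=5000, eta=0.1):
    """Run uncoupled OFTRL (entropic regularizer, i.e., optimistic Hedge)
    for T rounds in a two-player general-sum game with payoff matrices
    A (row player) and B (column player).

    Returns the time-averaged joint distribution of play and its CCE gap
    (best fixed-action deviation benefit across both players).
    """
    n, m = A.shape
    Gx, gx = np.zeros(n), np.zeros(n)  # cumulative / most recent payoff vectors
    Gy, gy = np.zeros(m), np.zeros(m)
    joint = np.zeros((n, m))           # running sum of outer(x_t, y_t)

    for _ in range(T):
        # OFTRL with entropy regularizer: the most recent payoff vector is
        # counted twice -- it doubles as the optimistic prediction m_{t+1} = g_t.
        x = softmax(eta * (Gx + gx))
        y = softmax(eta * (Gy + gy))
        gx, gy = A @ y, B.T @ x        # expected payoffs against the opponent
        Gx += gx
        Gy += gy
        joint += np.outer(x, y)

    mu = joint / T
    # CCE gap: value of the best fixed deviation minus the realized value.
    gap_row = (A @ mu.sum(axis=0)).max() - (A * mu).sum()
    gap_col = (B.T @ mu.sum(axis=1)).max() - (B * mu).sum()
    return mu, max(gap_row, gap_col)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.uniform(-1.0, 1.0, size=(4, 4))
    B = rng.uniform(-1.0, 1.0, size=(4, 4))
    for T in (100, 1_000, 10_000):
        _, gap = oftrl_cce(A, B, T=T)
        print(f"T={T:>6d}  CCE gap={gap:.5f}")  # gap shrinks as T grows
```

In normal-form games, optimistic updates are known to make the players' summed regrets nearly constant, so the printed CCE gap should decay close to the $\widetilde{O}(T^{-1})$ pace the abstract describes for the richer Markov-game setting.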
Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Michael Muller, Anna Kantosalo, et al.
CHI 2024
David W. Jacobs, Daphna Weinshall, et al.
IEEE Transactions on Pattern Analysis and Machine Intelligence