
Learning in Time Varying Games

15 November 2018, 12:00 - 13:00

Room 207, Viale Romania 32 Campus

Speaker: Mathias Staudigl, Maastricht University

In this paper, we examine the long-term behavior of regret-minimizing agents in time-varying games with continuous action spaces. In its most basic form, (external) regret minimization guarantees that an agent's cumulative payoff is no worse in the long run than that of the agent's best fixed action in hindsight. Going beyond this worst-case guarantee, we consider a dynamic regret variant that compares the agent's accrued rewards to those of any sequence of play. Specializing to a wide class of no-regret strategies based on mirror descent, we derive explicit rates of regret minimization relying only on imperfect gradient observations. We then leverage these results to show that players are able to stay close to Nash equilibrium in time-varying monotone games – and even converge to Nash equilibrium if the sequence of stage games admits a limit.
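To illustrate the kind of dynamic-regret guarantee described in the abstract, here is a minimal sketch (not the authors' code) of online mirror descent with the Euclidean mirror map, i.e. projected online gradient descent, for a single agent facing a slowly drifting quadratic loss. All problem parameters (the horizon `T`, the step size `eta`, the drifting target `b_t`) are illustrative assumptions.

```python
import numpy as np

def project(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius
    (the continuous, compact action space in this toy example)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

T = 200          # horizon (illustrative)
eta = 0.1        # step size (illustrative)
x = np.zeros(2)  # the agent's action in a continuous action space

agent_losses, comparator_losses = [], []
for t in range(T):
    # Stage game at time t: a slowly drifting quadratic loss
    # f_t(x) = ||x - b_t||^2 whose minimizer b_t moves along a circle.
    b_t = np.array([np.cos(t / 50.0), np.sin(t / 50.0)])
    agent_losses.append(np.sum((x - b_t) ** 2))
    # Dynamic comparator: the best feasible action at each stage,
    # rather than a single fixed action in hindsight.
    comparator_losses.append(np.sum((project(b_t) - b_t) ** 2))
    # Mirror-descent step with the Euclidean mirror map:
    # gradient step followed by projection onto the action set.
    grad = 2.0 * (x - b_t)
    x = project(x - eta * grad)

dynamic_regret = sum(agent_losses) - sum(comparator_losses)
print(f"average dynamic regret: {dynamic_regret / T:.4f}")
```

Because the comparator drifts slowly, the averaged dynamic regret stays small: the iterate tracks the moving minimizer up to a lag proportional to the drift, which mirrors the abstract's message that players can stay close to (and, if the stage games converge, reach) equilibrium.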

Joint work with B. Duvocelle, P. Mertikopoulos, and D. Vermeulen