
Learning in Time Varying Games

15 November 2018 at 12:00 PM - 1:00 PM

Room 207, Campus on Viale Romania, 32

Speaker: Mathias Staudigl, Maastricht University

In this paper, we examine the long-term behavior of regret-minimizing agents in time-varying games with continuous action spaces. In its most basic form, (external) regret minimization guarantees that an agent's cumulative payoff is no worse in the long run than that of the agent's best fixed action in hindsight. Going beyond this worst-case guarantee, we consider a dynamic regret variant that compares the agent's accrued rewards to those of any sequence of play. Specializing to a wide class of no-regret strategies based on mirror descent, we derive explicit rates of regret minimization relying only on imperfect gradient observations. We then leverage these results to show that players are able to stay close to Nash equilibrium in time-varying monotone games, and even converge to Nash equilibrium if the sequence of stage games admits a limit.
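To make the mirror-descent setting concrete, here is a minimal sketch of how such a no-regret strategy operates in a time-varying environment. This is a generic illustration, not the authors' algorithm: it uses the Euclidean mirror map (i.e., projected online gradient descent) on a box-constrained action set, and the function name `omd_euclidean`, the learning rate, and the drifting quadratic stage losses in the usage example are all illustrative assumptions.

```python
import numpy as np

def omd_euclidean(grad_fns, x0, lr=0.1, lo=-1.0, hi=1.0):
    """Online mirror descent with the Euclidean mirror map (projected
    online gradient descent) on a box-constrained continuous action set.

    grad_fns : iterable of gradient oracles, one per stage game; each may
               return a noisy/imperfect gradient at the current action.
    x0       : initial action.
    Returns the sequence of actions played.
    """
    x = np.asarray(x0, dtype=float)
    plays = []
    for g in grad_fns:            # stage games arrive one per round
        plays.append(x.copy())
        x = x - lr * g(x)         # mirror step (gradient step here)
        x = np.clip(x, lo, hi)    # project back onto the feasible box
    return plays

# Illustrative usage: stage losses (x - c_t)^2 whose optimum c_t drifts
# and then settles, mimicking a sequence of games that admits a limit.
targets = np.concatenate([np.linspace(-0.5, 0.5, 100), np.full(100, 0.5)])
grads = [lambda x, c=c: 2.0 * (x - c) for c in targets]
plays = omd_euclidean(grads, x0=[0.0], lr=0.1)
```

In this single-agent sketch, the iterate tracks the moving optimum and converges once the stage losses stabilize, which is the intuition behind the dynamic-regret guarantees discussed in the abstract.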

Joint work with B. Duvocelle, P. Mertikopoulos, and D. Vermeulen