Current Trends in Graph and Stochastic Games

This event is a GAMENET workshop, held on April 07-08, 2022, in Maastricht, the Netherlands.

Aim and Topics

The aim of the workshop is to bring together researchers working on graph and stochastic games, various forms of which are studied in computer science, economics and mathematics. Despite clear differences in emphasis, these fields share a number of research goals and use similar proof techniques, so there is scope for interdisciplinary collaboration. The workshop will serve as a forum to facilitate such collaboration.

Topics of interest include

• turn-based and concurrent graph games
• repeated and stochastic games
• (computational) complexity of good strategies and of equilibria

Venue

The workshop will be held at the School of Business and Economics, Maastricht University.
Tongersestraat 53, 6211 LM Maastricht, Netherlands

How to reach Maastricht?

• via Brussels Airport (Zaventem) and subsequent train service (approx. 2:00h) to Maastricht.
• via Amsterdam Schiphol Airport and subsequent train service (approx. 2:30–3:00h) to Maastricht.
• via Düsseldorf Airport and subsequent train service (approx. 2:30–3:15h) to Maastricht.
• via Cologne-Bonn Airport and subsequent train service (approx. 2:30h) to Maastricht central station.
• via Eindhoven Airport and subsequent bus line 400 or 401 to Eindhoven Centraal (approx. 0:30h), and train to Maastricht (approx. 1:00h).
• directly to Maastricht-Aachen Airport and subsequent taxi to Maastricht. Note that this airport is very small and served by only a few flight connections.

Take Part!

Due to local COVID regulations there is a limited number of places available, and we are close to that limit. Please contact the organisers by email to take part.

Support for young researchers

We can financially support young researchers wishing to attend the workshop. These bursaries can only be offered to researchers based in Europe, and we cannot support international travel. Please send your application, consisting of a CV, the name of a promoter, and a budget plan, to totzke[ät]liverpool.ac.uk by 1st March 2022.

Posters

Post-docs and PhD students will have an opportunity to present a poster on their research during the workshop. We kindly ask participants to submit a title and abstract by email (totzke[ät]liverpool.ac.uk) as soon as possible.

Confirmed Participants

• Balachander, Mrudula ULB, Belgium
• Bordais, Benjamin Université Paris-Saclay, CNRS, France
• Bose, Sougata University of Liverpool, United Kingdom
• Brice, Léonard ULB, Belgium
• Bruyère, Véronique Université de Mons (UMONS), Belgium
• Dantam, Sai Teja Mohan University of Edinburgh
• Edhan, Omer University of Manchester
• Flesch, János Maastricht University, The Netherlands
• Filiot, Emmanuel ULB, Belgium
• Gimbert, Hugo CNRS, LaBRI, Bordeaux, France
• Hellman, Ziv Bar-Ilan University
• Horn, Florian IRIF - Université de Paris and CNRS
• Ibsen-Jensen, Rasmus University of Liverpool, United Kingdom
• Kiefer, Stefan University of Oxford, United Kingdom
• Kučera, Antonín Masaryk University, Czech Republic
• Main, James C. A. F.R.S.-FNRS & Université de Mons (UMONS), Belgium
• Mayr, Richard University of Edinburgh, United Kingdom
• Munday, Eric University of Edinburgh
• Oliu-Barton, Miquel Paris Dauphine, France
• Paul, Soumyajit LaBRI, and IRIF, France
• Peretz, Ron Bar Ilan University, Israel
• Predtetchinski, Arkadi Maastricht University, The Netherlands
• Randour, Mickael F.R.S.-FNRS & Université de Mons (UMONS), Belgium
• Raskin, Jean-Francois Université libre de Bruxelles (ULB), Belgium
• Saona, Raimundo IST Austria
• Schröder, Marc Maastricht University
• Seel, Christian Maastricht University, The Netherlands
• Shirmohammadi, Mahsa CNRS & IRIF, France
• Tamines, Clément F.R.S.-FNRS & Université de Mons (UMONS), Belgium
• Thuijsman, Frank Maastricht University, The Netherlands
• Totzke, Patrick University of Liverpool, United Kingdom
• Vandenhove, Pierre Université de Mons & Université Paris-Saclay
• Venel, Xavier LUISS Guido Carli University, Italy
• Vermeulen, Dries Maastricht University, The Netherlands
• Vigeral, Guillaume CNRS & CEREMADE, Paris Dauphine, France
• de Vos, Wout Tilburg University
• Ziliotto, Bruno CNRS & CEREMADE, Paris Dauphine, France
• Zseleva, Anna Maastricht University, The Netherlands

Programme

Thursday, April 07

09:00-09:20 Welcome

09:20-09:55 Stefan Kiefer: How to Play in Infinite MDPs
Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and nondeterministic behavior. For MDPs with a finite state space it is known that, for a wide range of objectives, there exist optimal strategies that are memoryless and deterministic. In contrast, if the state space is infinite, optimal strategies may not exist, and optimal or ε-optimal strategies may require (possibly infinite) memory. In this talk we consider qualitative objectives: reachability, safety, (co-)Büchi, and other parity objectives. The aim is to give an introduction to a collection of techniques that allow for the construction of strategies with little or no memory in countably infinite MDPs. Joint work with Richard Mayr, Mahsa Shirmohammadi, Patrick Totzke, and Dominik Wojtczak.

09:55-10:30 Mahsa Shirmohammadi: Büchi Objectives in Countable MDPs
I will talk about countably infinite Markov decision processes with Büchi objectives, which ask to visit a given subset F of states infinitely often. This talk is based on a collaboration with Stefan Kiefer, Patrick Totzke and Richard Mayr. A question left open by T. P. Hill in 1979 is whether there always exist ε-optimal Markov strategies, i.e., strategies that base decisions only on the current state and the number of steps taken so far. I will talk about the negative answer we provided to this question, and give intuition on why Markov strategies with only one bit of extra memory are sufficient.

10:30-11:00 Coffee break

11:00-11:35 Richard Mayr: Stochastic Reachability Games on Countably Infinite Graphs
We consider both concurrent and turn-based two-player stochastic games on countably infinite game graphs. The reachability objective is arguably one of the most basic objectives in graph games: a play satisfies the objective iff it ever visits a given set of target states. The Maximizer (resp. Minimizer) aims to maximize (resp. minimize) the probability of the objective, i.e., the probability of visiting the set of target states. We give an overview of how much memory is required for good Maximizer (resp. Minimizer) strategies in different variants of reachability games, where "good" means ε-optimal or optimal, respectively.

11:35-12:10 Xavier Venel: Long-term Values in Markov Decision Processes
We study Markov decision processes with stage payoffs. In the literature, many different ways to aggregate these stage payoffs have been introduced: finite average, discounted average, weighted average, inferior limit of the mean average, and so on. I will present several recent results, obtained in a series of papers, establishing links between these notions for MDPs with finite state and action spaces.

12:10-14:00 Lunch

14:00-14:35 Frank Thuijsman: The cycle value for matrix games
We consider matrix games from the dynamic perspective where players move in turns and chosen actions remain fixed for two stages. So, for a game based on matrix $$A$$, player 1 initiates play by choosing row $$i_1$$; next player 2 chooses column $$j_1$$ and player 1 receives $$A(i_1,j_1)$$ from player 2; next player 1 chooses row $$i_2$$ and receives $$A(i_2,j_1)$$; next player 2 chooses $$j_2$$ and player 1 receives $$A(i_2,j_2)$$; and so on. We assume that player 1 wants to maximize the long-run average of $$A(i_1,j_1), A(i_2,j_1), A(i_2,j_2), A(i_3,j_2), A(i_3,j_3), A(i_4,j_3), \ldots$$, while player 2 wants to minimize that number. We show that this cycle value exists and can be achieved by pure strategies. However, it is not immediately obvious how to find these explicitly.
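The alternating payoff stream in the cycle-value game above is easy to mis-read, so here is a minimal sketch that generates it for given (finite prefixes of) action sequences. The function names and the example matrix are purely illustrative, not from the talk:

```python
def cycle_payoff_stream(A, rows, cols):
    """Stage payoffs A(i1,j1), A(i2,j1), A(i2,j2), A(i3,j2), ... of the
    alternating-move game: each chosen action stays fixed for two stages."""
    out = []
    for t in range(len(cols)):
        out.append(A[rows[t]][cols[t]])          # column j_{t+1} just chosen against current row
        if t + 1 < len(rows):
            out.append(A[rows[t + 1]][cols[t]])  # new row chosen against the standing column
    return out


def long_run_average(payoffs):
    """Average payoff of a finite play prefix."""
    return sum(payoffs) / len(payoffs)


# Illustrative 2x2 game: rows i1=0, i2=1 and columns j1=0, j2=1
A = [[3, 0], [1, 2]]
stream = cycle_payoff_stream(A, [0, 1], [0, 1])  # -> [A(0,0), A(1,0), A(1,1)] = [3, 1, 2]
avg = long_run_average(stream)                   # -> 2.0
```

Player 1 controls the row sequence and wants this average high; player 2 controls the columns and wants it low. The talk's result is that the limiting value of this tug-of-war exists and is achieved in pure strategies.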
14:35-15:10 Antonín Kučera: Solution Concepts and Objectives in Adversarial Patrolling Games
In adversarial patrolling, a mobile Defender strives to discover possible ongoing intrusions at vulnerable targets initiated by the Attacker. The task of constructing an optimal Defender strategy achieving optimal protection can be formalized as a two-player graph game. In the talk, we give an overview of the basic solution concepts and strategy-synthesis techniques, and discuss the role of randomization.

15:10-15:45 Coffee break with posters

15:45-16:20 Miquel Oliu-Barton: The value of a stochastic game
In this talk I will present two results: first, an explicit formula for the value of a stochastic game; second, an algorithm derived from it which is "as efficient as it can be".

16:20-16:55 Ron Peretz: Asynchronous DeGroot Dynamics
We analyze an asynchronous variation of the DeGroot dynamics. In this model, we study the convergence of opinions, the consensus of the limiting opinions, and the ability to aggregate information ("wisdom of the crowd"). The results are obtained for both finite and infinite networks. We provide estimates of the speed of convergence in finite networks. A fragmentation process of independent interest turns out to be closely related to the asynchronous DeGroot dynamics, and this relation is the basis of our analysis. Joint work with Galit Ashkenazi-Golan, Dor Elboim, and Yuval Peres.

19:00-22:00 Dinner
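For readers unfamiliar with the DeGroot model mentioned above: in the classical synchronous version, every agent simultaneously replaces its opinion by a weighted average of all opinions. A natural asynchronous variant, sketched below, updates one randomly chosen agent per step. This is a toy illustration of the flavour of the model; the weights, the uniform scheduling, and the function name are illustrative assumptions, and the talk's exact setting may differ:

```python
import random


def async_degroot(W, x0, steps, seed=0):
    """Asynchronous DeGroot dynamics: at each step a single, uniformly chosen
    agent replaces its opinion by the W-weighted average of all current
    opinions. W is a row-stochastic influence matrix; x0 the initial opinions."""
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        i = rng.randrange(n)  # only one agent updates per step (asynchrony)
        x[i] = sum(W[i][j] * x[j] for j in range(n))
    return x


# Two agents who each weight both opinions equally: the gap between their
# opinions halves at every step, so they approach consensus.
W = [[0.5, 0.5], [0.5, 0.5]]
opinions = async_degroot(W, [0.0, 1.0], steps=200)
```

Questions like those in the talk (do opinions converge, is the limit a consensus, does it aggregate the initial information?) are about the long-run behaviour of exactly this kind of update, on finite and infinite networks.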

Friday, April 08

09:20-09:55 Jean-François Raskin: The Adversarial Stackelberg Value in Quantitative Games
In this talk, we present the notion of adversarial Stackelberg value for two-player non-zero-sum games played on bi-weighted graphs with the mean-payoff and discounted-sum functions. The adversarial Stackelberg value of Player 0 is the largest value that Player 0 can obtain when announcing her strategy to Player 1, who in turn responds with any of his best responses. For the mean-payoff function, we show that the adversarial Stackelberg value is not always achievable, but ε-optimal strategies exist. We show how to compute this value and prove that the associated threshold problem is in NP. For the discounted-sum payoff function, we draw a link with the target discounted-sum problem, which explains why the problem is difficult to solve for this payoff function. We also provide solutions to related gap problems.

09:55-10:30 Véronique Bruyère: Stackelberg-Pareto Synthesis
In this talk, we present the framework of two-player Stackelberg games played on graphs in which Player 0 announces a strategy and Player 1 responds rationally with a strategy that is an optimal response. While it is usually assumed that Player 1 has a single objective, we consider here the new setting where he has several. In this context, after responding with his strategy, Player 1 gets a payoff in the form of a vector of Booleans corresponding to his satisfied objectives. Rationality of Player 1 is encoded by the fact that his response must produce a Pareto-optimal payoff given the strategy of Player 0. We study the Stackelberg-Pareto synthesis problem, which asks whether Player 0 can announce a strategy that satisfies his objective, whatever the rational response of Player 1. We solve this problem for several types of omega-regular objectives. In particular, for games with parity objectives, we show that this problem is fixed-parameter tractable and NEXPTIME-complete. We also discuss the related Stackelberg-Pareto verification problem, which asks whether a given finite-memory strategy announced by Player 0 satisfies his objective, whatever the rational response of Player 1.

10:30-11:00 Coffee break

11:00-11:35 Bruno Ziliotto: Percolation games
Percolation games are a new class of zero-sum stochastic games, where two players move a ball in turn on the vertices of $$\mathbb{Z}^d$$. The stage payoff is stochastic and its distribution depends on the position of the ball. Conditions are provided under which the $$n$$-stage game value has a limit. Connections to first-passage percolation and homogenization of Hamilton-Jacobi equations are discussed. This talk is based on several works with Guillaume Garnier, Luc Attia and Raimundo Saona.

11:35-12:10 Rasmus Ibsen-Jensen: Absorbing games with a clock and two bits of memory
This talk considers a special case of two-player zero-sum concurrent stochastic mean-payoff games, called absorbing games, in which only one state can be left once entered. It shows that such games have ε-optimal strategies that use a clock (i.e., knowledge of how many rounds into the game we are) and two bits of memory. In contrast, it is known that in concurrent stochastic mean-payoff games (or absorbing games, for that matter), a strategy that uses finite memory without a clock, or a clock together with deterministically updated finite memory, cannot in general ensure anything.

12:10-14:00 Lunch

14:00-14:35 Guillaume Vigeral: Zero-sum stochastic games with intermittent observation of the state
We consider asymptotics of finite zero-sum stochastic games. Bewley and Kohlberg established in 1976 that the discounted value converges when players fully observe the state; in 2016 Ziliotto constructed an example in which the state is not observed at all and the value oscillates. We investigate the middle ground in which the state is publicly observed at some stages, chosen either deterministically or randomly.

14:35-15:10 Dries Vermeulen: A competitive search game with a moving target
We introduce a discrete-time search game in which two players compete to find an invisible object first. The object moves according to a time-varying Markov chain on finitely many states. The players are active in turns. At each period, the active player chooses a state. If the object is there, he finds the object and wins. Otherwise the object moves and the game enters the next period. We show that this game admits a value and that, for any error term $$\varepsilon > 0$$, each player has a pure (subgame-perfect) $$\varepsilon$$-optimal strategy. Interestingly, a 0-optimal strategy does not always exist. We derive results on the analytic and structural properties of the value and the $$\varepsilon$$-optimal strategies. We devote special attention to the important time-homogeneous case, where additional results hold.

15:10-15:45 Coffee break

15:45-16:20 Hugo Gimbert: Controlling a population
This talk focuses on the population control problem. In this setting, an arbitrarily large yet finite population of agents is controlled in a dynamic way, in an infinite sequence of steps. Each agent is modelled as a copy of a finite-state system. At each step, a controller applies the same action to every agent, and every agent performs a local transition. This setting is inspired by the control of a biological system, namely a population of yeasts, where the controller may only change the environment common to all cells. We review some recent decidability and complexity results in two cases: when agents are adversarially strategic (zero-sum game) and when they behave randomly (Markov decision process). We also present some open questions.

16:20-16:55 Anna Zseleva: The Feasible Set and Folk Theorems for Infinitely Repeated Games with Switching Costs
We study infinitely repeated games where each player has to pay a certain (switching) cost each time she changes her action. We show that whenever the players are patient enough, the Folk Theorem holds. Moreover, we show that if the switching costs are symmetric, the set of equilibrium payoffs is obtained by considering the payoffs of a simple one-shot auxiliary game. This generalizes previous studies of equilibrium payoffs in limit cases and under the assumption that switching costs are independent of the actions being switched. Finally, we show that the introduction of switching costs has a negative impact on a player in the infinitely repeated game, but can be beneficial for him in a finitely repeated game.
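The rules of the competitive search game in Dries Vermeulen's talk are concrete enough to simulate. The sketch below plays one run of the game; for simplicity the object's chain is kept time-homogeneous (the talk allows time-varying chains), and the transition matrix, policies, and function name are illustrative assumptions rather than anything from the talk:

```python
import random


def play_search_game(P, policies, start, max_periods, seed=0):
    """One play of the competitive search game: an invisible object moves on
    finitely many states according to transition matrix P; the two players
    alternate, and the active player wins as soon as the state it names holds
    the object. Returns the winning player's index, or None on timeout."""
    rng = random.Random(seed)
    states = list(range(len(P)))
    obj = start
    for t in range(max_periods):
        active = t % 2                  # players are active in turns
        if policies[active](t) == obj:
            return active               # active player finds the object and wins
        obj = rng.choices(states, weights=P[obj])[0]  # object moves, next period
    return None


# Toy chain: the object hops deterministically between states 0 and 1.
P = [[0, 1], [1, 0]]
# Player 0 always searches state 0 and catches the object immediately.
winner = play_search_game(P, [lambda t: 0, lambda t: 0], start=0, max_periods=10)
```

The talk's results concern optimal play of this game: it has a value, pure subgame-perfect ε-optimal strategies always exist, but exactly optimal (0-optimal) strategies need not.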