Aggregative game
In game theory, an aggregative game (AG), sometimes called a summarization game, is a game in which each player's payoff is a function of the player's own strategy and an aggregate of all players' strategies.
Definitions
Consider a standard non-cooperative game with n players, where the set of strategies (actions) available to each player i is denoted by <math>S_i</math>. The set of all possible strategy profiles is denoted by <math> S=S_1 \times S_2 \times \ldots \times S_n </math>. Each player i has a payoff function <math>u_i:S \to \mathbb{R} </math>.
Simple aggregative game
In the simplest setting studied in the context of aggregative games, <math>S_i</math> is assumed to be one-dimensional, <math> S_i \subseteq \mathbb{R} </math>. That is, the game amounts to each player choosing a number simultaneously with the others. The game is called an aggregative game if for each player i there exists a function <math>\tilde{u}_i:S_i \times \mathbb{R} \to \mathbb{R} </math> such that for all <math> s \in S </math>:
: <math> u_i(s)=\tilde{u}_i \left( s_i,\sum_{j=1}^n s_j \right) </math>
In words, payoff functions in aggregative games depend only on the player's own strategy and on the aggregate <math>\sum s_j</math>.
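To make the definition concrete, here is a minimal sketch in Python. The payoff function and the numbers are purely illustrative (a hypothetical linear-in-aggregate payoff, not taken from the text): the point is that <math>u_i(s)</math> can be evaluated knowing only <math>s_i</math> and the single number <math>\sum_j s_j</math>.

```python
# Sketch of a simple aggregative game with a hypothetical payoff
# u_i(s) = s_i * (10 - aggregate): the full profile s enters the payoff
# only through the player's own strategy s_i and the aggregate sum_j s_j.

def u_tilde(s_i, aggregate):
    """Reduced payoff: own strategy and the aggregate suffice."""
    return s_i * (10 - aggregate)

def payoff(i, s):
    """Full payoff u_i(s), computed via the aggregative form."""
    return u_tilde(s[i], sum(s))

s = [1.0, 2.0, 3.0]          # a strategy profile for n = 3 players
print(payoff(0, s))          # u_1(s) = 1 * (10 - 6) = 4.0
```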
Generalized aggregative game
The definition can be generalized by replacing the sum with an arbitrary increasing function. Formally, in a generalized aggregative game (with one-dimensional strategy sets), there exist for each player i a function <math>\tilde{u}_i:S_i \times \mathbb{R} \to \mathbb{R} </math> and a common function <math> g:S \to \mathbb{R} </math> such that for all <math> s \in S </math>:<blockquote><math>u_i(s)=\tilde{u}_i \left( s_i, g(s) \right)</math>.</blockquote>Usually, it is assumed that the aggregator function is additively separable, that is,<blockquote><math> g(s_1,\ldots,s_n)=g_0\left(\sum_{j=1}^n g_j(s_j)\right) </math></blockquote>where <math> g_j: S_j\to \mathbb{R} </math> and <math> g_0: \mathbb{R}\to \mathbb{R} </math>, so<blockquote><math>u_i(s)=\tilde{u}_i \left( s_i, g_0\left(\sum_{j=1}^n g_j(s_j)\right) \right)</math>.</blockquote>Any aggregative game is generalized aggregative, as seen by taking <math> g(s_1,\ldots,s_n)=\sum_j s_j </math>.
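A small sketch of an additively separable but non-linear aggregator. The choice <math>g_j=\log</math>, <math>g_0=\exp</math> is a hypothetical example (so the aggregate is the product of strategies), as is the payoff function:

```python
import math

# Hypothetical additively separable aggregator: with g_j = log and
# g_0 = exp, the aggregate g(s) equals the *product* of the strategies,
# yet it still fits the form g_0(sum_j g_j(s_j)).

def g(s):
    return math.exp(sum(math.log(s_j) for s_j in s))   # = product of s_j

def u_tilde(s_i, aggregate):
    """Hypothetical payoff depending on s_i and the aggregate only."""
    return s_i / (1.0 + aggregate)

def payoff(i, s):
    return u_tilde(s[i], g(s))

s = [1.0, 2.0, 4.0]
print(round(g(s), 6))        # 8.0, the product of the strategies
```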
Quasi-aggregative games
The definition above can be further generalized by allowing each player to have a different aggregation function. This is called a quasi-aggregative game.
Multi-dimensional aggregators
The aggregator can be multi-dimensional: <math> g:S \to \mathbb{R}^d </math>, for some positive integer d. Each coordinate of the aggregator is assumed to be additively separable, that is:<blockquote><math>g(s) = [g^1(s),\ldots,g^d(s)]</math> where
<math>g^k(s) = g^k_0(\sum_{j=1}^n g^k_j(s_j))</math></blockquote>where <math> g^k_j: S_j\to \mathbb{R} </math> and <math> g^k_0: \mathbb{R}\to \mathbb{R} </math>. This class of games generalizes both anonymous games and weighted congestion games:
- In anonymous games, each player selects an action from a finite set, and the utility of each player depends only on his own action and on the number of players who chose each action. Here, d is the number of actions, and for each action k, <math> g^k_j(s_j) </math> is 1 if <math> s_j=k </math> and 0 otherwise, and <math> g^k_0 </math> is the identity function. Hence, <math> g^k(s) </math> is the number of players who chose action k in profile s.
- In weighted congestion games, each player selects a subset of resources from a finite family of allowed subsets, and the utility of each player depends only on his own subset and on the total weight of players who choose each resource. Here, d is the number of resources, and for each resource k, <math> g^k_j(s_j) </math> is the weight of player j if <math> k \in s_j </math> and 0 otherwise, and <math> g^k_0 </math> is the identity function.
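The anonymous-game case above can be sketched directly (a minimal illustration, with an invented five-player profile):

```python
# Sketch of the multi-dimensional aggregator of an anonymous game:
# d = number of actions; coordinate k counts the players choosing action k
# (g^k_j(s_j) = 1 if s_j == k else 0, and g^k_0 = identity).

def anonymous_aggregator(s, d):
    """Return [g^0(s), ..., g^(d-1)(s)]: the count of each action in s."""
    counts = [0] * d
    for s_j in s:
        counts[s_j] += 1
    return counts

profile = [0, 2, 2, 1, 2]                  # 5 players, actions in {0, 1, 2}
print(anonymous_aggregator(profile, 3))    # [1, 1, 3]
```

Each player's utility can then be computed from his own action and this count vector alone, which is exactly the aggregative structure.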
Example
Consider the Cournot competition, where firm i has payoff/profit function <math>u_i(s)=s_i P\left(\sum s_j\right)-C_i(s_i) </math>, where:
- <math> P </math> is the inverse demand function - it maps the total supplied amount to the price of the product;
- <math> C_i </math> is the cost function of firm i.
This is an aggregative game since <math>u_i(s)=\tilde{u}_i\left(s_i,\sum s_j\right) </math> where <math>\tilde{u}_i(s_i,X)=s_i P(X)-C_i(s_i) </math>.
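The Cournot example can be worked numerically. The sketch below assumes linear inverse demand <math>P(X)=a-bX</math> and linear costs <math>C_i(s_i)=c_i s_i</math> with illustrative parameters; the best response has the closed form <math>s_i = \max(0, (a - c_i - b\,X_{-i})/(2b))</math>, where <math>X_{-i}</math> is the aggregate output of the other firms.

```python
# Cournot duopoly as an aggregative game, with assumed linear inverse
# demand P(X) = a - b*X and linear costs C_i(s_i) = c[i]*s_i.
# Each firm's best response depends on the others only through their
# aggregate output.

a, b = 10.0, 1.0
c = [1.0, 1.0]                     # illustrative marginal costs

def best_response(i, s):
    """argmax over s_i >= 0 of s_i * P(s_i + rest) - c[i] * s_i."""
    rest = sum(s) - s[i]           # aggregate output of the other firms
    return max(0.0, (a - c[i] - b * rest) / (2 * b))

s = [0.0, 0.0]
for _ in range(100):               # simultaneous best-reply dynamics
    s = [best_response(i, s) for i in range(2)]

print([round(x, 4) for x in s])    # converges to (a-c)/(3b) = 3.0 each
```

With these parameters the dynamics converges to the symmetric Cournot equilibrium, each firm producing <math>(a-c)/(3b)=3</math>.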
Some other natural examples of aggregative games are:
- Public goods game - the utility of each player depends only on his own contribution to the public good, and on the total amount contributed by all players.
- Pollution game - the utility of each player depends on its own pollution level and on the total pollution level.
- Congestion game - the utility of each player depends on his own choice and on the aggregate congestion caused by others.
- Bertrand competition.
Backward reply correspondences play a crucial role in comparative statics analysis. Quasi-AGs (hence generalized AGs, hence AGs) are best-response potential games if best-response correspondences are either increasing or decreasing.
History of terms and results
One-dimensional strategy sets, one-dimensional aggregators
AGs were first studied under the assumption that the players' strategy sets are one-dimensional (each player chooses a number), and the aggregator function is also one-dimensional (returns a number).
Dubey, Mas-Colell and Shubik (1980) presented the "Aggregation Axiom": each player's utility depends on the player's own action and the sum of all players' actions.
Corchon (1994), refining a previous result by Novshek (1985), studied one-dimensional AGs in which the best-response correspondence of each player has a selection that is a decreasing function (a condition called weak strategic substitutes). He proved that such games always have a pure Nash equilibrium (PNE). The proof proceeds in three propositions: (1) a special case in which the strategies of each player are integers in the interval [0,m]; (2) a more general case in which the strategies are arbitrary integers; (3) the most general case in which the strategies are real numbers. He claims that Proposition 1 can be easily adapted to multi-dimensional strategy sets.
Kukushkin (2004) studied one-dimensional AGs whose strategy sets are finite subsets of <math>\mathbb{R}</math>. He proved that if one of three single-crossing conditions is satisfied, then every best-response improvement path leads to a NE.
Dubey, Haimanko and Zapechelnyuk (2006) studied one-dimensional AGs with quasiconcave utilities. They showed that the better-reply dynamics converges globally to a Nash equilibrium if actions are either strategic substitutes or strategic complements for all players around each asymptotically stable equilibrium. In contrast, if the derivatives of the best-reply functions have different signs, then the better-reply dynamics might not converge even with two players. They assumed that the deviating player, as well as the improving strategy he deviates to, are determined randomly.
Cornes and Hartley (2012) assume that each player can choose one of two actions "0" or "1" (they say their results are easily generalizable to a finite set of actions). The utility of each player depends only on his own action and some global summarization function, which is allowed to be asymmetric and non-linear (they use the term summarization game instead of aggregative game). They also assume that (a) each player has a bounded influence on the summarization function; (b) the utility functions are continuous and have bounded derivatives; (c) the summarization function and payoff functions are computable in unit time. Under these assumptions, they present polytime algorithms for computing and learning approximate Nash equilibria. They also compute the rates of convergence as a function of the influence bound, the derivatives of utility functions, and the population size.
Babichenko (2013) assumes that each player can choose any real number in [0,1]. The utility of each player depends only on his own action and some aggregator function (the same for all players). He assumes that the aggregator function is Lipschitz continuous w.r.t. the individual actions, and each utility function is Lipschitz continuous w.r.t. the aggregator, which means that each player has a bounded influence on the aggregate and on the utilities of others. He calls such a game quasi-aggregative. He presents a best-reply dynamic that converges to an approximate NE in O(n log n) steps.
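The general flavor of such best-reply dynamics can be illustrated with a toy game. The sketch below is not any of the specific dynamics from the papers above: it uses an invented Cournot-style payoff <math>u_i(s)=s_i(1-\sum_j s_j)</math> on [0,1] (which has strategic substitutes and a unique symmetric NE at 1/4 each), and a randomly chosen deviating player, echoing the random-deviator assumption mentioned in the text.

```python
import random

# Asynchronous best-reply dynamics in a toy one-dimensional aggregative
# game with strategic substitutes (illustrative payoff, not from the
# cited papers): u_i(s) = s_i * (1 - sum(s)), strategies in [0, 1].
# At each step a randomly chosen player switches to a best response.

random.seed(0)
n = 3

def best_response(i, s):
    """argmax over [0,1] of s_i * (1 - s_i - rest)."""
    rest = sum(s) - s[i]
    return min(1.0, max(0.0, (1.0 - rest) / 2.0))

s = [random.random() for _ in range(n)]
for _ in range(500):
    i = random.randrange(n)        # random deviating player
    s[i] = best_response(i, s)

print([round(x, 4) for x in s])    # approaches the symmetric NE: 1/4 each
```

This game is a potential game with a strictly concave potential, which is why the asynchronous dynamics settles at the unique equilibrium.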
Siddharth Barman and Katrina Ligett (2015) presented a polytime algorithm for computing an approximate correlated equilibrium with near-optimal welfare in aggregative games.
One-dimensional strategy sets, multi-dimensional aggregators
Cummings, Kearns, Roth and Wu (2015, 2020) study network AGs --- a variant in which the utility of each agent depends on his own strategy and on an aggregate function of the strategies of his neighbors in the network. In the earlier version they present a distributed algorithm for multi-dimensional AGs and show that it has linear convergence; however, it converges to an approximate PNE, not to the exact PNE.
Deng and Nian (2018) study AGs over weight-balanced digraphs. Players have constraint sets coupled by linear constraints. Players have first-order dynamics and are influenced by exogenous disturbances. For the case of smooth utility functions, they present a distributed continuous-time algorithm for finding the variational generalized NE. Their algorithm uses gradient descent, an internal model and dynamic average consensus. They prove that their algorithm converges exponentially to the variational generalized NE. Later, Deng and Nian (2020) study non-smooth utility functions. They present another continuous-time algorithm that converges asymptotically, but without guarantees on its convergence rate. Deng (2022) extends these results to players with second-order dynamics, but no constraints on the strategy sets. He presents two continuous-time algorithms with exponential convergence rates.
Wang, Chen, Chen, Lian and Wang extend the above works to weight-unbalanced digraphs; they, too, develop continuous-time algorithms with exponential convergence.
Li, Li, Rang, Zheng and Huang (2025) survey distributed algorithms for aggregative games with multi-dimensional strategy sets. Importantly, they assume that there is a fixed undirected network that describes the possible connections between players, and each player can communicate only with its neighbors in the network.
Wang and Nedic (2024) present a distributed and privacy-preserving algorithm for computing a NE in an aggregative game over a network. In their model, the strategy sets are multi-dimensional: compact and convex subsets of <math>\mathbb{R}^d</math>. The aggregator function is the arithmetic mean (or the sum) of all strategy vectors. The utility functions are continuously differentiable and concave in the player's own action (equivalently, the cost is a convex function of the player's action). Their algorithm guarantees both a finite "privacy budget" and accurate convergence.
Mean field games
A mean field game can be seen as the limit of an aggregative game when the number of players goes to infinity. Thus, the effect of each player on the aggregate is negligible.
Acemoglu and Jensen (2010) generalized aggregative games to allow for infinitely many players, in which case the aggregator will typically be an integral rather than a linear sum.
Grammatico, Parise, Colombino and Lygeros (2016) present distributed algorithms for computing PNE in mean-field games. Their main contribution over previous work is that they allow agents to have different constraints (as long as the constraint set of each agent is convex). They present a decentralized algorithm for computing a mean field PNE.