Continuous game
A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.
In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium will be guaranteed; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.
## Formal definition

Define the n-player continuous game <math> G = (P, \mathbf{C}, \mathbf{U}) </math> where
::<math>P = \{1, 2, 3, \ldots, n\}</math> is the set of <math>n</math> players,
::<math>\mathbf{C} = (C_1, C_2, \ldots, C_n)</math>, where each <math>C_i</math> is a compact set, in a metric space, corresponding to the <math>i</math>th player's set of pure strategies,
::<math>\mathbf{U} = (u_1, u_2, \ldots, u_n)</math>, where <math>u_i:\mathbf{C}\to \R</math> is the utility function of player <math>i</math>.

:We define <math>\Delta_i</math> to be the set of Borel probability measures on <math>C_i</math>, giving us the mixed strategy space of player <math>i</math>.
:Define the strategy profile <math>\boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_n)</math> where <math>\sigma_i \in \Delta_i</math>.

Let <math>\boldsymbol{\sigma}_{-i}</math> be a strategy profile of all players except for player <math>i</math>. As with discrete games, we can define a best response correspondence for player <math>i</math>, <math>b_i</math>. <math>b_i</math> is a relation from the set of all probability distributions over opponent player profiles to a set of player <math>i</math>'s strategies, such that each element of
:<math>b_i(\sigma_{-i})\, </math> is a best response to <math>\sigma_{-i}</math>. Define
:<math>\mathbf{b}(\boldsymbol{\sigma}) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n})</math>.

A strategy profile <math>\boldsymbol{\sigma}^*</math> is a [[Nash equilibrium]] if and only if <math>\boldsymbol{\sigma}^* \in \mathbf{b}(\boldsymbol{\sigma}^*)</math>.

The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proven using Irving Glicksberg's generalization of the Kakutani fixed point theorem. In general, there may not be a solution if we allow strategy spaces <math>C_i</math> which are not compact, or if we allow non-continuous utility functions.
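When the mixed strategies happen to have finite support, the expected utilities in this definition reduce to finite sums over the product of the supports. A minimal Python sketch of this special case (the names `ContinuousGame` and `expected_utility` are illustrative, not standard):

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, List, Tuple

@dataclass
class ContinuousGame:
    # utilities[i](*s) gives player i's payoff at the pure-strategy profile s
    utilities: List[Callable[..., float]]

# A finitely supported mixed strategy: list of (pure strategy, probability) pairs.
Mixed = List[Tuple[float, float]]

def expected_utility(game: ContinuousGame, i: int, profile: List[Mixed]) -> float:
    """Expected payoff of player i when every player uses a finitely
    supported mixed strategy: sum over the product of the supports."""
    total = 0.0
    for combo in product(*profile):
        strategies = [s for s, _ in combo]
        prob = 1.0
        for _, p in combo:
            prob *= p
        total += prob * game.utilities[i](*strategies)
    return total

# Example from later in the article: H(x, y) = (x - y)^2 on [0,1] x [0,1].
H = lambda x, y: (x - y) ** 2
game = ContinuousGame(utilities=[H, lambda x, y: -H(x, y)])
sigma_X = [(0.0, 0.5), (1.0, 0.5)]   # mix 0 and 1 with equal probability
sigma_Y = [(0.5, 1.0)]               # pure strategy 1/2
print(expected_utility(game, 0, [sigma_X, sigma_Y]))  # 0.25
```

This is only a sketch: general Borel mixed strategies would require integration against the measures <math>\sigma_i</math> rather than a finite sum.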
## Separable games

A separable game is a continuous game where, for any <math>i</math>, the utility function <math>u_i:\mathbf{C}\to \R</math> can be expressed in the sum-of-products form:

:<math>u_i(\mathbf{s}) = \sum_{k_1=1}^{m_1} \ldots \sum_{k_n=1}^{m_n} a_{i\, ,\, k_1\ldots k_n} f_{1,k_1}(s_1)\ldots f_{n,k_n}(s_n)</math>,

where <math>\mathbf{s} \in \mathbf{C}</math>, <math>s_i \in C_i</math>, <math>a_{i\, ,\, k_1\ldots k_n} \in \R</math>, and the functions <math>f_{i\, ,\, k}:C_i \to \R</math> are continuous.

A polynomial game is a separable game where each <math>C_i</math> is a compact interval on <math>\R</math> and each utility function can be written as a multivariate polynomial.
In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

:For any separable game there exists at least one Nash equilibrium where player <math>i</math> mixes at most <math>m_i+1</math> pure strategies.

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.
## Examples

### Separable games

#### A polynomial game

Consider a zero-sum 2-player game between players X and Y, with <math>C_X = C_Y = \left [0,1 \right ] </math>. Denote elements of <math>C_X</math> and <math>C_Y</math> as <math>x</math> and <math>y</math> respectively. Define the utility functions <math>H(x,y) = u_x(x,y) = -u_y(x,y)</math> where
:<math>H(x,y)=(x-y)^2\, </math>.
The pure strategy best response relations are:
:<math>b_X(y) = \begin{cases} 1, & \mbox{if }y \in \left [0,1/2 \right ) \\ 0\text{ or }1, & \mbox{if }y = 1/2 \\ 0, & \mbox{if } y \in \left (1/2,1 \right ]
\end{cases}</math>
:<math>b_Y(x) = x\, </math>
<math>b_X(y)</math> and <math>b_Y(x)</math> do not intersect, so there is no pure strategy Nash equilibrium. However, since the strategy sets are compact and <math>H</math> is continuous, a mixed strategy equilibrium must exist. To find it, express the expected value, <math> v = \mathbb{E} [H(x,y)]</math>, as a linear combination of the first and second moments of the probability distributions of X and Y:
: <math> v = \mu_{X2} - 2\mu_{X1} \mu_{Y1} + \mu_{Y2}\, </math>
(where <math>\mu_{XN} = \mathbb{E} [x^N]</math> and similarly for Y).
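The moment expression for <math>v</math> can be checked numerically. A hedged sketch, assuming an arbitrary pair of independent mixed strategies (the particular sample distributions are illustrative):

```python
import random

random.seed(0)
n = 200000
# Arbitrary independent strategies on [0, 1] for X and Y.
xs = [random.random() ** 2 for _ in range(n)]  # some non-uniform distribution
ys = [random.random() for _ in range(n)]       # uniform distribution

# Sample moments mu_{iN} = E[s^N].
mu = lambda samples, k: sum(s ** k for s in samples) / len(samples)
mu_X1, mu_X2 = mu(xs, 1), mu(xs, 2)
mu_Y1, mu_Y2 = mu(ys, 1), mu(ys, 2)

# Direct Monte Carlo estimate of E[(x - y)^2] versus the moment formula
# v = mu_X2 - 2 mu_X1 mu_Y1 + mu_Y2 from the article.
direct = sum((x - y) ** 2 for x, y in zip(xs, ys)) / n
via_moments = mu_X2 - 2 * mu_X1 * mu_Y1 + mu_Y2
print(direct, via_moments)  # the two estimates agree up to sampling noise
```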
The constraints on <math>\mu_{X1}</math> and <math>\mu_{X2}</math> (with similar constraints for <math>y</math>) are given by Hausdorff as:
: <math> \begin{align} \mu_{X1} \ge \mu_{X2} \\ \mu_{X1}^2 \le \mu_{X2} \end{align} \qquad \begin{align} \mu_{Y1} \ge \mu_{Y2} \\ \mu_{Y1}^2 \le \mu_{Y2} \end{align} </math>
Each pair of constraints defines a compact convex subset in the plane. Since <math>v\, </math> is linear, any extrema with respect to a player's first two moments will lie on the boundary of this subset. Player i's equilibrium strategy will lie on
: <math>\mu_{i1} = \mu_{i2} \text{ or } \mu_{i1}^2 = \mu_{i2} </math>
Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, if the best response at a certain point for player <math>i</math> lies on <math>\mu_{i1} = \mu_{i2}</math>, it will lie on the whole line, so that both 0 and 1 are a best response. <math>b_Y(\mu_{X1},\mu_{X2})</math> simply gives the pure strategy <math>y = \mu_{X1}</math>, so <math>b_Y</math> will never give both 0 and 1. However, <math>b_X</math> gives both 0 and 1 when <math>y = 1/2</math>. A Nash equilibrium exists when:
: <math> (\mu_{X1}, \mu_{X2}, \mu_{Y1}, \mu_{Y2}) = (1/2, 1/2, 1/2, 1/4)\, </math>
This determines the unique equilibrium: Player X plays a mixed strategy, choosing 0 half of the time and 1 the other half, while Player Y plays the pure strategy 1/2. The value of the game is 1/4.
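This equilibrium can be verified directly: against Y's pure strategy 1/2, no pure <math>x</math> earns X more than 1/4, and against X's equal mix of 0 and 1, Y's expected loss is minimized at <math>y = 1/2</math>. A minimal Python sketch checking both conditions on a grid (a finite approximation, not part of the original analysis):

```python
# X maximizes H, Y minimizes H, with H(x, y) = (x - y)^2 on [0, 1] x [0, 1].
H = lambda x, y: (x - y) ** 2
grid = [k / 100 for k in range(101)]

# Against Y's pure strategy 1/2, no pure x exceeds the equilibrium payoff 1/4.
assert all(H(x, 0.5) <= 0.25 + 1e-12 for x in grid)

# Against X's equal mix of 0 and 1, Y's expected loss is minimized at y = 1/2.
loss = lambda y: 0.5 * H(0.0, y) + 0.5 * H(1.0, y)
best_y = min(grid, key=loss)
print(best_y, loss(best_y))  # 0.5 0.25
```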
### Non-separable games

#### A rational payoff function

Consider a zero-sum 2-player game between players X and Y, with <math>C_X = C_Y = \left [0,1 \right ] </math>. Denote elements of <math>C_X</math> and <math>C_Y</math> as <math>x</math> and <math>y</math> respectively. Define the utility functions <math>H(x,y) = u_x(x,y) = -u_y(x,y)</math> where
:<math>H(x,y)=\frac{(1+x)(1+y)(1-xy)}{(1+xy)^2}. </math>
This game has no pure strategy Nash equilibrium. It can be shown that a unique mixed strategy Nash equilibrium exists with the following pair of cumulative distribution functions:
: <math>F^*(x) = \frac{4}{\pi} \arctan{\sqrt{x}} \qquad G^*(y) = \frac{4}{\pi} \arctan{\sqrt{y}}. </math>
Or, equivalently, the following pair of probability density functions:
: <math>f^*(x) = \frac{2}{\pi \sqrt{x} (1+x)} \qquad g^*(y) = \frac{2}{\pi \sqrt{y} (1+y)}. </math>
The value of the game is <math>4/\pi</math>.
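The stated value can be checked by Monte Carlo: inverting the CDF <math>F^*(x) = \tfrac{4}{\pi}\arctan\sqrt{x}</math> gives <math>x = \tan^2(\pi u/4)</math> for <math>u</math> uniform on <math>[0,1]</math>, so we can sample both players from the equilibrium distributions and average the payoff. A hedged numerical sketch (the estimate is stochastic, not an exact computation):

```python
import math
import random

random.seed(1)
H = lambda x, y: (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

# Inverse-CDF sampling from F*(x) = (4/pi) arctan(sqrt(x)).
draw = lambda: math.tan(math.pi * random.random() / 4) ** 2

n = 200000
estimate = sum(H(draw(), draw()) for _ in range(n)) / n
print(estimate, 4 / math.pi)  # both close to 1.2732
```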