Game Theory - Atlas of Economic Models

The Basics of Game Theory

Introduction

The definitions and discussion in this section are (at least for the present) kept fairly informal, as the purpose is to provide a quick reference for the basic concepts rather than a formally rigorous presentation of Game Theory.

Definitions

Definition (Information Set): An information set is a collection of nodes such that the same player moves at every node in the collection and that player does not know which node in the collection they are actually at.

Definition (Perfect and Imperfect Information): A game is one of perfect information if each information set contains a single node, otherwise it is a game of imperfect information.

Definition (Dominant Strategy): A strategy is dominant for a player if it performs at least as well as any other available strategy against any combination of the opponent(s)' strategies.

Definition (Dominated Strategy): A strategy is weakly dominated if there exists another strategy that does at least as well against every combination of strategies chosen by the other players AND does strictly better against at least one such combination. A strictly dominated strategy is defined similarly, except that the alternative strategy must do strictly better against every combination of opponent strategies.
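To make the definition concrete, here is a minimal Python sketch (an illustration, not from the original text) that checks pure-strategy dominance for the row player of a two-player game; the payoff matrix and function names are purely illustrative.

```python
# A minimal sketch: pure-strategy dominance for the row player of a two-player
# game. Rows index the row player's strategies, columns index the opponent's
# strategies, and entries are the row player's payoffs.

def weakly_dominates(payoffs, a, b):
    """True if row strategy a weakly dominates row strategy b: at least as good
    against every opponent strategy, strictly better against at least one."""
    cols = range(len(payoffs[a]))
    at_least_as_good = all(payoffs[a][j] >= payoffs[b][j] for j in cols)
    strictly_better_somewhere = any(payoffs[a][j] > payoffs[b][j] for j in cols)
    return at_least_as_good and strictly_better_somewhere

def strictly_dominates(payoffs, a, b):
    """True if row strategy a strictly dominates row strategy b."""
    return all(payoffs[a][j] > payoffs[b][j] for j in range(len(payoffs[a])))

# Prisoner's Dilemma payoffs for the row player (0 = cooperate, 1 = defect).
pd = [[3, 0],
      [5, 1]]
print(weakly_dominates(pd, 1, 0))    # True: defect weakly dominates cooperate
print(strictly_dominates(pd, 1, 0))  # True: here the domination is also strict
```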

Definition (Complete Information): A game has complete information if all information, both about the other players and about payoffs, is common knowledge. Otherwise the game is one of incomplete information.

Remark: any game of incomplete information can be reformulated as one of imperfect information by adding an extra player, 'Nature', who moves first and determines (according to the common prior probability distribution) the 'preference type' of each of the other players, with each player's type known only to that player.

Evolutionary Game Theory

An excellent, and more lengthy, introduction to this subject can be found freely online at http://plato.stanford.edu/entries/game-evolutionary

Consider a population playing some kind of game with payoff function $$\pi$$, where $$\pi(x,y)$$ denotes the payoff to a player using strategy $$x$$ against an opponent using strategy $$y$$.

Informally: a strategy $$s$$ is evolutionarily stable (ES) if a population playing $$s$$ is robust to invasion by any 'mutant' strategy $$d$$.

More formally: suppose a small proportion $$p$$ of the population plays a mutant strategy $$d$$ while the remaining $$1-p$$ plays $$s$$. Then $$s$$ is ES if, for all sufficiently small $$p$$, the expected payoff of playing $$s$$ is strictly greater than that of playing $$d$$ (a small sketch of this check appears after the list below). This occurs if:

  • EITHER: $$\pi(s,s) > \pi(d,s)$$ (remember $$p$$ is very small) -- $$s$$ does strictly better against 'another' $$s$$ than $$d$$ does.

  • OR: $$\pi(s,s) = \pi(d,s)$$ AND $$\pi(s,d) > \pi(d,d)$$ -- $$d$$ does as well as $$s$$ against $$s$$, but $$s$$ does better against $$d$$ than $$d$$ does.
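Given a payoff matrix, these two conditions are easy to check directly. The following minimal Python sketch (an illustration, not from the original text) does so for a symmetric two-player game, using the Hawk-Dove game with V = 2 and C = 4 as a hypothetical example.

```python
# A minimal sketch, assuming a symmetric two-player game with pure strategies
# and pi[x][y] = payoff to a player using x against an opponent using y.

def is_ess(pi, s, strategies):
    """Check the two conditions above for strategy s against every mutant d."""
    for d in strategies:
        if d == s:
            continue
        if pi[s][s] > pi[d][s]:
            continue                                  # first condition holds
        if pi[s][s] == pi[d][s] and pi[s][d] > pi[d][d]:
            continue                                  # second condition holds
        return False                                  # mutant d can invade
    return True

# Illustrative Hawk-Dove payoffs (V = 2, C = 4): neither pure strategy is ES.
HAWK, DOVE = 0, 1
payoffs = [[-1, 2],   # hawk against (hawk, dove)
           [0, 1]]    # dove against (hawk, dove)
print(is_ess(payoffs, HAWK, [HAWK, DOVE]))  # False -- doves can invade
print(is_ess(payoffs, DOVE, [HAWK, DOVE]))  # False -- hawks can invade
```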

Replicator Dynamics

It is also natural to create an actual dynamical system in which the proportion of a given strategy in the population evolves in relation to the relative fitness (payoffs) of those strategies. Consider the 2 strategy case just discussed (generalization to $$n$$ strategies is trivial). Let $$p_{s}, p_{d}$$ indicate the respective proportions of a given strategy in the (fixed) population. Let $$\Pi_{x}(p_{s},p_{d})$$ indicate the expected payoff (fitness) of strategy $$x$$ given these proportions and that there is an equal probability of meeting any other member of the population. Define $$\bar{\Pi}$$ to be the average fitness. Then, one can define a dynamical system by making the change in population proportions equal to the product of the current population share times the fitness advantage (or disadvantage):

$$ \frac{d p_{x}}{d t} = \frac{p_{x} (\Pi_{x} - \bar{\Pi})}{\bar{\Pi}} $$
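The following minimal Python sketch simulates this equation numerically with simple Euler steps. The payoff matrix (Hawk-Dove with V = 2, C = 4, shifted by +2 so that average fitness stays positive, since the equation divides by it), the step size and the number of steps are illustrative choices.

```python
# A minimal numerical sketch of the replicator equation above, using Euler steps.

def replicator_step(p_s, payoffs, dt=0.01):
    """One Euler step for the share p_s of strategy s (strategy d has share 1 - p_s)."""
    p_d = 1.0 - p_s
    fit_s = p_s * payoffs[0][0] + p_d * payoffs[0][1]   # expected payoff of s
    fit_d = p_s * payoffs[1][0] + p_d * payoffs[1][1]   # expected payoff of d
    avg = p_s * fit_s + p_d * fit_d                     # average fitness
    return p_s + dt * p_s * (fit_s - avg) / avg

payoffs = [[1, 4],    # hawk against (hawk, dove), shifted Hawk-Dove payoffs
           [2, 3]]    # dove against (hawk, dove)
p = 0.9               # start with 90% hawks
for _ in range(5000):
    p = replicator_step(p, payoffs)
print(round(p, 3))    # approaches the mixed equilibrium share of hawks, 0.5
```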

Remark: with just 2 strategies one can set up a correspondence between steady-states of the replicator dynamics and evolutionarily stable strategies. However, this correspondence can break down with more than 2 strategies.

More General Dynamics

The Replicator setup assumes that each member of the population has an equal probability of meeting any other. It also assumes a specific (linear) form for the dynamical system. Relaxing either of these can lead to more complex situations.

For example consider the following discrete time dynamics (this follows Nowak and May 1992):

  • Take the classic Prisoner's Dilemma (PD) game
  • Play it on a 2-d lattice where each player plays against their (8) nearest neighbours
  • At the end of each round a player adopts the strategy of their most successful neighbour (a rough sketch of these rules is given below)
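A rough Python sketch of these rules follows; the periodic boundary, synchronous updating, payoff values (R = 1, S = P = 0, T = 1.85), grid size and number of rounds are illustrative assumptions rather than the paper's exact setup.

```python
# A rough sketch of the lattice dynamics described above, under the stated assumptions.
import random

SIZE, B, ROUNDS = 20, 1.85, 50          # grid side length, temptation payoff, rounds

def payoff(me, other):
    """PD payoff to `me`: 1 for mutual cooperation, B for defecting on a cooperator."""
    if me == 'C':
        return 1.0 if other == 'C' else 0.0
    return B if other == 'C' else 0.0

def neighbours(i, j):
    """The 8 surrounding cells, wrapping around the edges of the grid."""
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

grid = [[random.choice('CD') for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(ROUNDS):
    # Each cell's score is the sum of its payoffs against its 8 neighbours.
    score = [[sum(payoff(grid[i][j], grid[x][y]) for x, y in neighbours(i, j))
              for j in range(SIZE)] for i in range(SIZE)]
    # Each cell then adopts the strategy of its most successful neighbour,
    # keeping its own strategy when it already scores at least as well.
    new_grid = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            bx, by = max(neighbours(i, j), key=lambda n: score[n[0]][n[1]])
            if score[bx][by] > score[i][j]:
                new_grid[i][j] = grid[bx][by]
    grid = new_grid

print(sum(row.count('C') for row in grid), "cooperators left out of", SIZE * SIZE)
```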

For appropriate selections of the payoffs of the PD game one can achieve any of the following:

  • Convergence to the ES state (defect, defect)
  • Cycles in which nodes switch between defecting and cooperating
  • 'Chaotic' dynamics, i.e. even after a large number of rounds no cycle or steady-state has been reached (though in a finite-node game one is guaranteed to eventually reach a cycle or steady-state).