
Cheap Talk Models

Introduction

'Cheap Talk' models are a type of signaling model with the distinguishing feature that signals are (practically) costless to send. The credibility of the signal stems from the fact that both the informed party (sender) and the uninformed party (receiver) have an incentive to co-ordinate their actions, so using a costless signal may be preferred to using a costly signal as in the Spence_Signaling_Model. As with all signaling models, the 'receiver' takes different actions depending on the type of 'sender'. However, 'Cheap Talk' models differ from costly signaling models in that 'senders' of different types have different preferences over the actions of the 'receiver'. First I will present a simple example of the 'crew training' model, then the more general Crawford-Sobel model.

Example: Crew Training

To take out an 'eight' requires 8 rowers and a cox. On the morning of training, each of the 9 individuals must make a decision $$ a_i $$ : to show up ( $$ S $$ ) or not to show up ( $$ NS $$ ). The cost of training $$ c_i $$ is private information, drawn independently for each individual from the uniform distribution on $$ [0, 1+\varepsilon] $$ . If all 9 show up then the outing can go ahead and each participant gets utility 1 minus the cost of training. However, if just one person fails to show then the eight cannot go on the river, and all those who have turned up get utility of 0 but still incur the cost $$ c_i $$ . A person who didn't show gets a utility of 0 and does not incur the cost of training. Formally, individual utility functions are given by:

$$ \[ U_i(a_1, ... ,a_9) := \left\{\begin{array}{l l} 1-c_i & \textrm{if } a_j=S \textrm{ } \forall j, \\ -c_i & \textrm{if } a_i=S \textrm{ and } \exists j : a_j=NS, \\ 0 & \textrm{if } a_i=NS . \end{array}\right. \] $$

Equilibrium Without Signals

The only Nash equilibrium in the no-signal 'crew training' game is for all participants to choose $$ NS $$ . To see this, let $$ \pi $$ be the equilibrium probability that an individual shows up for training. Then $$ E[U_i | a_i=S] = \pi^8 - c_i $$ , since the outing goes ahead only if all eight others show up, so choosing $$ S $$ is optimal iff $$ c_i \le \pi^8 $$ .

However, in order for $$ \pi $$ to be an equilibrium value it must also be that $$ \pi $$ is the probability that $$ c_i \le \pi^8 $$ , giving $$ \pi = \frac{\pi^8}{1+\varepsilon} $$ . The only solution in $$ [0,1] $$ is $$ \pi = 0 $$ , since any $$ \pi > 0 $$ would require $$ \pi^7 = 1+\varepsilon > 1 $$ . In equilibrium, therefore, the probability that an individual plays $$ a_i = S $$ is 0.
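The fixed-point argument can be checked numerically. Below is a minimal sketch added for illustration (the value of $$ \varepsilon $$ is an arbitrary assumption, not taken from the text): it scans a grid of candidate values of $$ \pi $$ and confirms that only $$ \pi = 0 $$ satisfies $$ \pi = \frac{\pi^8}{1+\varepsilon} $$ .

```python
# Minimal sketch: scan candidate values of pi and keep those satisfying
# pi = pi^8 / (1 + eps). Only pi = 0 survives, since any pi > 0 would need
# pi^7 = 1 + eps > 1, which is impossible for pi in [0, 1].
eps = 0.05  # illustrative value of epsilon (an assumption)

def gap(pi: float) -> float:
    """Difference between pi and the best-response probability P(c_i <= pi^8)."""
    return pi - pi ** 8 / (1 + eps)

candidates = [i / 1000 for i in range(1001)]
print([pi for pi in candidates if abs(gap(pi)) < 1e-12])  # -> [0.0]
```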

Equilibrium With Signals

If a (virtually) costless signal is introduced then this becomes a two-stage game. In the first stage, individuals send a signal $$ n_i(c_i) $$ (for example via email) indicating which action they intend to take. An individual's equilibrium strategy is to signal $$ n_i = S $$ if $$ c_i < 1 $$ and $$ n_i = NS $$ otherwise; signalling truthfully is optimal because an individual with $$ c_i < 1 $$ gains from a full outing, whereas one with $$ c_i > 1 $$ would lose even if everyone showed up. In the second stage, individuals choose their actions $$ a_i(n_{-i}) $$ depending on the signals sent by all other participants. If all individuals signaled $$ n_i = S $$ in the first stage then everyone plays $$ a_i = S $$ and all turn up for training. If one or more individuals signaled $$ n_i = NS $$ in the first stage then no-one turns up, as all play $$ a_i = NS $$ .
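As a rough illustration of the two-stage game, the following simulation sketch is added here (not part of the original text; the value of $$ \varepsilon $$ and the random seed are arbitrary assumptions). Each individual signals $$ S $$ iff $$ c_i < 1 $$ , and everyone shows up only if all nine signalled $$ S $$ .

```python
# Simulation sketch of the two-stage 'crew training' game with a costless signal.
import random

EPS = 0.05      # assumed value of epsilon
N_CREW = 9      # 8 rowers plus a cox

def play_round(rng: random.Random) -> list[float]:
    """Draw private costs, send signals, choose actions; return realised utilities."""
    costs = [rng.uniform(0.0, 1.0 + EPS) for _ in range(N_CREW)]
    signals = [c < 1.0 for c in costs]          # stage 1: signal S iff c_i < 1
    all_signalled_s = all(signals)
    actions = [all_signalled_s] * N_CREW        # stage 2: show up only if all signalled S
    if all(actions):
        return [1.0 - c for c in costs]         # outing goes ahead
    return [(-c if showed else 0.0) for showed, c in zip(actions, costs)]

rng = random.Random(0)
print(play_round(rng))
```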

However, there also exists a babbling equilibrium in which the signals sent in the first stage are ignored; because everyone expects them to be ignored, the signals carry no information, and the game reverts to the 'inefficient' $$ NS $$ Nash equilibrium of the no-signal problem even if all individuals actually signal $$ S $$ .

The Crawford-Sobel Model

There are two agents: the informed sender $$ S $$ and the uninformed receiver $$ R $$ . $$ S $$ observes the state of the world $$ t \in [0,1] $$ , for example what type of agent they are, then sends $$ R $$ a signal $$ n \in [0,1] $$ . After receiving this signal, $$ R $$ updates his beliefs about the state of the world, summarised by the conditional (posterior) density $$ \gamma (t|n) $$ , and then takes some action $$ a(n) $$ which affects the payoffs of both $$ S $$ and $$ R $$:

  • $$ U_R(t,a) = -(a-t)^2 $$ $$ \Rightarrow $$ Concave in $$ a $$ with a maximum $$ U_R^{max} $$ where $$ a = t $$ .

  • $$ U_S(t,a) = -(a-t-b)^2 $$ $$ \Rightarrow $$ Concave in $$ a $$ with a maximum $$ U_S^{max} $$ where $$ a = t+b $$ , where $$ b $$ is a constant measuring how far apart the preferences of $$ S $$ and $$ R $$ are (higher $$ b $$ means more divergent objectives); a brief numerical illustration follows this list.
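As a numerical illustration of these payoffs (a sketch added to this page; the example beliefs and grid values are assumptions, not part of the model statement), the quadratic loss form means that whatever beliefs $$ R $$ holds about $$ t $$ , his expected payoff is maximised by setting $$ a $$ equal to the mean of those beliefs:

```python
# Sketch: quadratic payoffs of receiver and sender, plus a numerical check that the
# receiver's best action under given beliefs is the mean of t under those beliefs.
import numpy as np

def u_receiver(t, a):
    return -(a - t) ** 2

def u_sender(t, a, b):
    return -(a - t - b) ** 2

# Example beliefs: t uniform on [0, 0.5] (as after signal n*_1 when t_1 = 0.5).
t_grid = np.linspace(0.0, 0.5, 2001)
a_grid = np.linspace(0.0, 1.0, 2001)
expected_u = [np.mean(u_receiver(t_grid, a)) for a in a_grid]
best_a = a_grid[int(np.argmax(expected_u))]
print(best_a, t_grid.mean())  # both approximately 0.25 = E[t | n*_1]
```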

Solving

As with the Spence model, the equilibrium concept used is a Perfect Bayesian Equilibrium, consisting of a signal strategy $$ n*(t) $$ for the sender, an action strategy $$ a*(n) $$ for the receiver and a belief system $$ \gamma* (t|n) $$ such that:

  1. $$ n*(t) \in \arg \max_n U_S(t,a*(n)) $$ .
  2. $$ a*(n) \in \arg \max_a \int U_R(t,a)\gamma*(t|n)dt $$ .
  3. $$ \gamma* (t|n) $$ represents the correct belief about $$ t $$ given $$ n*(t) $$ , updated by Bayes' rule where possible.

This gives rise to partition equilibria in which the 'type' space $$ t \in [0,1] $$ is divided into $$ p $$ subdivisions, where $$ p $$ can take any value from 1 up to a maximum number of partitions $$ p*(b) $$ derived below. The signal $$ n $$ sent by the sender depends only on which subdivision $$ t $$ falls into.

** Diagram **

$$ p=1 $$ is the 'babbling' equilibrium, which always exists. Here the signal $$ n $$ reveals only that $$ t \in [0,1] $$, so $$ R $$ gains no new information from it and simply plays $$ a = E[t] $$ (equal to $$ \frac{1}{2} $$ under the uniform prior assumed here).

If $$ p=2 $$ then there exists a boundary point $$ t_1 $$ creating two subdivisions $$ \{[0,t_1] , [t_1,1]\} $$ such that:

$$ \[ n*(t) := \left\{\begin{array}{l l} n*_1 & \textrm{if } t \in [0,t_1], \\ n*_2 & \textrm{if } t \in [t_1,1] . \end{array}\right. \] $$

The receiver's beliefs after observing the signal are given by:

$$ \[ \gamma*(t|n*) := \left\{\begin{array}{l l} \frac{1}{t_1} & \textrm{if } n*=n*_1 \textrm{ and } t<t_1, \\ \frac{1}{1-t_1} & \textrm{if } n*=n*_2 \textrm{ and } t>t_1, \\ 0 & \textrm{otherwise} . \end{array}\right. \] $$

$$ R $$ will select the action which maximises their expected utility given the signal sent by $$ S $$ (condition (2) of a PBE). Since $$ U_R = -(a-t)^2 $$ is a quadratic loss, the first-order condition for $$ \max_a E[U_R|n] $$ gives $$ a = E[t|n] $$ , the posterior mean of $$ t $$ (equal to $$ t $$ itself only when the signal fully reveals it). This gives:

$$ \[ a*(n*) := \left\{\begin{array}{l l} \frac{t_1}{2} & \textrm{if } n*=n*_1, \\ \frac{1+t_1}{2} & \textrm{if } n*=n*_2 . \end{array}\right. \] $$

It is now necessary to check that the proposed signal strategy $$ n*(t) $$ is optimal for $$ S $$ given $$ R's $$ action strategy $$ a*(n*) $$ . An agent of type $$ t_1 $$ must be indifferent between sending $$ n*=n*_1 $$ and $$ n*=n*_2 $$, thus $$ U_S(t_1, a*(n*_1)) = -\left(\frac{t_1}{2}-t_1-b \right)^2 = -\left(\frac{1+t_1}{2}-t_1-b \right)^2 = U_S(t_1, a*(n*_2)) $$ . Solving for $$ t_1 $$ gives $$ t_1 = \frac{1}{2}-2b $$ , which defines a partition equilibrium iff $$ b<\frac{1}{4} $$ (so that $$ t_1 > 0 $$), i.e. the interests of $$ S $$ and $$ R $$ must be sufficiently well aligned for a 'non-babbling' equilibrium (where both parties wish to co-ordinate their actions via a costless signal) to exist.
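The indifference condition can also be verified symbolically; the snippet below is a small sketch using sympy (the library choice and variable names are assumptions of this page, not part of the model):

```python
# Sketch: solve the type-t_1 indifference condition for the two-partition boundary.
import sympy as sp

t1, b = sp.symbols("t1 b", real=True)
lhs = -(t1 / 2 - t1 - b) ** 2            # payoff of type t_1 from inducing a = t_1/2
rhs = -((1 + t1) / 2 - t1 - b) ** 2      # payoff of type t_1 from inducing a = (1+t_1)/2
print(sp.solve(sp.Eq(lhs, rhs), t1))     # -> [1/2 - 2*b]; an equilibrium only if b < 1/4
```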

Generally

More generally, an agent on the boundary between the $$ i^{th} $$ and the $$ (i+1)^{th} $$ subdivision must be indifferent between sending either signal, such that $$ -\left(\frac{t_i+t_{i+1}}{2}-t_i-b \right)^2 = -\left(\frac{t_{i-1}+t_i}{2}-t_i-b \right)^2 $$ . This gives the difference equation $$ t_{i+1} = 2t_i-t_{i-1}+4b $$ , so setting $$ t_0 = 0 $$ gives the solution $$ t_i = it_1+2i(i-1)b $$ , with $$ t_1 $$ chosen to ensure that $$ t_p = 1 $$ . A positive $$ t_1 $$ exists only if $$ 2p(p-1)b < 1 $$ , which determines the maximum number of partitions $$ p*(b) $$ such that $$ p \le p*(b) $$ . $$ p*(b) $$ is decreasing in $$ b $$ , so as the interests of $$ S $$ and $$ R $$ become better aligned (falling $$ b $$) $$ S $$ can send $$ R $$ a signal which reveals their type $$ t $$ to a greater degree of accuracy (larger $$ p $$). The equilibrium where $$ p = p*(b) $$ Pareto-dominates all others.
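A short computational sketch of this construction (the helper names and the value of $$ b $$ are assumptions for illustration): it finds $$ p*(b) $$ as the largest $$ p $$ with $$ 2p(p-1)b < 1 $$ and then computes the boundaries $$ t_i = it_1+2i(i-1)b $$ with $$ t_1 $$ chosen so that $$ t_p = 1 $$ .

```python
# Sketch: maximum number of partitions and the corresponding boundary points.

def max_partitions(b: float) -> int:
    """Largest p with 2*p*(p-1)*b < 1 (p*(b) in the text); requires b > 0 to terminate."""
    p = 1
    while 2 * (p + 1) * p * b < 1:
        p += 1
    return p

def partition_boundaries(b: float, p: int) -> list[float]:
    """Boundaries t_0, ..., t_p of a p-partition equilibrium, with t_p = 1."""
    t1 = (1 - 2 * p * (p - 1) * b) / p
    return [i * t1 + 2 * i * (i - 1) * b for i in range(p + 1)]

b = 0.02  # assumed illustrative bias
p_star = max_partitions(b)
print(p_star, partition_boundaries(b, p_star))
# -> 5 and boundaries approximately [0, 0.04, 0.16, 0.36, 0.64, 1.0]
```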

Finally the 'out-of-equilibrium' beliefs must be specified, i.e. what $$ R $$ will infer about $$ t $$ if they receive a signal $$ n' $$ which is not an equilibrium strategy for $$ S $$:

$$ \[ \gamma*(t|n') := \left\{\begin{array}{l l} \frac{1}{t_i-t_{i-1}} & \textrm{if } n' \in [t_{i-1},t_i) \textrm{ and } t \in [t_{i-1},t_i), \\ 0 & \textrm{otherwise} . \end{array}\right. \] $$

i.e. $$ R $$ treats a deviant signal as if it had been sent from the subdivision in which it falls.

Applications and Extensions

ToDo

  • Insert partition equilibrium diagram