 | High Signal | Low Signal |
---|---|---|
$V=1$ | $p$ | $1-p$ |
$V=0$ | $1-p$ | $p$ |
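With a uniform prior $P(V=1)=\tfrac{1}{2}$ (the standard assumption in this model, not restated in this section), the first agent's posterior after seeing her own signal is simply $p$:

$$
P(V=1 \mid H) = \frac{P(H \mid V=1)\,P(V=1)}{P(H \mid V=1)\,P(V=1) + P(H \mid V=0)\,P(V=0)} = \frac{p\cdot\tfrac12}{p\cdot\tfrac12 + (1-p)\cdot\tfrac12} = p > \tfrac12,
$$

so the first agent follows her own signal.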
Each agent's action is a function of the history of previous actions $A_{i-1}$ and her own private signal $s_i$: $a_i(A_{i-1}, s_i)$.
### Simulation
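The simulation below calls an `adopt_rule` helper that is not shown in this section. The following is only a minimal sketch of a rule that is consistent with the printed output (indifference is recorded as `0.5`); the actual implementation used in the notebook may differ.

```python
def adopt_rule(nb_adopt, ii, sig, p):
    """Hypothetical decision rule for agent ii (0-indexed).

    nb_adopt : (possibly fractional) number of previous adopters
    ii       : number of predecessors observed so far
    sig      : agent's own private signal (1 = high, 0 = low)
    p        : signal precision, prob(H | V=1); unused here because the
               signals are symmetric, but kept to match the call signature

    With symmetric binary signals, the Bayesian choice reduces to comparing
    previous adopters against rejecters plus the agent's own signal
    (counted as +1/-1). A tie leaves the agent indifferent, recorded as 0.5,
    matching the 0.5 entries in the output below.
    """
    nb_reject = ii - nb_adopt                    # predecessors who did not adopt
    score = (nb_adopt - nb_reject) + (2 * sig - 1)
    if score > 0:
        return 1      # adopt
    elif score < 0:
        return 0      # reject
    else:
        return 0.5    # indifferent
```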
```python
import numpy as np

n_sim = 10           # number of simulations
p = 1/2 + 1e-9       # prob(H | V=1), just above 1/2
nb = 20              # number of decision makers

for i in range(n_sim):
    nb_adopt = 0
    adopt = []
    # draw private signals given V=1: P(high) = p, P(low) = 1-p
    sig = np.random.choice(2, nb, p=[1 - p, p])
    for ii in range(nb):
        decision = adopt_rule(nb_adopt, ii, sig[ii], p)
        adopt.append(decision)
        nb_adopt = nb_adopt + decision
    print('Total number of adoption is ' + str(nb_adopt))
    print('The sequence of decision is ' + str(adopt))
```
```
Total number of adoption is 20
The sequence of decision is [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Total number of adoption is 2.0
The sequence of decision is [0, 0.5, 0.5, 0.5, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Total number of adoption is 0
The sequence of decision is [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Total number of adoption is 17.0
The sequence of decision is [1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Total number of adoption is 20
The sequence of decision is [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Total number of adoption is 19.5
The sequence of decision is [1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Total number of adoption is 19.5
The sequence of decision is [1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Total number of adoption is 20
The sequence of decision is [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Total number of adoption is 19.5
The sequence of decision is [1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
Total number of adoption is 20
The sequence of decision is [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
After the first two individuals, the probabilities of a correct cascade, no cascade, and a wrong cascade are:

correct cascade | no cascade | wrong cascade |
---|---|---|
$\frac{p(1+p)}{2}$ | $p-p^2$ | $\frac{(p-2)(p-1)}{2}$ |
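One way to recover these values, conditioning on $V=1$ (so a correct cascade is an adoption cascade; by symmetry the unconditional probabilities are identical) and assuming ties are broken by a fair coin: the first individual follows her own signal, and the second follows hers unless it conflicts with the first action, in which case she is indifferent and flips a coin.

$$
\begin{aligned}
\Pr(\text{correct cascade}) &= p\left(p + \tfrac{1-p}{2}\right) = \tfrac{p(1+p)}{2},\\
\Pr(\text{no cascade}) &= p\cdot\tfrac{1-p}{2} + (1-p)\cdot\tfrac{p}{2} = p - p^2,\\
\Pr(\text{wrong cascade}) &= (1-p)\left((1-p) + \tfrac{p}{2}\right) = \tfrac{(p-2)(p-1)}{2},
\end{aligned}
$$

and the three terms sum to one.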
...
More generally, after the first $n$ individuals ($n$ even), the probabilities become:

correct cascade | no cascade | wrong cascade |
---|---|---|
$\frac{p(p+1)[1-(p-p^2)^{n/2}]}{2(1-p+p^2)}$ | $(p-p^2)^{n/2}$ | $\frac{(p-2)(p-1)[1-(p-p^2)^{n/2}]}{2(1-p+p^2)}$ |
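Because $0 < p - p^2 < \tfrac{1}{4}$ for $p \in (\tfrac12, 1)$, the no-cascade term $(p-p^2)^{n/2}$ vanishes as $n$ grows, so in the limit

$$
\lim_{n\to\infty}\Pr(\text{no cascade}) = 0, \qquad
\lim_{n\to\infty}\Pr(\text{correct cascade}) = \frac{p(p+1)}{2(1-p+p^2)}, \qquad
\lim_{n\to\infty}\Pr(\text{wrong cascade}) = \frac{(p-2)(p-1)}{2(1-p+p^2)},
$$

and the two cascade probabilities sum to one, which is the content of the convergence claim below.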
```python
# Plot
import numpy as np
import matplotlib.pyplot as plt

plt.figure(figsize=[5, 5])
p = np.linspace(0.5001, 0.9999, 10)
n_list = np.array([4, 6, 8])
for n in n_list:
    prob_co = p * (p + 1) * (1 - (p - p**2)**(n / 2)) / 2 / (1 - p + p**2)
    prob_wr = (p - 2) * (p - 1) * (1 - (p - p**2)**(n / 2)) / 2 / (1 - p + p**2)
    plt.plot(p, prob_co, label="Correct Cascade; n=" + str(n))
    plt.plot(p, prob_wr, '--', label="Wrong Cascade; n=" + str(n))
plt.legend(loc=0)
plt.xlabel('p')
plt.ylabel('Probability')
plt.xlim([0.5, 1])
plt.title("Probability of Cascade")
plt.show()
```
If Conditions 1 and 2 hold, then as $n \rightarrow \infty$, $\Pr(\text{an information cascade occurs}) \rightarrow 1$: an information cascade eventually begins.
Result 4. If all individuals' signals are drawn from the same distribution, then after the cascade has begun, all individuals welcome public information.
Result 5. The release of a public signal, even one noisier than individuals' private signals, can shatter a long-lasting cascade.
For example, the fact that millions of people smoke need not discourage investigation into the side effects of smoking.
Result 6. If there is a non-zero probability that public information is released before everyone has made a decision, then the population eventually settles into the correct cascade.