We have often pondered the question: when you find that you and someone else disagree, how much weight should you give to your opinion, and how much to theirs, in forming your new opinion? To explore this, I've worked out a simple math model of disagreement between two "Bayesian wannabes", i.e., agents who are trying to act like Bayesians, but know that they make mistakes, and try to adjust for this fact.

Consider two agents, A and B, having a conversation about a truth t = x_{1} + x_{2} + x_{3} + … First A sees clue x_{1}, and reports r_{1}, his estimate of truth t. Next B sees report r_{1}, and also clue x_{2}, and then reports r_{2}, his estimate of truth t. A now sees report r_{2}, a new clue x_{3} and reports r_{3}. The two of them could go back and forth like this for a long time.

If A and B were perfect Bayesians (and if each x_{i} were independently and normally distributed with zero mean and a known variance V_{i}), then we would have r_{i} = x_{i} + r_{i-1}. When combining their last two expressed opinions, each agent puts *zero* weight on his own last report, and just adds his new clue to the other agent's last report!
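One quick way to see this zero-self-weight result is by simulation: generate many conversations, then regress the truth on everything A knows at round 3. The setup below (four clues, unit variances, a least-squares fit in `numpy`) is just an illustrative sketch of the perfect-Bayesian case, not a general proof:

```python
import numpy as np

# Simulate many 4-clue conversations with unit-variance clues, then
# regress the truth t on what A knows at round 3: (x_3, r_2, r_1).
rng = np.random.default_rng(0)
n = 200_000
x1, x2, x3, x4 = rng.normal(size=(4, n))
t = x1 + x2 + x3 + x4       # the truth
r1 = x1                     # A's first report: E[t | x_1]
r2 = r1 + x2                # B's report: his clue added to A's report
X = np.column_stack([x3, r2, r1])
w, *_ = np.linalg.lstsq(X, t, rcond=None)
# w comes out near (1, 1, 0): full weight on the new clue and on the
# other agent's report, zero weight on A's own earlier report.
```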

OK, but what about imperfect agents? I assume:

- When making report r_{i}, each agent can only remember his last clue x_{i} and the two most recent reports: what he last heard, r_{i-1}, and what he last said, r_{i-2}.
- While such agents would like to compute the perfect Bayesian estimate b_{i} = E[t | x_{i}, r_{i-1}, r_{i-2}], they can only produce an approximation a_{i} = b_{i} + e_{i} (where each e_{i} is independently and normally distributed with zero mean and a known variance E_{i}).
- They know they make such mistakes, and so adjust, producing a calibrated estimate r_{i} = E[t | a_{i}].
- Everyone knows and agrees on the new info per round V_{i} and thinking noise per round E_{i}.
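To make the calibration step concrete, here is a small simulated check of the first round with V_{1} = E_{1} = 1 (my parameter choice, purely for illustration): since b_{1} = x_{1} and a_{1} = x_{1} + e_{1}, calibration shrinks a_{1} by the factor V_{1}/(V_{1} + E_{1}) = 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
x1 = rng.normal(size=n)      # A's clue, variance V_1 = 1
e1 = rng.normal(size=n)      # A's thinking noise, variance E_1 = 1
rest = rng.normal(size=n)    # stand-in for the unseen clues x_2 + x_3 + ...
t = x1 + rest                # the truth (the unseen clues don't affect the slope)
a1 = x1 + e1                 # A's noisy estimate (here b_1 = x_1)
# Best linear predictor of t from a_1 (zero means, so a slope through the origin):
shrink = np.mean(t * a1) / np.mean(a1 * a1)
# shrink comes out near V_1 / (V_1 + E_1) = 0.5
```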

Thus we should have

A reports r_{1} = E[t | e_{1} + E[t | x_{1}]]

B reports r_{2} = E[t | e_{2} + E[t | x_{2}, r_{1}]]

A reports r_{3} = E[t | e_{3} + E[t | x_{3}, r_{2}, r_{1}]]

B reports r_{4} = E[t | e_{4} + E[t | x_{4}, r_{3}, r_{2}]]

A reports r_{5} = E[t | e_{5} + E[t | x_{5}, r_{4}, r_{3}]], and so on.

These reports turn out to be linear, of the form:

r_{i} = (weight on last clue)*x_{i} + (weight on other guy)*r_{i-1} + (weight on self)*r_{i-2} + noise

I’ve made a spreadsheet calculating these weights. Here are weights after hearing ten reports, for different combinations of info and noise (assumed same across rounds and agents):

| New info | 1 | 1 | 2 | 4 | 1 | 1 |
|---|---|---|---|---|---|---|
| Think noise | 0 | 1 | 1 | 1 | 2 | 4 |
| weight on self | 0 | 0.39 | 0.3 | 0.19 | 0.4 | 0.25 |
| weight on other | 1 | 0.57 | 0.7 | 0.81 | 0.44 | 0.25 |
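For readers who want to check these numbers without the spreadsheet, here is my own sketch of how the weights can be computed. Since everything is jointly Gaussian with zero mean, every conditional expectation is a linear projection, so we can track each report as a linear combination of the underlying independent shocks (the clues x_{i} and thinking errors e_{i}). The function name and layout are mine; the model follows the assumptions above.

```python
import numpy as np

def report_weights(n_rounds, V, E):
    """Return, for each report r_i, the weights (on last clue, on the
    other's report, on one's own report), tracking every variable as a
    linear combination of the independent shocks x_1..x_n (variances V)
    and e_1..e_n (variances E)."""
    n = n_rounds
    var = np.concatenate([V, E])          # variance of each shock coordinate

    def cov(u, v):                        # covariance of two combinations
        return float(np.sum(u * v * var))

    def project(target, signals):         # E[target | signals] as weights
        G = np.array([[cov(s1, s2) for s2 in signals] for s1 in signals])
        c = np.array([cov(target, s) for s in signals])
        return np.linalg.solve(G, c)

    t = np.zeros(2 * n)
    t[:n] = 1.0                           # truth t = x_1 + ... + x_n

    reports, weights = [], []
    for i in range(n):
        x_i = np.zeros(2 * n); x_i[i] = 1.0
        sig = [x_i] + reports[max(0, i - 2):i][::-1]   # x_i, r_{i-1}, r_{i-2}
        w = project(t, sig)                            # b_i = E[t | sig]
        b = sum(wj * sj for wj, sj in zip(w, sig))
        e_i = np.zeros(2 * n); e_i[n + i] = 1.0
        a = b + e_i                                    # a_i = b_i + e_i
        shrink = project(t, [a])[0]                    # r_i = E[t | a_i]
        reports.append(shrink * a)
        ww = np.zeros(3); ww[:len(w)] = shrink * w
        weights.append(ww)                # (on clue, on other, on self)
    return weights

# Perfect Bayesians (think noise 0): zero weight on self, full weight on other.
perfect = report_weights(10, np.ones(10), np.zeros(10))[-1]
# Equal info and noise of 1: compare against the 0.39 / 0.57 column above.
noisy = report_weights(10, np.ones(10), np.ones(10))[-1]
```

Passing alternating per-round values in `V` and `E` (A's parameters on odd rounds, B's on even) would cover asymmetric cases like those in the next table as well; I'd treat exact agreement with the spreadsheet as a check on both.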

The next table considers cases where the two agents differ:

| A new info | 1 | 1 | 1 | 4 | 4 | 1 | 4 | 1 |
|---|---|---|---|---|---|---|---|---|
| A think noise | 0 | 0 | 4 | 1 | 1 | 1 | 1 | 1 |
| B new info | 1 | 0 | 4 | 4 | 1 | 1 | 1 | 4 |
| B think noise | 1 | 1 | 4 | 4 | 4 | 4 | 1 | 4 |
| A weight on A | 0.5 | 1 | 0.33 | 0.48 | 0.76 | 0.67 | 0.45 | 0.47 |
| A weight on B | 0.53 | 0 | 0.59 | 0.54 | 0.24 | 0.32 | 0.54 | 0.57 |
| B weight on B | 0 | 0 | 0.56 | 0.14 | 0.12 | 0.17 | 0.18 | 0.19 |
| B weight on A | 0.94 | 0.92 | 0.33 | 0.81 | 0.81 | 0.51 | 0.82 | 0.72 |

I see lots of interesting patterns, but I'll let commenters point them out. 🙂

