Via Tyler Cowen, we hear Robert Fogel did not trust his future selves:
When I graduated from college, I had two job offers. One was from my father, to join him in the meat-packing business. That would have been quite lucrative. The other was as an activist for a left-wing youth organization. I chose the latter and worked as an activist from 1948 to 1956. At the time I was making that decision, my father told me: "If you really believe in that cause, come work with me. You will make a much higher wage and you could give your extra income to hire several people instead of just yourself." I thought, well, that makes some sense. But I was convinced that this was a way to get me to change my views or at least lessen my commitment to an ideological cause that I found very important. Yes, the first year, I might give all of my extra money to the movement, but every year I would probably give less, and finally reach the point when I was giving nothing at all. I feared I would be co-opted. I thought this was my father’s way of indoctrinating me.
When I spent a few weeks at Oxford last summer, Toby Ord similarly said he wanted to commit his future selves to donating at least ten percent of income to third world charity; he did not trust his future selves to make that choice for themselves.
These paternalism examples are striking, because paternalism is usually justified as a response to a combination of ignorance and irrationality, yet Robert and Toby should expect their future selves to be just as smart and rational as they are, and even better informed. How can they reasonably expect their future selves to be so much more biased that force is appropriate to constrain them?
Added: Toby Ord elaborates in the comments.
Robin: I certainly see that there are biases that are more likely to affect the young than the old, but I don't see any evidence that I am particularly suffering from them here. Obviously we can't be sure that we have eliminated all relevant biases in making a decision, but paralysing ourselves by refusing to make decisions in all such cases is clearly the worst of all ways forward. In this case, I'm not really claiming that my relevant beliefs are more likely to be true than those of my future self, but that he is more likely to have an immoral (or less moral) preference on this matter. I would therefore be happy to coerce my future self in this way.

There are related issues which are closer to your original concern, such as if I were doing this not because I thought my future self would act in a way that he sees as less moral, but because I thought he would actually come to believe that behavior to be moral. I think there is some chance of this, as we are biased to believe moral claims which help us out and don't hinder us. Such a conflict seems closer to the type you were originally writing about here, and the weighing of young and old biases would seem more important. However, I am mainly hedging against preference change rather than belief change.
Note also that I'm not making a contract that would completely bind me. I am instead making a pledge that I would feel bad about breaking for poor reasons, and that other people would look down on me for breaking for poor reasons. Breaking it for poor reasons would also have negative externalities (it would do less to inspire or motivate others). If something unforeseen happened, such as my needing to pay a year's salary to avoid death, then obviously I would do so, as this would allow me to do more good in the long run. If I were binding myself such that I had to die in such unforeseen cases, then the pledge would be much more open to claims that I was overconfident. I'm happy to make a pledge like mine that would only be worth breaking for very good reasons.
Robin: Overconfidence is a bias, but pursuit of glory is a preference.