Future Self Paternalism

Via Tyler Cowen, we hear Robert Fogel did not trust his future selves:

When I graduated from college, I had two job offers.  One was from my father, to join him in the meat-packing business.  That would have been quite lucrative. The other was as an activist for a left-wing youth organization.  I chose the latter and worked as an activist from 1948 to 1956.  At the time I was making that decision, my father told me: "If you really believe in that cause, come work with me.  You will make a much higher wage and you could give your extra income to hire several people instead of just yourself."  I thought, well, that makes some sense.  But I was convinced that this was a way to get me to change my views or at least lessen my commitment to an ideological cause that I found very important.  Yes, the first year, I might give all of my extra money to the movement, but every year I would probably give less, and finally reach the point when I was giving nothing at all.  I feared I would be co-opted. I thought this was my father’s way of indoctrinating me.

When I spent a few weeks at Oxford last summer, Toby Ord similarly said he wanted to commit his future selves to donating at least ten percent of income to third world charity; he did not trust his future selves to make that choice for themselves. 

These paternalism examples are striking, because paternalism is usually justified as a response to a combination of ignorance and irrationality, but Robert and Toby should expect their future selves to be just as smart and rational, and even better informed than they.  How can they reasonably expect their future selves to be so much more biased that force is appropriate to constrain them?

Added: Toby Ord elaborates in the comments.

  • sa

    This is an awesome question. Estate lawyers deal with this all the time; it is a source of much acrimony in families. Isn’t the European welfare state an attempt by today’s voters to constrain the effective latitude of future, unborn voters with respect to healthcare and pensions?

  • http://homepage.mac.com/redbird/ Gordon Worley

    I suspect this is a bias of the young about the old. Consider, for example, my own father. As a college student he marched in protests and believed in liberal politics. Thirty years later, he’s a Republican. There appears to be a strong trend toward conservatism as we age, so a young person who strongly believes they are right must conclude that this shift is the result of bias, rather than increased knowledge, since no additional knowledge would make them change their mind (since they are already correct).

    Thus is the peril of forming absolute convictions.

  • http://jewishatheist.blogspot.com JewishAtheist

    How can they reasonably expect their future selves to be so much more biased that force is appropriate to constrain them?

    It’s much easier to make a one-time decision than to continue choosing the “right” thing. Compare throwing out a box of cookies to letting them sit on your desk all day without eating them.

  • Perry E. Metzger

    I think this is simply a rational way of dealing with the fact that you know you are partially irrational.

    For example, take the case of the person who puts an alarm clock on a high shelf far from the bed. He is pre-committing to wake up, and he’s assuming that when the time comes to do that, his future self will be less rational than his current self and will attempt to shut off the alarm before he can evaluate the consequences of doing that. Therefore, his current self is making it hard to turn off the alarm in an effort to circumvent his future self.

    Or, as someone else has mentioned, consider the person on a diet who chooses to avoid being around tempting foods or to throw them away. Such a person is allowing their current, rational self to pre-commit in such a way as to avoid their future irrational self undoing a reasonable decision.

    People are not always rational. People also understand that they are not always rational. Strategies like this are a way of allowing you to make a decision while you are behaving more rationally to avoid a future behavior when you know that you will be less rational.

  • rcriii

    But how does your current self know it is more rational than the future self? Or as Robin once asked:

    “How hard should you try to agree with [your younger self]?”

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Jewish and Perry, our standard understanding of time inconsistency says that we impulsively prefer immediate gratification, and so commitments made at a previous date can offer superior decisions for the long run. But you don’t have to get very far before the immediate moment to gain this advantage. I can move the cookies away from my desk in the morning so I do not munch on them all day; I don’t need to make that decision twenty years ahead of time to gain this benefit. The idea that one’s forty year old self is “impulsive” even if he makes decisions for his next five years seems a bit of a stretch.

  • Carl Shulman

    Future self paternalism is best justified when you fear, not that you will conclude your current ethical stances are erroneous, but that your finite supplies of willpower will be more heavily taxed in the future:
    http://www.psy.fsu.edu/~baumeistertice/muravenbaumeister2000.pdf
    Perry’s example seems to fall into this category, as even while devouring a cake, one may still wish to follow the diet over time.

    It seems that the optimal strategy is to control for greater future stresses on one’s willpower (tempting new purchases when one has the ability to make them, family demands, etc) while benefiting from greater analytic rationality and knowledge by committing to give, but ‘to the best available cause’ rather than a specific one. Private foundations and donor-advised funds are useful vehicles for this. Transferring funds to rational altruists who will respond to new information as effectively as you would can have a similar effect (although the last option will deliver higher returns, if the real return on fundraising expenditures is greater than the return on one’s private investment portfolio or the marginal impact of one’s funds declines rapidly with time).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Carl the paper you point to suggests a limited supply of self-control over the space of an hour, not over the space of many decades.

  • Carl Shulman

    One’s future self needn’t be impulsive for such paternalism to be justified. If one plans to marry and have children, or otherwise take on new obligations that will compel one to spend a substantial proportion of one’s available resources, whatever one’s level of resources may be, then precommitting efforts before taking on those obligations will reduce conflict and the mental effort required to maintain an allocation.

    A ‘ratchet effect’ on expenditures is also relevant. It is easy for one’s minimum standard of living to rise over the years in response to myriad temptations (moving from enjoying grad student life to the ‘golden handcuffs’ lifestyle of many Wall Street types), but painful to substantially reduce it, so limiting one’s supply of discretionary income can reduce the psychological costs of future giving.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Carl, economists have many formal models that make clear the benefits of commitment to deal with impulsivity. I have not seen any such models making clear the benefits of commitment to deal with the issues you mention, and I have doubts about whether such models could be constructed for rational agents.

  • Carl Shulman

    Robin,

    Adjusting one’s affairs to deal with informal taxation by social relations can certainly be optimal for a rational agent. If the demands of relations increase with disposable income, and it is psychologically costly to conflict with them, then avoiding that conflict by irrevocably ensuring money will be used for altruistic efforts seems clearly beneficial. The GMU lunch crowd seems to discuss this reasonably often:
    http://www.marginalrevolution.com/marginalrevolution/2006/03/taxing_families.html

    The basic assumptions for a ratchet effect:
    1. Life presents a steady stream of temptations, indulgences that one does not wish to pursue in conditions of calm reflection, but that require willpower to resist.
    2. The greater the proportion of one’s discretionary income an individual temptation would cost, the less willpower is required to resist it, or the less likely one is to succumb.
    3. Succumbing to temptation can be habit-forming, such that resisting a habit already acquired will require more willpower than resisting the creation of a new habit.

    Given these assumptions, constraining one’s discretionary income from time T to T+10 should reduce the number of unwanted, costly habits one acquires, and reduce the psychological costs of conforming to your ethical principles at T+11, even when the constraint lapses.
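Carl’s three assumptions lend themselves to a small toy model. The sketch below is purely illustrative (the function name, the numbers, and the hard-threshold rule are assumptions of mine, not anything from the comment): it encodes assumption 2 as “resist whenever the temptation would cost more than a fixed fraction of discretionary income” and assumption 3 as “every indulgence becomes a recurring habit.”

```python
def habits_acquired(discretionary_income, temptation_cost=50,
                    periods=10, resist_threshold=0.05):
    """Toy model of the ratchet effect (illustrative assumptions only).

    Assumption 2: the larger the fraction of discretionary income a
    temptation would cost, the easier it is to resist -- modeled here
    as a hard threshold. Assumption 3: each indulgence becomes a habit.
    """
    habits = 0
    for _ in range(periods):
        fraction = temptation_cost / discretionary_income
        if fraction < resist_threshold:  # cheap relative to income -> succumb
            habits += 1
    return habits

# Constraining discretionary income (say, via a giving pledge) makes each
# temptation loom larger as a fraction of income, so fewer habits form:
unconstrained = habits_acquired(discretionary_income=5000)  # -> 10 habits
constrained = habits_acquired(discretionary_income=500)     # -> 0 habits
```

On these assumptions, constraining income from T to T+10 prevents habits forming during that window, which is the mechanism the comment describes.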

  • albatross

    If I commit now to donate 10% of my income for life to some charity, the me that exists right now gets the full benefit (in terms of feeling good) of the decision, while the costs fall on mes far in the future. (So I guess there’s an externality involved, and if the future me were here, he’d want to negotiate about this, as well as about that double quarter pounder I’m having for lunch.)

  • http://profile.typekey.com/halfinney/ Hal Finney

    Reminds me of the widely quoted and misquoted saying:

    If a man is not a socialist by the time he is 20, he has no heart.
    If he is not a conservative by the time he is 40, he has no brain.

  • http://profile.typekey.com/bayesian/ Peter McCluskey

    I’m puzzled by the talk of future selves being biased here. I think albatross is on the right track, but isn’t being as clear as I would like. If I regard the “future me” as a completely different person, he and I can rationally and selfishly agree that it’s in my interest for him to donate the money and not in his interest to do so. If there’s any bias, it’s more likely to be committed by third parties who think I’m being altruistic by spending money that they mistakenly think of as mine, when I think of it as belonging to someone who is best described as “partly me, partly a different person”.

  • Douglas Knight

    I find the Robert Fogel example much more interesting than the Toby Ord example. Perhaps this is just because I do not know the Toby Ord example. I (and most of the commenters, I think; certainly Carl Shulman) assume that he expects his future self to have similar views to his current self, but to fail for some other reason.

    But Robert Fogel is pretty explicit, at least in the word “indoctrinate,” that he thinks different activities will have different effects on his beliefs and/or goals. It would be interesting to know his model in more detail.

  • JMG3Y

    Doesn’t this just reflect a tacit understanding that the subconscious soul of the human brain (values, emotions) changes almost irreversibly in the context of life experiences, that these changes are beyond those expected due to aging, disease or accident, that once certain thresholds are crossed long enough going back is very difficult despite prior best intentions? Emotions drive motivation; thus changed emotional response results in different motivations. The corruption of power, the experience of survivorship, post-traumatic stress syndrome and, probably, parenthood. The subconscious is changed and humans don’t have a reset button or a restore function to return to a prior, pristine state.

  • http://profile.typekey.com/halfinney/ Hal Finney

    The bias here seems to attach to the future self. How can one hold his own beliefs in good faith if he sees them as having been arrived at arbitrarily? If he thinks he acquired his beliefs merely as accidents of fate, because he happened to follow one life path rather than another, how can he view them as being of any serious import? If Fogel sees his present-day liberal views as the consequence of that long-ago decision not to work for his father, which would have turned him into a conservative, he must accept that he has no real grounds for his beliefs.

    I don’t see how people can believe things while also seeing that they have no real grounds for their belief other than accident. That seems like too serious and obvious a bias to be overlooked or suppressed.

  • Stuart Armstrong

    We are all paternalistic towards our future selves – by choosing our careers, the places we move to, the people we hang out with, the mental habits we develop. Even saying I Want To Become Stronger is constraining our future – possibly to a less happy path. Even doing nothing is a big constraint.

    So if we see our future selves as a semi-different person, we are in the strange position that we have to be paternalistic towards him/her. The only other case where “forced paternalism” is prevalent is with the parent of a child – and, similarly to liberal child-rearing, some commentators here seem to feel that the best you can do in this case is just act to maximise your future self’s freedom and ability to overcome bias; in effect, trust your future self. But is that analogy the correct one?

    The question may be: what do you wish your past self had done for you? Is it reasonable to assume that your future self would like similar things? If so, why aren’t you doing it for him – especially if it costs you nothing?
    I’m quite attracted to the idea of my past self compelling me to give 10% of my money to charity. It doesn’t constrain me politically or emotionally, and would make me a better person today. My personality is pretty fixed by now, so it would probably appeal to my future self as well. Ergo, by doing so I am providing a valuable service to my future self. Just as I am providing a valuable service to him by trying to overcome my bias – or making money for him.

    And as long as my commitment to charity has certain safeguards in it so that it’s not a complete straitjacket (since my knowledge of my future self is somewhat imperfect), I see no problems with it.

  • Stuart Armstrong

    Robert Fogel’s example is more of a problem – he’s acting to constrain his future self’s political views. He seems to believe that self-serving bias is an intensely strong bias, so he should choose his career path in consequence (this is certainly true in some careers – if you’re offered a choice between working for one of two political parties, then it’s nearly certain that your biases will be affected by the one you choose to work for. It’s less certain that this is the case for a meat packing business, but it might be the case for Fogel).

    Now, Hal Finney points out

    If Fogel sees his present-day liberal views as the consequence of that long-ago decision not to work for his father, which would have turned him into a conservative, he must accept that he has no real grounds for his beliefs.

    Indeed. According to his own model of how careers affect beliefs, Fogel has no stronger grounds for his political beliefs in 1956 than he did in 1948 (I don’t know what happened since). In fact, he probably has less ground.

    But if the 1948 Fogel believed that:
    1) Self serving-bias in a career overwhelms other considerations,
    2) His liberal beliefs were more correct than conservative ones,
    3) The loss of liberty for his future self will not be crippling for that self,
    then his decision was the right one according to his lights.

    But was it correct ‘objectively’? It very probably wasn’t. It’s hard to hold all three statements in an unbiased way – the more unbiased evidence Fogel finds for 2) and 3), the more that undermines 1).

    There does seem to be a small window where Fogel’s decision is the right one – if there is evidence that a job as meat packer will constrain his future beliefs more than a job as political activist (unlikely in the short run, possibly true in the long run). But that window is very small.

    Lastly, Fogel might have other priorities than merely improving or debiasing himself; he could feel that his future self has some claim to his help, but so do other issues and other people. In that case the truth of 3) is not important, and 1) and 2) can be balanced more easily.

  • anon

    @ should expect their future selves to be just as smart and rational, and even better informed than they.

    Judging from the causes they supported, they were afraid their future selves would be at least as rational and better informed. They look like they were trying to use commitment to head off growing up.

  • James Wetterau

    One consistent explanation for the view held by these men would be a belief that having more money somehow undermines rationality. If they believed that money acts like a drug that would actually sap their powers of reason, they might both want money as an instrument, but fear how possessing it would degrade or warp their minds or wills. This is to me a puritanical, alien way of thinking, but I still think people might believe it.

  • ChrisA

    The key to understanding this approach is to start with the premise that the present day self has reached a perfect conclusion already, therefore any future self can only decay from this perfection (or at best concur). Isn’t this the defining feature of paternalism? A paternalist believes they have the correct answer and therefore, to prevent error, otherwise free agents must be constrained to the selected answer.

    Robin may have trouble believing people can be so arrogant, but I see it all the time.

  • Stuart Armstrong

    The key to understanding this approach is to start with the premise that the present day self has reached a perfect conclusion already, therefore any future self can only decay from this perfection (or at best concur).
    I think the first example I gave was one way you can justify (soft) future-self paternalism while thinking that your future self will be better than you.

    But the more I read of the posts here, the more I’m convinced that paternalism is a bad model for what is going on here. The best a paternalist can do, generally, is do the minimum: give his target the most options he can, and get out of the way. But this is your life! You can’t live your whole life for your hypothetical future self. Your future self is not a free agent; he is the outcome of the decisions you have to make today.

    I think the term paternalism just clouds the issue. No-one has mentioned smoking yet – but that matches Toby Ord’s case quite closely, in that starting smoking can give you a warm glow today (of pleasure, of satisfaction), and constrains the liberty of your future self. That is an anti-paternalistic argument against smoking – if it seems strange to you, then seeing Toby Ord’s charity argument in terms of paternalism must be strange as well.

    And what about starting a mortgage? Certain pleasure today, uncertain future-self pleasure, loss of future-self liberty. Paternalism?

    NB: I still think you should trust your future self more than some people do. I just think that seeing this in terms of paternalism distorts the picture.

  • James Wetterau

    Stuart Armstrong suggests:
    “I think the term paternalism just clouds the issue. No-one has mentioned smoking yet – but that matches Toby Ord’s case quite closely, in that starting smoking can give you a warm glow today (of pleasure, of satisfaction), and constrains the liberty of your future self. That is an anti-paternalistic argument against smoking – if it seems strange to you, then seeing Toby Ord’s charity argument in terms of paternalism must be strange as well.”

    I think this gets to a good point. There are actions we take now or don’t take now because we fear that they might limit our rationality later. Smoking is famously addictive, and if addiction is seen as contrary to reason we might well fear that a little smoking today will trap us in an irrational habit for years to come. This relates, I think, to my point that some people might fear that money will warp their ability to reason correctly. I agree that paternalism is not the right way to think about this type of fear.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    James, we might call the motive you describe “meta-paternalism”: limiting choices not because irrationality makes their choices bad, but because such choices will lead to reduced rationality which will then make other choices bad. Of course meta-paternalism is just paternalism at the meta level. After all, you could just give them your advice about how best to preserve their rationality.

  • albatross

    There are two different things going on here:

    a. We may think we know what’s right and want to constrain our future selves. That is, we think we know the right values now, and that our future selves will not.

    b. We may think we can act now in ways that will benefit our future selves, even at the cost of also binding them to some extent. That is, we think we share values with our future selves.

    Most examples I can think of for constraining future behavior fall into (b)–getting married, buying a house, enrolling in school, all involve a commitment that constrains future choices, but in a direction you expect your future self to appreciate.

    The best examples of (a) I can think of involve addiction. I don’t want to start smoking, partly because I expect that the future me will find it both necessary and very difficult to stop. I wouldn’t normally expect the future mes to be less moral, or to have radically different beliefs, than I do. But addiction is an example. Another might be a decision by a very hotheaded person, or one subject to serious depression, not to keep firearms in the house. This involves recognizing that the local me might not hold quite the same values as the global me over time. (Similarly, many alcoholics don’t take that first drink, married men don’t go hang out in their female co-worker’s hotel room on trips, etc. Because they figure that their local selves, in the heat of the moment, may do something their global selves will regret.)

  • James Wetterau

    Robin: re: your remarks about “meta-paternalism” — I agree with what you have to say, which leaves me wondering if you feel that perhaps this holds the answer to your initial puzzle?

    If the benefits from a rationality-reducing choice are not great enough to offset the net present predicted costs in following the “advice about how best to preserve” future rationality, then a choice against the more apparent immediate interest is the rational one. Thus, if money will undermine your future rationality, and if retaining future rationality will have a future cost that more than offsets the money received, then forgoing the money is the rational choice. Do you agree this solves the puzzle?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    James, being an older person with more money than my younger self had, I don’t personally find it plausible that more money makes you less rational.

  • James Wetterau

    Robin: I completely agree with you. May the filthy lucre flow our way! I was simply proposing that someone who did feel that way about money could conceivably make such a calculation logically.

  • TGGP

    Robin, are you saying that you are no more rational now than when you were young?

  • http://profile.typekey.com/tobyord/ Toby Ord

    Perhaps I could shed some light on the ‘Toby Ord’ example. The last comment from albatross was closest to the mark. I think that giving my money where it is much more efficiently spent is a greater good. For example, 17 pounds can buy me a much nicer dinner, or cure someone of blindness (I’ve checked this one, and it holds up). Now the latter of these things is clearly at least 100 times as good. If I were blind, I would certainly give up 100 dinner upgrades to be cured of blindness. Indeed, I would give this up for even just one year of sight. All the more so if I was extremely poor and had trouble even surviving without being able to see. This is rather obvious and I highly doubt my future self will disagree.

    However, it is possible that my future self will become more self interested. I might have difficulty doing what I believe is right. I have observed this phenomenon in others and even though I doubt I will become much more selfish, I think it is more likely than that I will rationally decide my dinner is better for the world than someone else’s ability to see. Thus, in aiming to promote the greater good, I should constrain my future self, just as I should constrain others in a minor way to produce an obvious global benefit. I can commit the resources of others by voting for more government aid to the poor (via taxation) but I can commit the resources of my future self much more effectively. The ‘tax is theft’ types are unlikely to agree with this argument, but I think that most people would.

    I should add that I also agree with the comments about the ratcheting effects on expectations and weakness of will that come with increasing income. Much better to make a decision like this now. Though there is a chance that my future self will be unhappy about it, the world’s poor certainly will not be.

    Oh, and one final point: I’m not committing to give 10%, but to give all my income above an (inflation-adjusted) figure of 10,000 pounds, which should free up over a million pounds (in today’s figures) for the developing world, which I could use to cure 58,000 people of blindness or perhaps something even better.
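As a quick sanity check on the arithmetic in that last point (both figures are the commenter’s own estimates, the 17-pound cost of a blindness cure being the one he quotes earlier in the thread):

```python
# Rough check of the figures in the comment above (inputs are the
# commenter's own estimates, not independently verified numbers).
pledged_pool = 1_000_000   # pounds freed up over a lifetime, "in today's figures"
cost_per_cure = 17         # pounds per cure of blindness, quoted earlier
cures = pledged_pool // cost_per_cure
print(cures)  # -> 58823, consistent with the quoted "58,000 people"
```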

  • michael vassar

    Toby: I really appreciate such dedication, but I think that it’s probably somewhat short-sighted. It is possible to be happy on 10,000 pounds per year, but it isn’t possible to value your time very much. You are almost sure to be able to donate more money if you value your time more highly and spend it making money. I don’t know your abilities, though I would love to find out (Utilitarians and their ilk have so little social support, please e-mail me at michael.vassar at google dot com) but simply the level of dedication you are proclaiming is rare. It would be a tremendous shame to waste a potential Zell Kravinsky by encouraging them to earn a middle class salary and give it away. You should probably look at http://felicifia.com/ for more discussion.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Toby, you have considered that your future self might be biased by a weak will, but have you considered that you might be biased, such as by an excessively strong will? Perhaps you suffer from young male biases toward overconfidence and the pursuit of glory. Won’t it be the young you who is celebrated for his great devotion, while the old you does the actual paying?

  • http://profile.typekey.com/tobyord/ Toby Ord

    Michael:

    I appreciate your concerns on the motivation front. I’m actually in a rather unusual situation in that I am very motivated towards helping the poor and am probably more motivated to do that than to help myself. In fact I think I’d work harder to get $100 for the poor than $10 for me and $90 for them. My work is in ethics at Oxford and I believe I can do more good here in total than by going elsewhere and earning more to give it away. I’m open to suggestions though — if I thought it would be better, then I’d do it.

    I should also mention that I’m setting up an organization called Giving What We Can for people who want to commit to give more. The idea is that in order to join, you need to commit to give at least 10% of your income to wherever you think it will do most to fight extreme poverty. We can pool resources on investigating the efficiency of different charities (a very important topic), challenge each other to give more (fighting weakness of will) and so forth. Indeed it is something of a support network for utilitarians itself (though of course members need not be utilitarians). With enough members, we could pressure NGOs to be more efficient (as they will be more likely to get our money and that of visitors to our site), pressure the government to give a little more, provide a welcoming community for those who want to do more for the world and so forth.

    I’ve got quite a few people interested in joining (including some high profile members) and it should officially launch later this year. I expect it to raise (lifetime) pledges of between $100 million and $500 million within the first few years (obviously the amount actually raised would be lower as not everyone would be able to keep the pledge). To put this in context, my wife and I alone pledge about $6 million and people have contacted me pledging a total of $8 million together. The above targets don’t need that many members to be met, and there is a surprisingly large number of people willing to join.

    Robin:

    I’m not especially concerned about glory. I suppose that all versions of myself could be admired for going without, although there are differences: the young self chooses to sacrifice which is often considered impressive, the older self perhaps suffers more by going without against his will and this can be more unpleasant.

    However, in reality, I’ve thought enough about ethics and giving that I honestly don’t mind much whether I or another gets a benefit, so long as it is as large a benefit as possible. Perhaps this is just me, but I imagine that if other people seriously considered such things for long enough, they would also readjust their psychologies to be less self-interested. I’m not sure. In any event, while I’m making a large financial sacrifice, I don’t think I’m sacrificing all that much happiness, for I already have a loving wife, warmth, shelter, access to all works of literature ever written, beautiful music, great friends, rich conversation etc. None of that is particularly expensive: those parts that can be bought can fairly easily be bought for 10,000 pounds a year in Oxford (a generally expensive place to live). I’ve been doing it on less than 7,000 for the past three years, leaving room to save money too. I don’t think there is much need for praise.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Toby, so you are dismissing the possibility that you are biased on the basis of the fact that you do not feel biased?

  • michael vassar

    Toby:
    You sound serious enough to be worth prolonged discussion. I’d like to meet with you while I’m in the UK at the end of this May. You can reach me to make plans (I suggest jajah.com) at the US phone number six one zero, two one three, two four eight seven. I’m pretty sure that by pooling our informational and strategic resources we can be significantly more effective than we could be otherwise.

  • http://profile.typekey.com/tobyord/ Toby Ord

    Robin:
    I’m not sure in what sense you think I might be biased. I’m sure I suffer from a number of epistemic and moral biases (as do we all unfortunately), but you seem to be getting at something in particular. I think this is a pretty rational choice given my aims.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Toby, I said: Perhaps you suffer from young male biases toward overconfidence and the pursuit of glory. I was trying to refer to biases that are especially strong in young males, as opposed to the old male you will someday become. You seem to be able to imagine biases that might especially afflict the old rich male you may someday become. Have you no concept that there could be biases especially likely to afflict the person you are today?

  • michael vassar

    Robin: Overconfidence is a bias, but pursuit of glory is a preference.

  • http://profile.typekey.com/tobyord/ Toby Ord

    Robin:
    I certainly see that there are biases that are more likely to affect the young than the old, but don’t see any evidence that I am particularly suffering from them here. Obviously we can’t be sure that we have eliminated all relevant biases in making a decision, but paralysing ourselves by refusing to make decisions in all such cases is clearly the worst of all ways forward. In this case, I’m not really claiming that my relevant beliefs are more likely to be true than those of my future self, but that he is more likely to have an immoral (or less moral) preference on this matter. I would therefore be happy to coerce my future self in this way.

    There are related issues which are closer to your original concern, such as if I was doing this because I thought not that my future self would act in a way that he sees as less moral, but that he would actually believe that to be moral. I think there is some chance of this, as we are biased to believe moral claims which help us out and don’t hinder us. Such a conflict seems closer to the type you were originally writing about here and the weighing of young and old biases would seem more important. However, I am mainly hedging against preference change rather than belief change.

    Note also that I’m not making a contract that would completely bind me. I am instead making a pledge that I would feel bad about breaking for poor reasons and other people would look down on me breaking for poor reasons. There would also be poor externalities if I broke it for poor reasons (it would do less to inspire or motivate others). If something unforseen happened, such as my needing to pay a year’s salary to avoid death, then obviously I would do so, as this would allow me to do more good in the long run. If I were binding myself such that I had to die in such unforeseen cases, then it would be much more open to claims that I was overconfident. I’m happy to make a pledge like mine that would only be worthwhile breaking for very good reasons.