How much do we really know about why we do what we do? We are usually quite ready to explain the reasons for our actions in some detail, but on closer examination such explanations often seem to be rationalizations. So how can we tell which of our explanations to believe? If we are not willing to take people at their word, how can we learn what really drives their actions?
I've read a decent amount of the material at his own blog, and his comments above don't seem out of the ordinary.
Err, umm, I think agnostic was joking.
StickK's 85% success rate: is that based on self-reported success from participants, or is it audited in some way?
Because you gotta figure there's going to be a strong bias if self-reported.
> In the US the [Stickk] scheme is said to be achieving success rates of up to 85%.
That means you face only a 3/20 probability of a financial penalty, against a 17/20 probability of actually losing weight, unlike other programs, which charge you equally whether you succeed or fail.
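Taking the quoted 85% at face value, the comparison can be sketched in expected-cost terms. The stake and the flat fee below are made-up illustrative numbers, not real prices:

```python
# Hypothetical comparison: commitment contract vs. flat-fee program.
# The 85% figure is the quoted StickK success rate; the dollar
# amounts are illustrative assumptions.
p_success = 17 / 20   # 85% chance of hitting the goal
stake = 100.0         # forfeited only on failure (assumed amount)
flat_fee = 50.0       # a typical program charges this win or lose (assumed)

# Expected penalty under the commitment contract: pay only when you fail.
expected_penalty = (1 - p_success) * stake
print(round(expected_penalty, 2))  # 15.0, vs. a guaranteed 50.0 flat fee
```

On these (assumed) numbers the commitment contract is cheaper in expectation precisely because the penalty is conditional on failure.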
I want to get into slightly better shape by running twice per week and doing squats and a few other exercises once per week. Let's say my desire for that, as a weighted combination of peak intensity of desire and the integral of the intensity of my desire-moments over time, is 7. I also want to read blogs for an extra three hours per week; my desire for that, by the same metric, is 3. But I am somewhat undisciplined, especially when it comes to fitness. So as I finish working and am deciding whether to go for a run or read blogs, the voice in my head that says "go for a run" lacks bargaining power. The voice that says "read blogs" wins because it would provide a reward now, while running would provide a reward later. I don't discount future benefits according to anything like an exponential curve, because I lack a habit that would give me a rationale for aggregating the future reward of running consistently over many weeks. If I had such a habit, it might overcome the short-term desire to read blogs and let me do what I want even more than reading blogs for an extra three hours: exercise.
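The mechanism described above can be sketched with a standard hyperbolic discount function, V / (1 + k·d), under which a small immediate reward can outweigh a larger delayed one. The desire intensities 3 and 7 come from the comment; the delay and the discount parameter k are assumptions of mine:

```python
# Sketch of why the "read blogs" voice wins: under hyperbolic
# discounting, the weaker but immediate desire can beat the
# stronger but delayed one. Delay units and k are assumed.

def hyperbolic(value, delay, k=1.0):
    """Present value under hyperbolic discounting: V / (1 + k*d)."""
    return value / (1 + k * delay)

blogs_now = hyperbolic(3, delay=0)      # reward arrives immediately -> 3.0
running_later = hyperbolic(7, delay=4)  # benefit felt weeks later   -> 1.4

print(blogs_now > running_later)  # True: the weaker desire wins right now
```

Under exponential discounting with a modest rate, the 7-valued goal would usually still win; the preference reversal at short delays is the signature of the hyperbolic curve the commenter is describing.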
So I think the diet example is evidence that people often lack well-developed discipline, or that they have too many distractions confounding their attempts at self-discipline, and they know it. Why try if you know you'll fail?
I imagine the "dieting is not about losing weight" post.
What about the effect of negative financial incentives, like losing medical insurance when you exceed a BMI limit?
> In the US the [Stickk] scheme is said to be achieving success rates of up to 85%.
This is an unscientific poll of a highly self-selected group, and it means essentially nothing. Every diet fad will produce similar numbers. I'll wait for a real randomized trial.
Why don't people use Stickk to lose weight? Robin offers two possible answers. One, people don't want to lose weight that badly. Two, people want to lose weight but have a stronger preference not to do something new and socially unusual. I would like to point out that there are other possibilities, closer to Robin's second than his first suggestion. For example, people want to lose weight, but they also want to gain a sense of freedom and control which is undermined by entering into a contract with strict monitoring.
To a greater or lesser extent this may be implicit when people say, "I want to lose weight."
Robin, your example of the route-planning tool is an example of a "rule-of-thumb". We use rules-of-thumb in just about every decision we make. Signing up for stickk.com is not an example of a rule-of-thumb. In that case, you are simply changing the punishment for failure (or, alternatively, the incentive for success). It does not give you a roadmap for how to lose weight.
Using rules-of-thumb effectively "automates" our decision-making. Perhaps you just distrust how we select the rules we use. Shouldn't we let people be free to choose the rules that work for them, rather than trying to artificially "automate" them?
People who say they want to lose weight are almost always sincere; the questions, though, are *how strongly* do they want this at any given time, and *how consistent* is the strength of their desire over time? Surely they also want to enjoy eating as much as they like of foods they like, so they always face a trade-off. Someone may never act so as to lose weight even though he sincerely desires this, because he *more strongly* desires eating. And the *strengths* of these opposing desires may fluctuate over time. At time t the person really does more strongly desire weight-loss, but soon, at time t', the desire to eat becomes the stronger. Perhaps your point is that when someone says he "wants to lose weight" he is suggesting that he *consistently* wants this *more* than he wants the pleasures of eating unrestrainedly, and *this* suggestion is likely to be false. That seems correct. Indeed, if he consistently had a stronger desire for weight-loss, he would lose weight without needing stickk.com.
You've posted about hypocrisy a lot recently and in the past, but you've focused mostly on unmasking examples of hypocrisy.
First, this seems a bit old hat to me. Isn't the whole reason economics looks at revealed preferences instead of stated preferences that people lie to others and to themselves?
Second, a more interesting and unexplored question is "What are the welfare implications of hypocrisy?"
For this I see two main lines of inquiry: first, what are the descriptive effects of individual-agent hypocrisy when aggregated over many interacting agents; and second, given hypocrisy, what is the appropriate way of specifying a welfare function?
In short, let's stop the jeremiads and focus on doing economics.
People prefer the illusion that offense and defense are balanced because they don't understand game design, and that is the simplest form of balance they can intuitively understand.
In cases where the work involved would be disproportionate to the utility derived, people use proxies for their values. So people will, for example, read the NYT or a blog that they know they already agree with to a certain extent.
If costs go up, people will put up with less accurate proxies; if costs go down, they will look for more accurate ones. The internet has drastically lowered the cost of finding a proxy that more closely resembles your values.
I expect that if it were available, people would love to have an AI that models their values and does their filtering for them.
Perhaps you should engage more with the many people who oppose your argument, rather than latching onto the rare arguments that support your position.
I use LeechBlock, a Firefox add-on that blocks certain sites during work hours. You can bypass it when you absolutely need to, but it does reduce the amount of time wasted on certain sites. What other similar tools do people use?

Antivirus programs seem similar. People open documents they know to be dubious because they know they have AV that should stop the malware. This is a case of over-trusting algorithmic safety decisions. In a similar way, people put too much faith in the safety devices they regularly see: http://www.damninteresting....
How does a tool like a safety belt, which causes someone to drive more dangerously, unmask their desires?