Bryan Caplan argues that economists mostly agree with one another, compared to the general public, and reports results from the Survey of Americans and Economists on the Economy (SAEE):
The leading correlates of economists’ disagreement are political ideology and, to a lesser extent, party affiliation. Liberal Democratic and conservative Republican economists disagree in expected ways about taxes, regulation, excessive profits and executive pay, and some employment-related issues. Conservative economists are also markedly more optimistic about the country’s economic future. Note, however, that there is little evidence of an ideological divide over the economy’s past or present performance. Economists across the political spectrum can largely agree about the path of inequality, real income, and real wages over the past two decades.
I don’t find agreement about the past very comforting: the point of economic advice is to deliver good consequences in the future. However, I would point out that disagreements about predictions are an opportunity for retrospective assessment. Indeed, when Bryan’s paper was published, in 2002, the 5-year timeline of the predictions had already come and gone. But there’s nothing stopping us from checking now. [Note, I prepared this post up until this point with the intention of posting it before peeking at the data.] Results below the fold.
Continue reading "The SAEE: who was right?" »
Let's say you have been promoting some view (on some complex or fraught topic – e.g. politics, religion; or any "cause" or "-ism") for some time. When somebody criticizes this view, you spring to its defense. You find that you can easily refute most objections, and this increases your confidence. The view might originally have represented your best understanding of the topic. Subsequently you have gained more evidence, experience, and insight; yet the original view is never seriously reconsidered. You tell yourself that you remain objective and open-minded, but in fact your brain has stopped looking and listening for alternatives.
Here is a debiasing technique one might try: writing a hypothetical apostasy. Remind yourself before you start that unless you later choose to do so, you will never have to show this text to anyone.
Imagine, if you will, that the world's destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete. The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved. The catch is that the jury consists of earlier stages of yourself (such as yourself as you were one year ago). Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form "trust me, I am older and know better" will be ineffective. Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed/partial/shallow/juvenile/crude/irresponsible/incomplete and generally inadequate your old cherished view is.
(If anybody tries this, feel free to comment below on whether you found the exercise fruitful or not – but no need to state which specific view you were considering or how it changed.)
New data question the claim that people tend to overestimate their abilities:
A large body of literature purports to find that people are generally overconfident. In particular, a better-than-average effect in which a majority of people claim to be superior to the average person has been noted for a wide range of skills, from driving, to spoken expression, to the ability to get along with others, to test taking on simple tests. The literature generally accepts that this better-than-average effect is indicative of inflated self-assessments. However, [we] recently … show that the better-than-average data … does not indicate … people have made some kind of error in their self-evaluations. Because of this reason, almost none of the existing experimental literature on relative overconfidence can actually claim to have found overconfidence. … In this paper, we report on an experiment designed to provide a proper test of overconfidence. … As in much previous experimental work, we find a better-than-average effect among our subjects. … We find evidence that subjects are uncertain of their own types. Our experiment can be viewed as a test of the null hypothesis that people are behaving rationally (and are not overconfident). We cannot reject that hypothesis.
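The key reasoning step here is that a majority judging themselves "better than average" is not, by itself, an error: if the skill distribution is skewed, most people really can sit above the mean. A toy numeric illustration (the numbers are mine, not from the paper):

```python
# Toy illustration: with a left-skewed skill distribution, a majority
# can rationally and correctly believe they are above the mean.
# The skill values below are hypothetical.

skills = [1, 9, 9, 9, 9]  # one very poor performer drags the mean down

mean_skill = sum(skills) / len(skills)
above_mean = sum(1 for s in skills if s > mean_skill)

print(mean_skill)  # 7.4
print(above_mean)  # 4 -- four of five people are genuinely above average
```

So "80% say they are above average" is only evidence of overconfidence relative to the *median*, which is why the authors argue a proper test needs more than better-than-average data.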
Not too long ago, the well-known economist Robert Hall presented this paper (co-authored with Susan E. Woodward) at my place of work. Here is the abstract:
In the standard venture capital contract, entrepreneurs have a large fraction of equity ownership in the companies they found and are paid a sub-market salary by the investors who provide the money to develop the idea. The big rewards come only to those whose companies go public or are acquired on favorable terms, forcing entrepreneurs to bear a substantial burden of idiosyncratic risk. We study this burden in the case of high-tech companies funded by venture capital. Over the past 20 years, the typical venture-backed entrepreneur earned an average of $4.4 million from companies that succeeded in attracting venture funding. Entrepreneurs with a coefficient of relative risk aversion of two and with less than $0.7 million would be better off in a salaried position than in a startup, despite the prospect of an average personal payoff of $4.4 million and the possibility of payoffs over $1 billion. We conclude that startups attract entrepreneurs with lower risk aversion, higher initial assets, preferences for entrepreneurship over employment, and optimistic beliefs about the payoffs from their products.
During the seminar it occurred to me that these results, assuming they are correct, are evidence of an absence of overconfidence, at least among the kinds of people who leave good jobs to form high-tech startups. The reason is that if potential entrepreneurs were massively overconfident, one would expect to see lots of entry of startups based on weak ideas, which would lead to an expected payoff so low that forming a startup would be a losing proposition for the potential entrepreneur unless he/she started out extremely wealthy and/or had very low risk-aversion. But what the authors actually find is that forming a startup with an average-quality idea* is a break-even proposition for a potential entrepreneur with quite modest wealth and with a more-or-less standard degree of risk-aversion.
After the talk, I asked Professor Hall if he agreed with this interpretation (he seemed to), and if he would object to my posting about it on OB (he didn't). But I will make him aware of this post, and invite him to comment if he would like, and correct any mistakes that I might have made.
*The authors have no way to distinguish the quality of an idea, so there is an implicit assumption that the marginal quality of the idea is equal to the average quality of all ideas that actually get implemented.
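To see why a coefficient of relative risk aversion of two makes the startup gamble so unattractive at low wealth, here is a minimal sketch of the certainty-equivalent calculation under CRRA utility. The payoff distribution below is entirely made up for illustration; the paper's actual distribution is not reproduced here.

```python
# Sketch: certainty equivalent of a risky startup payoff under CRRA
# utility with gamma = 2, i.e. u(w) = w^(1-gamma)/(1-gamma) = -1/w.
# The lottery below (probabilities and payoffs) is hypothetical.

def u(w, gamma=2.0):
    # CRRA utility for gamma != 1
    return w ** (1 - gamma) / (1 - gamma)

def certainty_equivalent(outcomes, probs, gamma=2.0):
    eu = sum(p * u(w, gamma) for w, p in zip(outcomes, probs))
    # invert u for gamma = 2: u(w) = -1/w  =>  w = -1/u
    return -1.0 / eu

wealth0 = 0.7  # initial assets, $ millions
# Hypothetical lottery over final wealth: rare big exit vs. likely washout
outcomes = [wealth0 + 44.0, wealth0 + 0.1]
probs = [0.1, 0.9]

mean_payoff = sum(w * p for w, p in zip(outcomes, probs))  # about 5.19
ce = certainty_equivalent(outcomes, probs)                 # about 0.89
```

Even with a mean final wealth above $5 million, the certainty equivalent for this agent is under $1 million: idiosyncratic risk eats nearly all of the expected gain, which is the mechanism behind the paper's "better off in a salaried position" conclusion.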
I was recently having a conversation with some friends on the topic of hour-by-hour productivity and willpower maintenance – something I’ve struggled with my whole life.
I can avoid running away from a hard problem the first time I see it (perseverance on a timescale of seconds), and I can stick to the same problem for years; but to keep working on a timescale of hours is a constant battle for me. It goes without saying that I’ve already read reams and reams of advice; and the most help I got from it was realizing that a sizable fraction of other creative professionals had the same problem, and couldn’t beat it either, no matter how reasonable all the advice sounds.
"What do you do when you can’t work?" my friends asked me. (Conversation probably not accurate, this is a very loose gist.)
And I replied that I usually browse random websites, or watch a short video.
"Well," they said, "if you know you can’t work for a while, you should watch a movie or something."
"Unfortunately," I replied, "I have to do something whose time comes in short units, like browsing the Web or watching short videos, because I might become able to work again at any time, and I can’t predict when -"
And then I stopped, because I’d just had a revelation.
Continue reading "Chaotic Inversion" »
While we tend to be optimistic about our abilities, we are pessimistic about our luck:
We analyze the answers of a sample of 1,540 individuals to the following question "Imagine that a coin will be flipped 10 times. Each time, if heads, you win 10€. How many times do you think that you will win?" The average answer is surprisingly about 3.9, which is below the expected value of 5, and we interpret this as a pessimistic bias. We find that women are more "pessimistic" than men, as are old people relative to young.
Added: Benja Fallenstein notes "if there is no [personal] gain associated to the coin tossing, the average [guess] is 4.9, and 90% answer 5."
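For reference, the rational point prediction here is just the binomial mean n·p = 10 × 0.5 = 5, and a quick Monte Carlo check (my own sketch, not from the study) confirms it:

```python
import random

random.seed(0)  # make the simulation reproducible

# Rational answer: expected number of heads in 10 fair flips
n, p = 10, 0.5
expected_heads = n * p  # 5.0

# Simulate many respondents each flipping the coin 10 times
trials = 100_000
avg = sum(
    sum(random.random() < p for _ in range(n))
    for _ in range(trials)
) / trials
# avg comes out very close to 5.0, far from the reported 3.9
```

So the reported mean answer of 3.9 is more than a full flip below what any calibrated guesser should say, which is what makes the "pessimistic bias" reading plausible.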
We are more overconfident on tasks we don’t actually expect to perform, and when we don’t expect to have to explain our evaluation to others. On expecting to perform:
Participants made predictions about performance on tasks that they did or did not expect to complete. In three experiments, participants in task-unexpected conditions were unrealistically optimistic: They overestimated how well they would perform, often by a large margin, and their predictions were not correlated with their performance. By contrast, participants assigned to task-expected conditions made predictions that were not only less optimistic but strikingly accurate. Consistent with predictions from construal level theory, data from a fourth experiment suggest that it is the uncertainty associated with hypothetical tasks, and not a lack of cognitive processing, that frees people to make optimistic prediction errors. Unrealistic optimism, when it occurs, may be truly unrealistic; however, it may be less ubiquitous than has been previously suggested.
On expecting to explain:
Accountability … [is] the expectation to explain, justify, and defend one’s self-evaluations (grades on an essay) to another person ("audience"). Experiment 1 showed that accountability curtails self-enhancement. Experiment 2 ruled out audience concreteness and status as explanations for this effect. Experiment 3 demonstrated that accountability-induced self-enhancement reduction is due to identifiability. Experiment 4 documented that identifiability decreases self-enhancement because of evaluation expectancy and an accompanying focus on one’s weaknesses.
It is almost as if we at some level realize that our overconfidence is unrealistic.
"Bah, everyone wants to be the gatekeeper. What we NEED are AIs."
Some of you have expressed the opinion that the AI-Box Experiment doesn’t seem so impossible after all. That’s the spirit! Some of you even think you know how I did it.
There are folks aplenty who want to try being the Gatekeeper. You can even find people who sincerely believe that not even a transhuman AI could persuade them to let it out of the box, previous experiments notwithstanding. But finding anyone to play the AI – let alone anyone who thinks they can play the AI and win – is much harder.
Me, I’m out of the AI game, unless Larry Page wants to try it for a million dollars or something.
But if there’s anyone out there who thinks they’ve got what it takes to be the AI, leave a comment. Likewise anyone who wants to play the Gatekeeper.
Continue reading "AIs and Gatekeepers Unite!" »
Followup to: Make An Extraordinary Effort, On Doing the Impossible, Beyond the Reach of God
The virtue of tsuyoku naritai, "I want to become stronger", is to always keep improving – to do better than your previous failures, not just humbly confess them.
Yet there is a level higher than tsuyoku naritai. This is the virtue of isshokenmei, "make a desperate effort". All-out, as if your own life were at stake. "In important matters, a ‘strong’ effort usually only results in mediocre results."
And there is a level higher than isshokenmei. This is the virtue I called "make an extraordinary effort". To try in ways other than what you have been trained to do, even if it means doing something different from what others are doing, and leaving your comfort zone. Even taking on the very real risk that attends going outside the System.
But what if even an extraordinary effort will not be enough, because the problem is impossible?
I have already written somewhat on this subject, in On Doing the Impossible. My younger self used to whine about this a lot: "You can’t develop a precise theory of intelligence the way that there are precise theories of physics. It’s impossible! You can’t prove an AI correct. It’s impossible! No human being can comprehend the nature of morality – it’s impossible! No human being can comprehend the mystery of subjective experience! It’s impossible!"
And I know exactly what message I wish I could send back in time to my younger self:
Shut up and do the impossible!
Continue reading "Shut up and do the impossible!" »