Tag Archives: Disagreement

Suspecting Truth-Hiders

Tyler against bets:

On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)

My translation:

Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.

Yes existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?

For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.

Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth actually is that we don’t want to hear.

If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth be hidden, from the middle of a conversation where we know lots of accuracy has been sacrificed for unknown reasons?

Bets Argue

Imagine a challenge:

You claim you strongly believe X and suggest that we should as well; what supporting arguments can you offer?

Imagine this response:

I won’t offer arguments, because the arguments I might offer now would not necessarily reveal my beliefs. Even all of the arguments I have ever expressed on the subject wouldn’t reveal my beliefs on that subject. Here’s why.

I might not believe the arguments I express, and I might know of many other arguments on the subject, both positive and negative, that I have not expressed. Arguments on other topics might be relevant for this topic, and I might have changed my mind since I expressed arguments. There are so many random and local frictions that influence which particular arguments people express on which particular subjects, and you agree I should retain enough privacy to not have to express all the arguments I know. Also, if I gave arguments now I’d probably feel more locked into that belief and be less willing to change it, and we agree that would be bad.

How therefore could you possibly be so naive as to think the arguments I might now express would reveal what I believe? And that is why I offer no supporting arguments for my claim.

Wouldn’t you feel this person was being unreasonably evasive? Wouldn’t this response suggest at least that he doesn’t in fact know of good supporting arguments for this belief? After all, even if many random factors influence what arguments you express when, and even if you may know of many more arguments than you express, still typically on average the more good supporting arguments you can offer, the more good supporting arguments you know, and the better supported your belief.

This is how I feel about folks like Tyler Cowen who say they feel little obligation to make or accept offers to bet in support of beliefs they express, or to think less of others who similarly refuse to bet on beliefs they express. (Adam Gurri links to ten posts on the subject here.)

Yes of course, due to limited options and large transaction costs most financial portfolios have only a crude relation to holder beliefs. And any one part of a portfolio can be misleading since it could be cancelled by other hidden parts. Even so, typically on average observers can reasonably infer that someone unwilling to publicly bet in support of their beliefs probably doesn’t really believe what they say as much as someone who does, and doesn’t know of as many good reasons to believe it.
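
To make that inference concrete, here is a minimal sketch (my own illustration, with made-up stakes and odds, not anything from Tyler or anyone else) of why willingness to bet tracks belief: the more probability you actually assign to a claim, the better even a simple even-odds bet on it looks to you in expectation.

    # Minimal sketch: expected profit of betting on a claim, as a function of how
    # strongly you believe it. The stake and odds are arbitrary illustrative numbers.

    def bet_expected_value(belief, stake=100.0, odds=1.0):
        """Expected profit from staking `stake` on a claim you assign probability
        `belief`, where a win pays `odds * stake` and a loss costs `stake`."""
        return belief * odds * stake - (1.0 - belief) * stake

    for belief in (0.5, 0.7, 0.9):
        print(f"belief {belief:.0%}: expected profit ${bet_expected_value(belief):+.0f}")
    # belief 50%: expected profit $+0
    # belief 70%: expected profit $+40
    # belief 90%: expected profit $+80

Real bets add transaction costs and risk aversion, which is why the inference is “probably” and “on average,” not a proof.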

It would be reasonable to point to other bets or investments and say “I’ve already made as many bets on this subject as I can handle.” It is also reasonable to say you are willing to bet if a clear verifiable claim can be worked out, but that you don’t see such a claim yet. It would further be reasonable to say that you don’t have strong beliefs on the subject, or that you aren’t interested in persuading others on it. But to just refuse to bet in general, even though you do express strong beliefs you try to persuade others to share, that does and should look bad.

Added 4July: In honor of Ashok Rao, more possible responses to the challenge:

A norm of thinking less of claims by those who offer fewer good supporting arguments is biased against people who talk slow, are shy of speaking, or have bad memory or low intelligence. Also, by discouraging false claims we’d discourage innovation, and surely we don’t want that.

Imagine Farmer Rights

Yesterday I criticized proposals by George Dvorsky and Anders Sandberg to give rights to ems by saying that random rights are bad. That is, rights limit options, which is usually bad, so those who argue for specific rights should offer specific reasons why the rights they propose are exceptional cases where limiting options helps strategically. I illustrated this principle with the example of a diner’s bill of rights.

One possible counter argument is that these proposed em rights are not random; they tend to ensure ems can keep having stuff most of us now have and like. I agree that their proposals do fit this pattern. But the issue is whether rights are random with respect to the set of cases where strategic gains come by limiting options. Do we have reasons to think that strategic benefits tend to come from giving ems the right to preserve industry era lifestyle features?

To help us think about this, I suggest we consider whether we industry era folks would benefit had farmer era folks imposed farmer rights, i.e., rights to ensure that industry era folks could keep things most farmers had and liked. For example, imagine we today had “farmer rights” to:

  1. Work in the open with fresh air and sun.
  2. See how all food is grown and prepared.
  3. Nights outside are usually quiet and dark.
  4. Quickly get to a mile-long all-nature walk.
  5. All one meets are folks one knows, or known by them.
  6. Easily take apart devices, to see materials, mechanisms.
  7. Authorities with clear answers on cosmology, morality.
  8. Severe punishment of heretics who contradict authorities.
  9. Prior generations quickly make room for new generations.
  10. Rule by a king of our ethnicity, with clear inheritance.
  11. Visible deference from nearby authority-declared inferiors.
  12. More?

Would our lives today be better or worse because of such rights?

Added: I expect to hear this response:

Farmer era folks were wrong about what lifestyles help humans flourish, while we industry era folks are right. This is why their rights would have been bad for us, but our rights would be good for ems.

What Do We Know That We Can’t Say?

I’ve been vacationing with family this week, and (re-) noticed a few things. When we played miniature golf, the winners were treated as if shown to be better skilled than previously thought, even though score differences were statistically insignificant. Same for arcade shooter contests. We also picked which Mexican place to eat at based on one person saying they had eaten there once and it was ok, even though, given how random it is who likes what when, that was unlikely to be a statistically significant difference for estimating what the rest of us would like.
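
For concreteness, here is a rough sketch of the mini-golf point (a toy simulation with made-up per-hole numbers of my choosing, not our actual scores): even when every player has identical skill, per-hole noise alone produces sizable one-round score gaps, so a single win is weak evidence of skill.

    # Toy simulation: two players of identical skill, each hole taking 2-5 strokes
    # at random. How big a score gap does pure luck produce over one 18-hole round?
    import random

    rng = random.Random(0)
    N_HOLES, TRIALS = 18, 10_000

    def round_score():
        return sum(rng.randint(2, 5) for _ in range(N_HOLES))

    gaps = sorted(abs(round_score() - round_score()) for _ in range(TRIALS))
    print("median luck-only gap:", gaps[TRIALS // 2], "strokes")
    print("90th-percentile gap:", gaps[int(TRIALS * 0.9)], "strokes")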

The general point is that we quite often collect and act on rather weak info clues. This could make good info sense. We might be slowly collecting small clues that eventually add up to big clues. Or if we know well which parameters matter the most, it can make sense to act on weak clues; over a lifetime this can add up to net good decisions. When this is what is going on, then people will tend to know many things they cannot explicitly justify. They might have seen a long history of related situations, and have slowly accumulated enough relevant clues to form useful judgments, but not be able to explicitly point to most of those weak clues which were the basis of that judgment.

Another thing I noticed on vacation is that a large fraction of my relatives age 50 or older think that they know that their lives were personally saved by medicine. They can tell of one or more specific episodes where a good doctor did the right thing, and they’d otherwise be dead. But people just can’t on average have this much evidence, since we usually find it hard to see effects of medicine on health even when we have datasets with thousands of people. (I didn’t point this out to them – these beliefs seemed among the ones they held most deeply and passionately.) So clearly this intuitive collection of weak clues stuff goes very wrong sometimes, even on topics where people suffer large personal consequences. It is not just that random errors can show up; there are topics on which our minds are systematically structured, probably on purpose, to greatly mislead us in big ways.
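
A back-of-the-envelope power calculation (my own, with made-up illustrative mortality rates) shows why: detecting even a real but modest effect of medicine takes samples in the thousands.

    # Rough sample-size calculation for a two-proportion test (normal approximation).
    # The rates are hypothetical: suppose a treatment cuts 10-year mortality from 5% to 4%.

    def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):  # two-sided 5% test, 80% power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    print(f"roughly {n_per_group(0.05, 0.04):,.0f} people per group needed")
    # roughly 6,735 people per group needed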

One of the biggest questions we face is thus: when are judgements trustworthy? When can we guess that the intuitive slow accumulation of weak clues by others or ourselves embodies sufficient evidence to be a useful guide to action? At one extreme, one could try to never act on anything less than explicitly well-founded reasoning, but this would usually go very badly; we mostly have no choice but to rely heavily on such intuition. At the other extreme, many people go their whole lives relying almost entirely on their intuitions, and they seem to mostly do okay.

In between, people often act like they rely on intuition except when good solid evidence is presented to the contrary, but they usually rely on their intuition to judge when to look for explicit evidence, and when that is solid enough. So when those intuitions fail the whole process fails.

Prediction markets seem a robust way to cut through this fog; just believe the market prices when available. But people are usually resistant to creating such market prices, probably exactly because doing so might force them to drop treasured intuitive judgements.

On this blog I often present weak clues, relevant to important topics, but by themselves not sufficient to draw strong conclusions. Usually commenters are eager to indignantly point out this fact. Each and every time. But on many topics we have little other choice; until many weak clues are systematically collected into strong clues, weak clues are what we have. And the topics where our intuitive conclusions are most likely to be systematically biased tend to be just those sorts of topics. So I’ll continue to struggle to collect whatever clues I can find there.

Best To Mix Odd, Ordinary

“The best predictor of belief in a conspiracy theory is belief in other conspiracy theories.” … Psychologists say that’s because a conspiracy theory isn’t so much a response to a single event as it is an expression of an overarching worldview. (more; HT Tyler)

Some people just like to be odd. I’ve noticed that those who tend to accept unusual conclusions in one area tend to accept unusual conclusions in other areas too. They also tend to choose odd topics on which to have opinions, and to base their odd conclusions on odd methods, assumptions, and sources. So opinions on odd topics tend to be unusually diverse, and tend to be defended with an unusually wide range of methods and assumptions.

These correlations are mostly mistakes, for the purpose of estimating truth, if they are mainly due to differing personalities. Thus relative to the typical pattern of opinion, you should guess that the truth varies less on unusual topics, and more on usual topics. You should guess that odd methods, sources, and assumptions are neglected on ordinary topics, but overused on odd topics. And you should guess that while on ordinary topics odd conclusions are neglected, on odd topics it is ordinary conclusions that are neglected.

For example, the way to establish a new method or source is to show that it usually gives the same conclusions as old methods and sources. Once established, one can take the new method or source seriously in the rare cases where it gives different conclusions.

A related point is that if you create a project or organization to pursue a risky unusual goal, as in a startup firm, you should try to be ordinary on most of your project design dimensions. By being conservative on all those other dimensions, you give your risky idea its best possible chance of success.

My recent work has been on a very unusual topic: the social implications of brain emulations. To avoid the above mentioned biases, I thus try to make ordinary assumptions, and to use ordinary methods and sources.

In Praise Of Ads

As Katja and I discussed in our podcast on ads, most people we know talk as if they hate, revile, and despise ads. They say ads are an evil destructive manipulative force that exists only because big bad firms run the world, and use ads to control us all.

Yet most such folks accept the usual argument that praises news and research for creating under-provided info which is often socially valuable. And a very similar argument applies to ads. By creating more informed consumers, ads induce producers to offer better prices and quality, which benefits other consumers.

This argument can work even if ads are not optimally designed to cram a maximal amount of relevant info into each second or square inch of ads. After all, news and research can be good overall even if most of it isn’t optimally targeted toward info density or social value. Critics note that the style of most ads differs greatly from the terse no-nonsense textbook, business memo, or government report that many see as the ideal way to efficiently communicate info. But the idea that such styles are the most effective ways to inform most people seems pretty laughable.

While ad critics often argue that ads only rarely convey useful info, academic studies of ads usually find the sort of correlations that you’d expect if ads often conveyed useful product info. For example, there tend to be more ads when ads are more believable, and more ads for new products, for changed products, and for higher quality products.

Many see ads as unwelcome persuasion, changing our beliefs and behaviors contrary to how we want these to change. But given a choice between ad-based and ad-free channels, most usually choose ad-based channels, suggesting that they consider the price and convenience savings of such channels to more than compensate for any lost time or distorted behaviors. Thus most folks mostly approve (relative to their options) of how ads change their behavior.

Many complain that ads inform consumers more about the images and identities associated with products than about intrinsic physical features. We buy identities when we buy products. But what is wrong with this if identities are in fact what consumers want from products? As Katja points out, buying identities is probably greener than buying physical objects.

So why do so many say they hate ads if most accept ad influence and ads add socially-valuable info? One plausible reason is that ads expose our hypocrisies – to admit we like ads is to admit we care a lot about the kinds of things that ads tend to focus on, like sex appeal, and we’d rather think we care more about other things.

Another plausible reason is that we resent our core identities being formed via options offered by big greedy firms who care little for the ideals we espouse. According to our still deeply-embedded forager sensibilities, identities are supposed to be formed via informal interactions between apparently equal allies who share basic values.

But if we accept that people want what they want, and just seek to get them more of that, we should praise ads. Ads inform consumers, which disciplines firms to better get consumers what they want. And if you don’t like what people want, then blame those people, not the ads. Your inability to persuade people to want what you think they should want is mostly your fault. If you can’t get people to like your product, blame them or yourself, not your competition.

Added 10a: Matt at Blunt Object offers more explanations.

Reasons To Reject

A common story hero in our society is the great innovator, opposed by villains who unthinkingly reject the hero’s proposed innovation, merely because it requires a change from the past. To avoid looking like such villains, most of us give lip service to innovation, and try not to reject proposals just because they require change.

On the other hand, our world is extremely complex, with lots of opaque moving parts. So most of us actually have little idea why most of those parts are the way they are. Thus we usually don’t know much about the effects of adopting any given proposal to change the status quo, other than that it will probably make things worse. Because of this, we need a substantial reason to endorse any such proposal; our default is rejection.

So we are stuck between a rock and a hard place – we want both to reject most proposals, and to avoid seeming to reject them just because they require change, even though we don’t specifically know why they would be bad ideas. Our usual solution: rationalization.

That is, we are in the habit of collecting reasons why things might be bad ideas. There might be inequality or manipulation, the rich might take control, it might lead to war, the environment might get polluted, mistakes might be made, regulators might be corrupted, etc. With a library of reasons to reject in hand, we can do simple pattern matching to find reasons to reject most anything. We can thus continue to pretend to be big fans of innovation, saying that unfortunately in this case there are serious problems.

I see (at least) two signs that suggest this is happening. The first sign is that my students are usually quick to name reasons why any given proposal is a bad idea, but it takes them lots of training to be able to elaborate in any detail why exactly a reason they name would make a proposal bad. For example, if they can identify anything about the proposal that would involve some people knowing secrets that others do not, they are quick to reject a proposal because of “asymmetric information.” But few are ever able to offer a remotely coherent explanation of the harm of any particular secret.

The other sign I see is when people consider the status quo as a proposal, but do not know that it actually is the status quo, they seem just as quick to find reasons why it cannot work, or is a bad idea. This is dramatically different from their eagerness to defend the status quo, when they know it is the status quo. When people don’t know that something actually works now, they assume that it can’t work.

This habit of pattern matching to find easy reasons to reject implies that would-be innovators shouldn’t try that hard to respond to objections. If you compose a solid argument to a particular objection, most people will then just move to one of their many other objections. If you offer solid arguments against 90% of the objections they could raise, they’ll just assume the other 10% holds the reason your proposal is a bad idea. Even having solid responses to all of their objections won’t get you that far, since most folks can’t be bothered to listen to them all, or even notice that you’ve covered them all.

Of course as a would be innovator, you should still listen to objections. But not so much to persuade skeptics, as to test your idea. You should honestly engage objections so that you can refine, or perhaps reject, your proposal. The main reason to listen to those with whom you disagree is: you might be wrong.

Not Science, Not Speculation

I often hear this critique of my em econ talks: “This isn’t hard science, so it is mere speculation, where anyone’s guess is just as good.”

I remember this point of view – it is the flattering story I was taught as a hard science student, that there are only two kinds of knowledge: simple informal intuition, and hard rigorous science:

Informal intuition can help you walk across a street, or manage a grocery list, but it is nearly hopeless on more abstract topics, far from immediate experience and feedback. Intuition there gives religion, mysticism, or worse. Hard science, in contrast, uses a solid scientific method, without which civilization would be impossible. On most subjects, there is little point in arguing if you can’t use hard science – the rest is just pointless speculation. Without science, we should just each use our own intuition.

The most common hard science method is deduction from well-established law, as in physics or chemistry. There are very well-established physical laws, passing millions of empirical tests without failure. Then there are well-known approximations, with solid derivations of their scope. Students of physical science spend years doing problem sets, wherein they practice drawing deductive conclusions from such laws or approximations.
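
As a trivial worked example of that kind of deduction (my own, with an arbitrary drop height), here is the classic problem-set move: take a well-established law and crank out a prediction.

    # Deduce a fall time from Newtonian kinematics, h = (1/2) g t^2, ignoring air resistance.
    from math import sqrt

    g = 9.8        # m/s^2, gravitational acceleration near Earth's surface
    height = 20.0  # m, an arbitrary illustrative drop height

    t = sqrt(2 * height / g)
    print(f"fall time for a {height:.0f} m drop: about {t:.2f} s")  # about 2.02 s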

Another standard hard science method is statistical inference. There are well-established likelihood models, well-established rules of thumb about which likelihood models work with which sorts of data, and mathematically proven ways to both draw inferences from data using likelihood models, and to check which models best match any given data. Students of statistics spend years doing problem sets wherein they practice drawing inferences from data.
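
And as a toy version of that statistical workflow (again my own illustration, on simulated data): propose candidate likelihood models, fit each by maximum likelihood, and check which best matches the data.

    # Fit two candidate likelihood models to the same data and compare their fits.
    import math
    import random

    random.seed(1)
    data = [random.expovariate(0.5) for _ in range(200)]  # simulated waiting times

    def loglik_exponential(xs):
        rate = len(xs) / sum(xs)  # maximum-likelihood estimate of the rate
        return sum(math.log(rate) - rate * x for x in xs)

    def loglik_normal(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var) for x in xs)

    print("exponential log-likelihood:", round(loglik_exponential(data), 1))
    print("normal      log-likelihood:", round(loglik_normal(data), 1))
    # The exponential model should score noticeably higher on these data.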

Since hard science students can see that they are much better at doing problem sets than the lesser mortals around them, and since they know there is no other reliable route to truth, they see that only they know anything worth knowing.

Now, experienced practitioners of most particular science and engineering disciplines actually use a great many methods not reducible to either of these two. And many of these folks are well aware of this fact. But they are still taught to see the methods they are taught as the only reliable route to truth, and to see social sciences and humanities, which use other methods, as hopelessly delusional, wolves of intuition in the sheep’s clothing of apparent expertise.

I implicitly believed this flattering story as a hard science student. But over time I learned that it is quite wrong. Humans and their civilizations have collected a great many methods that improve on simple unaided intuition, and today in many disciplines and fields of expertise the experienced and studied have far stronger capacities than the inexperienced and unstudied. And such useful methods are not remotely well summarized as formal statistical inference or deduction from well-established laws.

In economics, the discipline I know best, we often use deduction and statistical inference, and many of our models look at first glance like approximations derived from well-established fundamental results. But our well-established results have many empirical anomalies, and are often close to tautologies. We often have only weak reasons to expect many common model assumptions. Nevertheless, we know lots, much embodied in knowing when which models are how useful.

Our civilization gains much from our grand division of labor, where we specialize in learning different skills. But a cost is that it can take a lot of work to evaluate those who specialize in other fields. It just won’t do to presume that only those who use your methods know anything. Much better is to learn to become expert in another field in the same way others do; but this is usually way too expensive.

Of course, I don’t mean to claim that all specialists are actually valuable to the rest of us. There probably are many fraudulent fields, best abolished and forgotten, or at least greatly reformed. But there just isn’t a fast easy way to figure out which are those fields. You can’t usually identify a criminal just by their shifty eyes; you usually have to look at concrete evidence of crime. Similarly, you can’t convict a field of fraud based on your feeling that their methods seem shifty. You’ll have to look at the details.

Why Am I Weird?

It will not have escaped the notice of long-time readers that I have a number of unusual intellectual views and priorities. In fact, more such views than most intellectuals.

This doesn’t usually bother me, but it should. After all, different theories about my weirdness lead to very different rational responses to my opinions, by myself and by others. Consider some theories:

  1. An unusually sloppy thinker, I make more big mistakes in reasoning.
  2. Unusually insightful, I have many unusual insights.
  3. Especially good at making up reasons, I seek an excuse to show off my reasoning, and so take positions that others will ask me to justify.
  4. Feeling unfairly low status, I hope for a status reversal via bragging later that I held popular opinions when they were unpopular.
  5. Being especially proud, I’m unwilling to just accept standard views, and insist on thinking all interesting topics through for myself. This leads to many contrarian views, since it leads to many views.
  6. Being unusually risk-taking, I collect opinions with a small chance of leading me to great fame and glory.
  7. Being unusually desiring of attention, positive or negative, I say things that will make people pay attention to me.
  8. Being especially good at a particular unusual sort of reasoning, e.g., very abstract concepts, I draw conclusions that neglect other sorts.
  9. Being especially uninterested in the usual rewards given intellectuals, I pick acts more likely to gain other rewards.
  10. Having initially learned an unusual mix of skills and topics, I apply that mix to produce unusual conclusions.

I’m sure many of you can think of more such theories (which I’ll add as suggested). But, after all these years, why don’t I know? Why don’t I care more? And, those of you who are also weird, why don’t you know, or care, why?

Freakonomics On Consulting

Me in January on Too Much Consulting?:

Last night I discussed the popularity of law, finance, and management consulting with Tyler and many somewhat-libertarian-leaning others. I was surprised that most were skeptical that firms get their money’s worth from consulting, more skeptical than for law or finance. I was also surprised that most focused on explaining why kids from elite schools work at such firms, rather than on why firms pay so much for this consulting.

My explanation:

The CEO often understands what needs to be done, but does not have the resources to fight this blocking coalition. But if a prestigious outside consulting firm weighs in, that can turn the status tide.

Freakonomics Radio interviewed me about it a bit later, and they’ve just put up a podcast they say was “inspired in part” by my post. In addition to me, they talk to Keith Yost, a former consultant:

Fellow consultants and associates … [said] fifty percent of the job is nodding your head at whatever’s being said, thirty percent of it is just sort of looking good, and the other twenty percent is raising an objection but then if you meet resistance, then dropping it.

and Christopher McKenna, Oxford business historian:

They divide the roles into two parts. The first part is the one that we tend to understand the best and the one that we tend to think of in the most positive terms, and that is that they bring advice to a firm that doesn’t otherwise have it. … The second thing that they provide is legitimacy, and that’s the one that seems a little bit strange. So you’ve made a decision or you think you might know what you’d like to do about entering those markets or making a new product. And instead of just going ahead and doing it, you hire the consultants to confirm what you already thought. And those consultants come in and they say yes you’re right, or even imagine you’re having a political fight within the firm and both sides hire consultants and in effect they both produce reports, and somebody wins that fight with the help of that extra amount of knowledge from outside.

and Nick Bloom, Stanford economist:

So there are really two types of consulting. There’s operational consulting, you know, down on the factory floor, in the shop type improvements. That’s probably ninety-five percent of the industry. Most of it is done by firms you’ve never heard of. And those guys are very much like seasoned, gnarly, ex-manufacturing managers that have spent twenty years working in Ford and are real experts, and are now getting paid as consultants to hand out advice. That stuff typically has pretty big impact because you’re paying someone to give them long-earned advice. And then there’s the very small elite end, strategy consulting, about five percent. And that’s much more helping CEOs make big decisions.

Bloom did a randomized trial in India of the first type of consulting, and found that it gave great value. But on the other type, which is what I think Yost, McKenna, I, and my dinner companions were discussing, the only positive evidence the show offers is cohost Steve Levitt saying that as a consultant he sure felt he added value:

My own experience has been that even though I know nothing about an industry, if you give me a week, and you get a bunch of really smart people to explain the industry to me, and to tell me what they do, a lot of times what I’ve learned in economics, what I’ve learned in other places can actually be really helpful in changing the way that they see the world.

And how can you argue with data like that?
