Tag Archives: Disagreement

Imagine Farmer Rights

Yesterday I criticized proposals by George Dvorsky and Anders Sandberg to give rights to ems by saying that random rights are bad. That is, rights limit options, which is usually bad, so those who argue for specific rights should offer specific reasons why the rights they propose are exceptional cases where limiting options helps strategically. I illustrated this principle with the example of a diner’s bill of rights.

One possible counter argument is that these proposed em rights are not random; they tend to ensure ems can keep having stuff most of us now have and like. I agree that their proposals do fit this pattern. But the issue is whether rights are random with respect to the set of cases where strategic gains come by limiting options. Do we have reasons to think that strategic benefits tend to come from giving ems the right to preserve industry era lifestyle features?

To help us think about this, I suggest we consider whether we industry era folks would benefit had farmer era folks imposed farmer rights, i.e., rights to ensure that industry era folks could keep things most farmers had and liked. For example, imagine we today had “farmer rights” to:

  1. Work in the open with fresh air and sun.
  2. See how all food is grown and prepared.
  3. Have nights outside that are usually quiet and dark.
  4. Quickly get to a mile-long all-nature walk.
  5. Meet only folks one knows, or folks known by them.
  6. Easily take apart devices, to see their materials and mechanisms.
  7. Have authorities with clear answers on cosmology and morality.
  8. See severe punishment of heretics who contradict authorities.
  9. Have prior generations quickly make room for new generations.
  10. Be ruled by a king of our ethnicity, with clear inheritance.
  11. Get visible deference from nearby authority-declared inferiors.
  12. More?

Would our lives today be better or worse because of such rights?

Added: I expect to hear this response:

Farmer era folks were wrong about what lifestyles help humans flourish, while we industry era folks are right. This is why their rights would have been bad for us, but our rights would be good for ems.


What Do We Know That We Can’t Say?

I’ve been vacationing with family this week, and (re-)noticed a few things. When we played miniature golf, the winners were treated as if they had been shown to be more skilled than previously thought, even though the score differences were statistically insignificant. Same for arcade shooter contests. We also picked which Mexican place to eat at because one person had eaten there once and found it okay, even though, given how random it is who likes what when, that was unlikely to be a statistically significant clue to what the rest of us would like.
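
For a sense of just how weak such a clue is, here is a minimal sketch (with made-up per-hole scores, not the actual games) of a permutation test, which asks how often a score gap this large would arise by chance alone:

```python
# Permutation test on hypothetical mini-golf scores: how often does a gap
# this large appear if per-hole scores are shuffled between players at random?
import random

winner = [3, 2, 4, 3, 2, 3, 4, 2, 3]   # made-up per-hole strokes
loser  = [3, 4, 3, 4, 3, 4, 3, 3, 4]
observed_gap = sum(loser) - sum(winner)

combined = winner + loser
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(combined)
    a, b = combined[:len(winner)], combined[len(winner):]
    if abs(sum(a) - sum(b)) >= observed_gap:
        extreme += 1

print(f"p-value ~ {extreme / trials:.2f}")  # likely well above 0.05
```

A gap of a few strokes over one round is the kind of difference random shuffling produces routinely, so treating the winner as revealed to be more skilled reads far too much into the data.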

The general point is that we quite often collect and act on rather weak info clues. This could make good info sense. We might be slowly collecting small clues that eventually add up to big clues. Or if we know well which parameters matter most, it can make sense to act on weak clues; over a lifetime this can add up to net good decisions. When this is what is going on, people will tend to know many things they cannot explicitly justify. They might have seen a long history of related situations, and have slowly accumulated enough relevant clues to form useful judgments, but not be able to explicitly point to most of the weak clues that were the basis of that judgment.
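
To see how weak clues can add up, consider a toy Bayesian sketch (all numbers here are illustrative assumptions): each independent clue multiplies the odds on a hypothesis by a small likelihood ratio, so many small nudges can eventually produce a confident judgment.

```python
# Toy model of accumulating weak clues: each clue multiplies the odds by a
# small likelihood ratio (adds a little log-odds); many clues add up.
prior_odds = 1.0          # start at 50/50
likelihood_ratio = 1.1    # each weak clue favors the hypothesis 1.1 : 1

for n in (1, 10, 50, 100):
    odds = prior_odds * likelihood_ratio ** n
    prob = odds / (1 + odds)
    print(f"{n:3d} weak clues -> P(hypothesis) = {prob:.2f}")
# One clue barely moves you (~0.52); a hundred all but settle it (~1.00).
```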

Another thing I noticed on vacation is that a large fraction of my relatives age 50 or older think they know that their lives were personally saved by medicine. They can tell of one or more specific episodes where a good doctor did the right thing, without which they’d be dead. But people just can’t on average have this much evidence, since we usually find it hard to see effects of medicine on health even in datasets with thousands of people. (I didn’t point this out to them – these beliefs seemed among the ones they held most deeply and passionately.) So clearly this intuitive collection of weak clues sometimes goes very wrong, even on topics where people suffer large personal consequences. It is not just that random errors can show up; there are topics on which our minds are systematically structured, probably on purpose, to greatly mislead us in big ways.
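
A rough power calculation shows why personal experience cannot carry this much evidence. Under assumed, purely illustrative death rates, the standard two-proportion sample-size formula says detecting even a one-percentage-point mortality effect takes samples far larger than any one life supplies:

```python
# Back-of-envelope sample size to detect a small mortality effect, using the
# standard normal-approximation formula for comparing two proportions.
# All rates below are assumptions for illustration only.
import math

p1, p2 = 0.10, 0.09            # assumed death rates: less vs. more medicine
z_alpha, z_beta = 1.96, 0.84   # 5% two-sided test, 80% power
p_bar = (p1 + p2) / 2

n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)
print(f"needed per group: ~{math.ceil(n):,}")  # on the order of 13,000 people
```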

One of the biggest questions we face is thus: when are judgments trustworthy? When can we guess that the intuitive slow accumulation of weak clues, by others or by ourselves, embodies sufficient evidence to be a useful guide to action? At one extreme, one could try to never act on anything less than explicitly well-founded reasoning, but this would usually go very badly; we mostly have no choice but to rely heavily on such intuition. At the other extreme, many people go their whole lives relying almost entirely on their intuitions, and they seem to mostly do okay.

In between, people often act like they rely on intuition except when good solid evidence is presented to the contrary, but they usually rely on their intuition to judge when to look for explicit evidence, and when that evidence is solid enough. So when those intuitions fail, the whole process fails.

Prediction markets seem a robust way to cut through this fog: just believe the market prices when available. But people usually resist creating such market prices, probably exactly because doing so might force them to drop treasured intuitive judgments.
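
For concreteness, here is a minimal sketch of one standard way to create such prices: an automated market maker using the logarithmic market scoring rule (LMSR). The post names no mechanism, so take this as one illustrative option, with an arbitrary liquidity parameter:

```python
# Minimal LMSR market maker: prices sum to one and move with trades,
# so the current price of an outcome can be read as a probability estimate.
import math

class LMSR:
    def __init__(self, n_outcomes: int, b: float = 100.0):
        self.q = [0.0] * n_outcomes   # net shares sold per outcome
        self.b = b                    # liquidity parameter (assumed value)

    def cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i: int) -> float:
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / z

    def buy(self, i: int, shares: float) -> float:
        new_q = list(self.q)
        new_q[i] += shares
        payment = self.cost(new_q) - self.cost(self.q)  # trader pays this
        self.q = new_q
        return payment

market = LMSR(2)
print(market.price(0))             # 0.5 before any trades
market.buy(0, 50)                  # a trader betting on outcome 0
print(round(market.price(0), 2))   # price rises to ~0.62
```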

On this blog I often present weak clues, relevant to important topics, but by themselves not sufficient to draw strong conclusions. Usually commenters are eager to indignantly point out this fact. Each and every time. But on many topics we have little other choice; until many weak clues are systematically collected into strong clues, weak clues are what we have. And the topics where our intuitive conclusions are most likely to be systematically biased tend to be exactly these sorts of topics. So I’ll continue to struggle to collect whatever clues I can find there.


Best To Mix Odd, Ordinary

“The best predictor of belief in a conspiracy theory is belief in other conspiracy theories.” … Psychologists say that’s because a conspiracy theory isn’t so much a response to a single event as it is an expression of an overarching worldview. (more; HT Tyler)

Some people just like to be odd. I’ve noticed that those who tend to accept unusual conclusions in one area tend to accept unusual conclusions in other areas too. They also tend to choose odd topics on which to have opinions, and to base their odd conclusions on odd methods, assumptions, and sources. So opinions on odd topics tend to be unusually diverse, and tend to be defended with an unusually wide range of methods and assumptions.

For the purpose of estimating truth, these correlations are mostly mistakes if they are mainly due to differing personalities. Thus relative to the typical pattern of opinion, you should guess that the truth varies less on unusual topics, and more on usual topics. You should guess that odd methods, sources, and assumptions are neglected on ordinary topics, but overused on odd topics. And you should guess that while on ordinary topics odd conclusions are neglected, on odd topics it is ordinary conclusions that are neglected.

For example, the way to establish a new method or source is to show that it usually gives the same conclusions as old methods and sources. Once it is established, one can take it seriously in the rare cases where new and old give different conclusions.

A related point is that if you create a project or organization to pursue a risky unusual goal, as in a startup firm, you should try to be ordinary on most of your project design dimensions. By being conservative on all those other dimensions, you give your risky idea its best possible chance of success.

My recent work has been on a very unusual topic: the social implications of brain emulations. To avoid the above mentioned biases, I thus try to make ordinary assumptions, and to use ordinary methods and sources.


In Praise Of Ads

As Katja and I discussed in our podcast on ads, most people we know talk as if they hate, revile, and despise ads. They say ads are an evil destructive manipulative force that exists only because big bad firms run the world, and use ads to control us all.

Yet most such folks accept the usual argument that praises news and research for creating under-provided info which is often socially valuable. And a very similar argument applies to ads. By creating more informed consumers, ads induce producers to offer better prices and quality, which benefits other consumers.

This argument can work even if ads are not optimally designed to cram a maximal amount of relevant info into each second or square inch. After all, news and research can be good overall even if most of it isn’t optimally targeted toward info density or social value. Critics note that the style of most ads differs greatly from the terse no-nonsense textbook, business memo, or government report that many see as the ideal way to efficiently communicate info. But the idea that such styles are the most effective ways to inform most people seems pretty laughable.

While ad critics often argue that ads only rarely convey useful info, academic studies of ads usually find the sort of correlations that you’d expect if ads often conveyed useful product info. For example, there tend to be more ads when ads are more believable, and more ads for new products, for changed products, and for higher quality products.

Many see ads as unwelcome persuasion, changing our beliefs and behaviors contrary to how we want these to change. But given a choice between ad-based and ad-free channels, most usually choose ad-based channels, suggesting that they consider the price and convenience savings of such channels to more than compensate for any lost time or distorted behaviors. Thus most folks mostly approve (relative to their options) of how ads change their behavior.

Many complain that ads inform consumers more about the images and identities associated with products than about intrinsic physical features. We buy identities when we buy products. But what is wrong with this if identities are in fact what consumers want from products? As Katja points out, buying identities is probably greener than buying physical objects.

So why do so many say they hate ads if most accept ad influence and ads add socially-valuable info? One plausible reason is that ads expose our hypocrisies – to admit we like ads is to admit we care a lot about the kinds of things that ads tend to focus on, like sex appeal, and we’d rather think we care more about other things.

Another plausible reason is that we resent our core identities being formed via options offered by big greedy firms who care little for the ideals we espouse. According to our still deeply-embedded forager sensibilities, identities are supposed to be formed via informal interactions between apparently equal allies who share basic values.

But if we accept that people want what they want, and just seek to get them more of that, we should praise ads. Ads inform consumers, which disciplines firms to better get consumers what they want. And if you don’t like what people want, then blame those people, not the ads. Your inability to persuade people to want what you think they should want is mostly your fault. If you can’t get people to like your product, blame them or yourself, not your competition.

Added 10a: Matt at Blunt Object offers more explanations.


Reasons To Reject

A common story hero in our society is the great innovator, opposed by villains who unthinkingly reject the hero’s proposed innovation, merely because it requires a change from the past. To avoid looking like such villains, most of us give lip service to innovation, and try not to reject proposals just because they require change.

On the other hand, our world is extremely complex, with lots of opaque moving parts. So most of us actually have little idea why most of those parts are the way they are. Thus we usually don’t know much about the effects of adopting any given proposal to change the status quo, other than that it will probably make things worse. Because of this, we need a substantial reason to endorse any such proposal; our default is rejection.

So we are stuck between a rock and a hard place – we want both to reject most proposals, and to avoid seeming to reject them just because they require change, even though we don’t specifically know why they would be bad ideas. Our usual solution: rationalization.

That is, we are in the habit of collecting reasons why things might be bad ideas. There might be inequality or manipulation, the rich might take control, it might lead to war, the environment might get polluted, mistakes might be made, regulators might be corrupted, etc. With a library of reasons to reject in hand, we can do simple pattern matching to find reasons to reject most anything. We can thus continue to pretend to be big fans of innovation, saying that unfortunately in this case there are serious problems.

I see (at least) two signs that suggest this is happening. The first sign is that my students are usually quick to name reasons why any given proposal is a bad idea, but it takes them lots of training to be able to elaborate in any detail why exactly a reason they name would make a proposal bad. For example, if they can identify anything about the proposal that would involve some people knowing secrets that others do not, they are quick to reject a proposal because of “asymmetric information.” But few are ever able to offer a remotely coherent explanation of the harm of any particular secret.

The other sign I see is that when people consider the status quo as a proposal, but do not know that it actually is the status quo, they seem just as quick to find reasons why it cannot work, or is a bad idea. This is dramatically different from their eagerness to defend the status quo, when they know it is the status quo. When people don’t know that something actually works now, they assume that it can’t work.

This habit of pattern matching to find easy reasons to reject implies that would-be innovators shouldn’t try that hard to respond to objections. If you compose a solid argument to a particular objection, most people will then just move to one of their many other objections. If you offer solid arguments against 90% of the objections they could raise, they’ll just assume the other 10% holds the reason your proposal is a bad idea. Even having solid responses to all of their objections won’t get you that far, since most folks can’t be bothered to listen to them all, or even notice that you’ve covered them all.

Of course as a would be innovator, you should still listen to objections. But not so much to persuade skeptics, as to test your idea. You should honestly engage objections so that you can refine, or perhaps reject, your proposal. The main reason to listen to those with whom you disagree is: you might be wrong.


Not Science, Not Speculation

I often hear this critique of my em econ talks: “This isn’t hard science, so it is mere speculation, where anyone’s guess is just as good.”

I remember this point of view – it is the flattering story I was taught as a hard science student, that there are only two kinds of knowledge: simple informal intuition, and hard rigorous science:

Informal intuition can help you walk across a street, or manage a grocery list, but it is nearly hopeless on more abstract topics, far from immediate experience and feedback. Intuition there gives religion, mysticism, or worse. Hard science, in contrast, uses a solid scientific method, without which civilization would be impossible. On most subjects, there is little point in arguing if you can’t use hard science – the rest is just pointless speculation. Without science, we should just each use our own intuition.

The most common hard science method is deduction from well-established law, as in physics or chemistry. There are very well-established physical laws, passing millions of empirical tests without failure. Then there are well-known approximations, with solid derivations of their scope. Students of physical science spend years doing problem sets, wherein they practice drawing deductive conclusions from such laws or approximations.

Another standard hard science method is statistical inference. There are well-established likelihood models, well-established rules of thumb about which likelihood models work with which sorts of data, and mathematically proven ways both to draw inferences from data using likelihood models, and to check which models best match any given data. Students of statistics spend years doing problem sets wherein they practice drawing inferences from data.
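
As a tiny illustration of that method, here is a likelihood fit on made-up count data: the Poisson rate is estimated by maximum likelihood (which for a Poisson model is just the sample mean), and rival parameter values can be compared by their log-likelihoods.

```python
# Maximum-likelihood inference for a Poisson model on made-up count data.
import math

data = [2, 3, 1, 4, 2, 3, 2, 5, 1, 3]

def poisson_loglik(rate, xs):
    # log P(xs | rate) under independent Poisson observations
    return sum(x * math.log(rate) - rate - math.lgamma(x + 1) for x in xs)

rate_mle = sum(data) / len(data)   # Poisson MLE = sample mean
print(f"MLE rate = {rate_mle:.2f}")
print(f"log-likelihood at MLE    = {poisson_loglik(rate_mle, data):.2f}")
print(f"log-likelihood at rate 5 = {poisson_loglik(5.0, data):.2f}")  # worse
```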

Since hard science students can see that they are much better at doing problem sets than the lesser mortals around them, and since they know there is no other reliable route to truth, they see that only they know anything worth knowing.

Now, experienced practitioners of most particular science and engineering disciplines actually use a great many methods not reducible to either of these two. And many of these folks are well aware of this fact. But they are still taught to see the methods they are taught as the only reliable route to truth, and to see the social sciences and humanities, which use other methods, as hopelessly delusional, wolves of intuition in sheep’s clothing of apparent expertise.

I implicitly believed this flattering story as a hard science student. But over time I learned that it is quite wrong. Humans and their civilizations have collected a great many methods that improve on simple unaided intuition, and today in many disciplines and fields of expertise the experienced and studied have far stronger capacities than the inexperienced and unstudied. And these useful methods are not remotely well summarized as formal statistical inference or deduction from well-established laws.

In economics, the discipline I know best, we often use deduction and statistical inference, and many of our models look at first glance like approximations derived from well-established fundamental results. But our well-established results have many empirical anomalies, and are often close to tautologies. We often have only weak reasons to expect many common model assumptions. Nevertheless, we know lots, much embodied in knowing when which models are how useful.

Our civilization gains much from our grand division of labor, where we specialize in learning different skills. But a cost is that it can take a lot of work to evaluate those who specialize in other fields. It just won’t do to presume that only those who use your methods know anything. Much better is to learn to become expert in another field in the same way others do; but this is usually way too expensive.

Of course, I don’t mean to claim that all specialists are actually valuable to the rest of us. There probably are many fraudulent fields, best abolished and forgotten, or at least greatly reformed. But there just isn’t a fast easy way to figure out which fields those are. You can’t usually identify a criminal just by their shifty eyes; you usually have to look at concrete evidence of crime. Similarly, you can’t convict a field of fraud based on your feeling that its methods seem shifty. You’ll have to look at the details.


Why Am I Weird?

It will not have escaped the notice of long-time readers that I have a number of unusual intellectual views and priorities. In fact, more such views than most intellectuals.

This doesn’t usually bother me, but it should. After all, different theories about my weirdness lead to very different rational responses to my opinions, by myself and by others. Consider some theories:

  1. An unusually sloppy thinker, I make more big mistakes in reasoning.
  2. Unusually insightful, I have many unusual insights.
  3. Especially good at making up reasons, I seek an excuse to show off my reasoning, and so take positions that others will ask me to justify.
  4. Feeling unfairly low status, I hope for a status reversal via bragging later that I held popular opinions when they were unpopular.
  5. Being especially proud, I’m unwilling to just accept standard views, and insist on thinking all interesting topics through for myself. This leads to many contrarian views, since it leads to many views.
  6. Being unusually risk-taking, I collect opinions with a small chance of leading me to great fame and glory.
  7. Being unusually desiring of attention, positive or negative, I say things that will make people pay attention to me.
  8. Being especially good at a particular unusual sort of reasoning, e.g., very abstract concepts, I draw conclusions that neglect other sorts.
  9. Being especially uninterested in the usual rewards given intellectuals, I pick acts more likely to gain other rewards.
  10. Having initially learned an unusual mix of skills and topics, I apply that mix to produce unusual conclusions.

I’m sure many of you can think of more such theories (which I’ll add as suggested). But, after all these years, why don’t I know? Why don’t I care more? And, those of you who are also weird, why don’t you know, or care, why?


Freakonomics On Consulting

Me in January on Too Much Consulting?:

Last night I discussed the popularity of law, finance, and management consulting with Tyler and many somewhat-libertarian-leaning others. I was surprised that most were skeptical that firms get their money’s worth from consulting, more skeptical than for law or finance. I was also surprised that most focused on explaining why kids from elite schools work at such firms, rather than on why firms pay so much for this consulting.

My explanation:

The CEO often understands what needs to be done, but does not have the resources to fight this blocking coalition. But if a prestigious outside consulting firm weighs in, that can turn the status tide.

Freakonomics Radio interviewed me about it a bit later, and they’ve just put up a podcast they say was “inspired in part” by my post. In addition to me, they talk to Keith Yost, a former consultant:

Fellow consultants and associates … [said] fifty percent of the job is nodding your head at whatever’s being said, thirty percent of it is just sort of looking good, and the other twenty percent is raising an objection but then if you meet resistance, then dropping it.

and Christopher McKenna, Oxford business historian:

They divide the roles into two parts. The first part is the one that we tend to understand the best and the one that we tend to think of in the most positive terms, and that is that they bring advice to a firm that doesn’t otherwise have it. … The second thing that they provide is legitimacy, and that’s the one that seems a little bit strange. So you’ve made a decision or you think you might know what you’d like to do about entering those markets or making a new product. And instead of just going ahead and doing it, you hire the consultants to confirm what you already thought. And those consultants come in and they say yes you’re right, or even imagine you’re having a political fight within the firm and both sides hire consultants and in effect they both produce reports, and somebody wins that fight with the help of that extra amount of knowledge from outside.

and Nick Bloom, Stanford economist:

So there are really two types of consulting. There’s operational consulting, you know, down on the factory floor, in the shop type improvements. That’s probably ninety-five percent of the industry. Most of it is done by firms you’ve never heard of. And those guys are very much like seasoned, gnarly, ex-manufacturing managers that have spent twenty years working in Ford and are real experts, and are now getting paid as consultants to hand out advice. That stuff typically has pretty big impact because you’re paying someone to give them long-earned advice. And then there’s the very small elite end, strategy consulting, about five percent. And that’s much more helping CEOs make big decisions.

Bloom did a randomized trial in India of the first type of consulting, and found that it gave great value. But on the other type, which is what I think Yost, McKenna, I, and my dinner companions were discussing, the only positive evidence the show offers is cohost Steve Levitt saying that as a consultant he sure felt he added value:

My own experience has been that even though I know nothing about an industry, if you give me a week, and you get a bunch of really smart people to explain the industry to me, and to tell me what they do, a lot of times what I’ve learned in economics, what I’ve learned in other places can actually be really helpful in changing the way that they see the world.

And how can you argue with data like that?


Why Not Pre-Books?

I’m planning to write a book, a book I want to both be engaging to a wide audience, and to adequately defend some complex non-obvious intellectual claims. It feels quite daunting to write with both of these goals in mind at once. So I’m thinking of achieving these two goals in two steps. First I’d write a pre-book, which states my main claims and arguments directly and clearly, using expert language, for an expert audience. I’d then circulate that pre-book privately among experts and useful thinkers of various sorts, seeking criticism of my arguments. Then using their feedback, I’d revise my claims and arguments, and write an engaging accessible book that can be circulated widely.

While this strategy seems to make sense, I rarely hear of anyone doing it. Why? Some possible explanations:

  1. Lots of writers do this; they just don’t let it be known, as that makes them seem unconfident.
  2. Most writers think they know what experts will think about each opinion they will express, and see little value in getting expert feedback on the package of opinions they will express.
  3. A pre-book nearly doubles a writer’s effort, and few writers of accessible books are willing to do this just to get a more intellectually defensible argument.
  4. Far fewer experts are willing to comment on a private pre-book than are willing to publicly criticize a published book. The main way to get feedback is to publish things.
  5. The readers of the pre-book will be offended that their feedback doesn’t much change the writer’s opinions.
  6. If the pre-book is circulated too widely, that will cut too far into the book sales.
  7. Critics with access to the pre-book might embarrass the author by pointing out the many changes of opinion in the book.
  8. Good writers don’t find it very hard to simultaneously write both defensibly and accessibly.
  9. Writers choose a book concept based on what they think will sell. Getting expert feedback on a pre-book might change author opinions too much, making it harder to sincerely write the initial book concept.

Info Ideology

What is a political “ideology”? You might think your ideology is your set of core pivotal beliefs, the few beliefs that most influence your many other political beliefs. For example:

Political ideologies have two dimensions:
Goals: how society should work
Methods: the most appropriate ways to achieve the ideal arrangement.
… Typically, each ideology contains certain ideas on what it considers to be the best form of government (e.g. democracy, theocracy, caliphate etc.), and the best economic system (e.g. capitalism, socialism, etc.). … Ideologies also identify themselves by their position on the political spectrum (such as the left, the center or the right), though this is very often controversial. (more)

But in fact, political ideologies seem more to be the beliefs that most consistently divide us:

For the most part, congressional voting is uni-dimensional, with most of the variation in voting patterns explained by placement along the liberal-conservative first dimension. … since the 1970s, party delegations in Congress have become ideologically homogeneous and [more] distant from one another (a phenomenon known as “polarization.”) … [These] scores are also used by popular media outlets … as a measure of the political ideology of political institutions and elected officials or candidates. … [These] procedures … have also been applied to a number of other legislative bodies besides the United States Congress. These include the United Nations General Assembly, the European Parliament, National Assemblies in Latin America, and the French Fourth Republic. … Most of these analyses produce the finding that roll call voting is organized by only few dimensions (usually two): “These findings suggest that the need to form parliamentary majorities limits dimensionality.” (more)

It is a remarkable fact that a single dimension so well summarizes political opinions, especially given the range of topics relevant to politics. This, however, is not plausibly explained by saying that we mainly disagree about one core key belief, such as how much redistribution is fair. It instead seems to reflect how political coalitions form – groups tend to form alliances more with closer groups, against more distant groups, until two main alliances form, divided by their one strongest division, whatever that might be.
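
This one-dimension finding is easy to illustrate in miniature. The sketch below (entirely synthetic data, not real roll calls) generates votes from a single latent left-right score per legislator; the first principal component of the resulting vote matrix then explains far more variance than any other, mirroring the scores cited above.

```python
# Synthetic roll-call votes driven by one latent ideology dimension; check
# how much vote variance the first principal component explains.
import numpy as np

rng = np.random.default_rng(0)
n_legislators, n_bills = 100, 400

ideology = rng.normal(size=n_legislators)      # one latent dimension
cutpoints = rng.normal(size=n_bills)           # each bill splits that axis
noise = rng.normal(scale=0.5, size=(n_legislators, n_bills))
votes = (ideology[:, None] - cutpoints[None, :] + noise > 0).astype(float)

centered = votes - votes.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()
print(f"first dimension explains  {explained[0]:.0%} of vote variance")
print(f"second dimension explains {explained[1]:.0%}")  # far smaller
```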

To the extent that the main political dimensions are associated with policies, they are mostly associated with lots of particular policies, instead of a few key principles. And this makes sense given that most voters seem incapable of comprehending and reliably applying most proposed political principles.

But if there really were sensible pivotal principles, and if the relevant political population could understand and apply them, then it would make sense to focus our political arguments on them. By aggregating info on a few key principles, we would more efficiently aggregate info on lots of specific policies.

So do sensible and pivotal political principles exist? To me, principles like maximize liberty or minimize inequality seem pivotal, but not very sensible. I’m more fond of the principle of economic efficiency, but it is pretty hard for ordinary voters to see what more specific policies this principle implies.

To me, the most sensible pivotal principles are at the meta level – they are about how exactly we should aggregate info on the efficiency, and other consequences, of policies. For example, I think decision markets can go a long way toward giving us better info on the effects of policies. I also think we should do a lot more randomized policy experiments. And I support more and better cost-benefit analyses, though it is admittedly hard for ordinary voters to evaluate their objectivity.

Now these positions might be wrong, but whatever are the right answers, the question of how to best aggregate info on policy effects seems a pivotal core issue, with strong implications for many specific policies. Amid audiences that can understand them, these are the core issues about which we should argue. Info ideologies would be the best ideologies.
