When Disciplines Disagree

Our new book, The Elephant in the Brain, can be seen as taking one side in a disagreement between disciplines. On one side are psychologists (among others) who say of course people try to spin their motives as being higher than they are, especially in public forums. People on this side find our basic book thesis, and our many specific examples, so plausible that they fear our book may be too derivative and unoriginal.

On the other side, however, are most experts in concrete policy analysis. They spend their time studying ways that schools could help people to learn more material, hospitals could help people get healthier, charities could better assist people in need, and so on. They thus implicitly accept the usual claims people make about what they are trying to achieve via schools, hospitals, charities, etc. And so the practice of policy experts disagrees a lot with our claims that people actually care more about other ends, and that this is why most people show so little interest in reforms proposed by policy experts. (The world shows great interest in new kinds of physical devices and software, but far less interest in most proposed social reforms.)

My first book The Age of Em can also be seen as expressing disagreement between disciplines. In that book I try to straightforwardly apply standard economics to the scenario where brain emulations are the first kind of AI to displace nearly all human workers. While the assumption of brain-emulation-based AI seems completely standard and reasonable among large communities of futurists and technologists, it is seen as radical and doubtful in many other intellectual communities (including economics). And many in disciplines outside of economics are quite skeptical that economists know much of anything that can generalize outside of our particular social world.

Now if you are going to make claims with which whole disciplines of experts disagree, you should probably feel most comfortable doing so when you have at least a whole discipline supporting you. Then it isn’t just you the crazy outlier against a world of experts. Even so, this sort of situation is problematic, in part because disagreements usually don’t become debates. A book on one side of a disagreement between disciplines is usually ignored by the disciplines who disagree. And the disciplines that agree may also ignore it, if the result seems too obvious to them to be worth discussing within their discipline.

This sort of situation seems to me one of the worse failings of our intellectual world. We fail to generate a consistent consensus across the widest scope of topics. Smaller communities of experts often generate a temporary consistent consensus within each community, but these communities often disagree a lot at larger scopes. And then they mostly just ignore each other. Apparently experts and their patrons have little incentive to debate those from other disciplines who disagree.

When two disciplines disagree, you might think they would both turn especially to the people who have become experts in both disciplines. But in fact those people are usually ignored relative to the people who have the highest status within each discipline. If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics. But of course we don’t, and there’s little interest in moving in that direction.

  • 1)

    If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics.

    Has this been demonstrated? Say you have two separate prediction markets, available to folks from two distinct places. Will the two markets converge on the same likelihoods? (I’ll note that British betting markets provide a significantly higher likelihood of Trump being impeached than prediction markets.)

    2) On ems. One commenter on Amazon posted in a lengthy review that you failed to consider the use of virtual reality as punishment. (Punishment in virtual reality is the main theme of Black Mirror this season.) Why won’t cyber-hells become a prevalent punishment in the age of em? The prospect might be intimidating enough, don’t you think, to enforce a singleton regime?

    • It is arbitrage that produces consistency, which results from people who can trade in both markets.
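      A toy sketch of that mechanism (hypothetical price-impact model; the starting prices, impact size, and tolerance are all made-up numbers):

```python
def arbitrage(p1, p2, impact=0.01, eps=0.005):
    # An arbitrageur buys "yes" where it is cheap and sells it where it
    # is dear; each round of trades nudges the two prices together.
    while abs(p1 - p2) > eps:
        if p1 > p2:
            p1 -= impact  # selling "yes" in market 1 pushes its price down
            p2 += impact  # buying "yes" in market 2 pushes its price up
        else:
            p1 += impact
            p2 -= impact
    return p1, p2

# Two markets price the same event at 60% and 40%:
p1, p2 = arbitrage(0.6, 0.4)
```

      Once trading across the two markets stops being possible, nothing in this mechanism forces the prices to agree.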

      It has long been possible to torture humans at low cost, and I don’t see how virtual reality changes the situation substantially.

      • It is arbitrage that produces consistency, which results from people who can trade in both markets.

        Misses my point. If, without trading between markets, two otherwise identical markets produce sufficiently discrepant results (despite both having sufficiently many bettors), then there is no good reason to take either estimate, or their average, as having a privileged status. They’re each clearly wrong, and there’s no point in averaging them if you have no idea what makes them wrong.

        This is basic convergent validation of any measure.

        As to whether virtual reality changes the reality of torture. Perhaps it’s unclear whether a qualitatively higher level of torture (practically eternal with the ability to run an emulation at high speed) changes anything. But torture is much easier if it can be performed without any interaction with the victim. (See Collins’s recent book on violence.)

    • davidmanheim

      > A reason, if a rather abstract one, not to expect convergence is that “the probability that (e)” where e is an event (like Trump getting impeached) is not any kind of objective fact about the world.

      No. Aumann’s agreement theorem (given subjective Bayesian approaches to probability) shows that people should converge on their probability estimates of future events.

      • Whether they “should” converge is not exactly relevant – if they don’t in fact. (Why aren’t British bettors bothered by the different odds on American prediction markets?)

        But, yes, what I’m saying is a sort of challenge to Bayesianism. Probability estimates should converge only if there is in fact a unique probability attached to the event in question. This is a condition that (I claim) is only sometimes approximated, when we say there is risk rather than when we call it uncertainty. (I am rejecting Bayesian probability when there is no coherent objective probability involved. See “Epistemological implications of a reduction of theoretical implausibility to cognitive dissonance” – http://juridicalcoherence.blogspot.com/2017/07/272-epistemological-implications-of.html )

        An example. Let’s say we have a prediction market on the result of the roll of a die. Will “1” come up? However, no one is told, and no one can find out, how many sides the die has. To avoid problems, let’s say the die toss is a simulation of randomness, and the die might have any number of sides, from 1 to a trillion.

        We set up two distinct and separate prediction markets for this event. Would the two markets converge? No reason they should. (I’d guess that random events in the betting history would determine the end result.) With complete uncertainty there is no convergence. With large uncertainty there is little convergence.
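        A scaled-down sketch of this setup (hypothetical: 1,000 possible sides instead of a trillion, 50 bettors per market, and the “market price” is crudely modeled as the average of the bettors’ implied probabilities):

```python
import random

def market_estimate(n_traders, seed):
    # Each trader privately guesses how many sides the die has (uniform
    # over 1..1000) and bets as if that guess were right; the market
    # price is taken to be the average implied chance of rolling a "1".
    rng = random.Random(seed)
    probs = [1 / rng.randint(1, 1000) for _ in range(n_traders)]
    return sum(probs) / n_traders

# Two separate markets on the same event, differing only in which
# bettors happened to show up:
m1 = market_estimate(50, seed=1)
m2 = market_estimate(50, seed=2)
```

        Under this much uncertainty the two prices generally come out different, and nothing internal to either market pushes them toward each other.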

      • davidmanheim

        The key point here is that “not any kind of objective fact about the world” isn’t a coherent category for deciding whether probability estimates will or will not converge. Your objection applies just as easily to “What was the total number of people who visited Canterbury, defined as entering the boundary between midnight and midnight, on July 7, 1832.”

        That’s clearly an objective fact, it’s just uncertain. And obviously actual humans will not converge on an answer, but Aumann agreement shows very clearly that rational Bayesians must do so.

  • Peter Watt

    Your first two paragraphs strike a particularly strong chord with me. I spend much of my time lecturing to public sector workers on policy. Usually, on short courses, I need to maintain the pretense that course members’ overriding concern in life is progress on the policy objectives we are discussing. If not, the whole course becomes about whether I am crazy or they are crazy. Nevertheless I do usually manage to get course members to discuss some things like “on-the-job leisure”.

    I see academics in disciplines, including my own (economics), as being like stall-holders at a craft fair, setting out their wares and looking for customers. Some of the stalls seem to be purveying complete lunacy. However, as long as there are enough in the area to approve of each other’s papers, that does not seem to prevent them from doing a good trade. And economics is not above such criticism. Live-and-let-live seems to be the route to being a happy academic, although the peace can be disturbed by a group of non-economists whose principal specialism is “why economics is wrong”. (To them: “Get your own subject, why don’t you”.)

    It would be refreshing if inter-disciplinary differences could be tested in something like a tennis tournament (one-handed versus two-handed backhand, etc.). The publication tournament is pretty strange. Does new technology mean that truth-verifying merchants could become viable?

    • Yes, policy people pretend that all they care about is good policy in the usual sense. Yes, people who shop at the academic stalls don’t seem to care much if other stalls offer contradictory advice. Apparently they will present whatever advice they buy to other people, who usually won’t be aware of those contradictory stalls.

      • Peter Watt

        I’m a big fan of your work – very gratified to receive a response. Thank you.

    • Ryan Reynolds


      I think the live and let live analogy is the best one here. Within the theme of live and let live as well – even if compelling evidence was produced in an academic paper which proved one school of thought was wrong, I don’t think this would change anyone’s incentives. If you think of a school of thought as a business, ideas can find audiences across long distances and across time. Even the wildest conspiracy theories still find audiences, however small. And proof can always be contested, ignored, or undermined in different ways.

      Also – big professional services firms can do the same thing, for the same set of reasons. One client wants advice from an urban planner/bureaucrat, another client wants advice from an economist/financial advisor, and they can offer conflicting advice. Crazy but true.

      • Peter Watt

        Thank you for your well-thought-out comment on my comment.

  • Pingback: Rational Feed – deluks917

  • davidmanheim

    I will argue that this is in part based on a different dispute, not a lack of consensus. Policy people in fact believe that there is a normative good in many of the stated goals, and (clumsily) attempt to achieve those goals even at the cost of the unstated hidden motives. You point out why this won’t work, but that doesn’t eliminate the fundamental disagreement about whether we should attempt to build policies that help people achieve their hidden motive / revealed preference goals, or their stated but ignored goals.

    I’ll go further, and veer into the deeply murky and frequently useless waters of philosophy. If we care about the conscious-observer portion of people, we *should* aim to achieve the stated goals, even though they are post-hoc rationalizations, despite the fact that the people have other “true” motivations. That’s because we don’t necessarily care that the non-conscious portions of people’s brains have different goals. If, on the other hand, we think of people as consistent/coherent agents, we want to help them fulfill their unstated true goals.

    • I’d say policy analysts aren’t today cleverly trying to give people what they say they want while taking into account that they really want other things. They are just blindly assuming that people want what they say they want. Thus there is in fact a real disagreement.

      • davidmanheim

        I certainly agree with your points about the failures to achieve synthesis across disciplines, but I think the idea that there is a real difference and failure is somewhat of a weak-man argument.

        For example, it is clearly standard policy analysis practice to pick revealed preference over stated values for policy design – and that can implicitly account for some hidden motives even if analysts don’t clearly notice or mention them. Of course, I agree it would be valuable to call the hidden motives to their attention more explicitly (as you have been doing).

        HOWEVER, the fact that you don’t see many policy analysts making the points publicly isn’t necessarily because they don’t appreciate them, but can be (and I would say at least partially is) because they can’t state the fact that they are targeting hidden motives without compromising their ability to publicly advocate for a policy. And I make that claim based on the fact that I know political scientists and psychologists working in policy who have said things to that effect about particular policies. But again, I’d agree that clarifying the hidden motives and the effects on policy decision making further is valuable, even if it’s not discussed publicly.

  • Roger Williams

    Isn’t the last paragraph an example of the very phenomenon you are describing? Apologies if that is a point you were making that was meant to be subtle, and I am just explicitly stating something already expressed sufficiently for others to intuit.

    • I’m not following you.

      • Fleshy506

        I’m guessing Roger meant that expecting different communities of experts to strive to resolve inconsistencies with each other (as part of optimizing for discovering truth) is like expecting schools to optimize for education, hospitals to optimize for healing people, etc., as opposed to optimizing for the unstated goals you talk about.

        So, ironically, if most practitioners of some given academic discipline are disinclined to take hidden motives seriously, despite the fact that hidden motives are taken seriously in other disciplines, that would itself constitute evidence that those very people are driven by hidden motives. And that seems to be the subtext of your post, now that I think about it.

        Maybe it’s good to make that point explicitly, because leaving it as subtext kind of made your post read to me like an attempt by you to claim higher status for yourself and your field relative to other academic fields. (Were you conscious of that? I honestly can’t tell. Man, this shit is insidious!)

      • Roger Williams

        That is said far better than my poor attempt to make that point. Thank you!

      • Yes, part of the reason disciplines don’t work out their disagreements is that the function of academia is different from what academics usually say it is.

  • Pingback: *Elephant in the Brain* — what is really going on in this book? – The Snarky Report

  • Michael Goldstein

    Got a masters degree in public policy at the Kennedy School of Government, 20 years ago. (Haven’t been impressed with most ed policy analysis then or since).

    Question – what are implications of Elephant for such academic programs? Banish? Change somehow?