Real Rationality

Bayesian probability is a great model of rationality that gets lots of important things right, but there are two ways in which its simple version, the one that comes most easily to mind, is extremely misleading.

One way is that it is too easy to assume that all our thoughts are conscious – in fact we are aware of only a tiny fraction of what goes on in our minds, perhaps only one part in a thousand. We have to deal with not only “running on error-prone hardware”, but worse, relying on purposely misleading inputs. Our subconscious often makes coordinated efforts to mislead us on particular topics.

But at least many folks are aware of and try to deal with this; for example, I’ve seen a lot of good related posts on this at Less Wrong lately. There is, however, an even bigger way in which the simple Bayesian model is extremely misleading, and I’ve seen no discussion of it at Less Wrong. We may see one part in a thousand of our minds, but that fraction pales by comparison to the fact that we are each only one part in seven billion of living humanity.

Taking this fact seriously requires even bigger changes to how we think about rationality. OK, we don’t need to consider it for topics that only we can influence. But for most interesting important topics, it matters far more what the entire world does than what we personally do. For such topics, rationality consists mainly in the world having and using good systems (academia, news media, wikipedia, prediction markets, etc.) for generating and distributing reliable beliefs on which everyone can act.

When seven billion minds are involved, the overwhelming consideration must be managing a division of labor, so that we don’t each have to redo the same work. Together we must manage systems for deciding who should be heard on what. Given such systems, each of us will make our strongest contributions, by far, by fitting into these systems.

So to promote rationality on interesting important topics, your overwhelming consideration simply must be: on what topics will the world’s systems for deciding who to hear on what listen substantially to you? Your efforts to ponder and make progress will be largely wasted if you focus on topics where none of the world’s “who to hear on what” systems rate you as someone worth hearing. You must not only find something worth saying, but also something that will be heard.

Yes, existing who-to-hear systems are far from perfect, but that fact simply does not make it rational for you to work on topics where a better system would approve you, if only such systems existed. Wishes are not horses. It might make sense for you to work on reforming our systems, but even then your best efforts will work through channels where current systems can rate you as a person to hear on that meta topic.

When what matters is how the world acts, not how you act, rationality on your part consists mainly in improving the rationality of the world’s beliefs, as determined by its main systems for deciding who to believe about what.  Just wishing we had other systems, or acting as if we had them, is delusion, not rationality.

From a conversation with Steve Rayhawk.

  • http://broadoakblog.blogspot.com Sackerson

    We now see that the economic system does not work for the benefit of all, because it can be steered and exploited by a tiny minority; ditto the political system. The news and entertainment media operate as a form of distraction. Sanity – if that means attending to your own personal good – amounts to little more than sauve qui peut.

    For many, there may be very little that can be done for oneself, by oneself alone – one cannot suddenly re-create lost jobs or sell one’s house for a good price in an economically doomed area. This is the point at which some will simply give up hope, and others will become prepared to sacrifice themselves for the common good, or the hope of a good that can only be achieved collectively – which is how revolutions begin, including the American Revolution.

    When it becomes clear that a strictly selfish approach to one’s affairs is fruitless (and given our inevitably limited lifespan, from a longer view this always applies), it is not irrational to identify with the interests of some larger group – those with some degree of common genetic inheritance, say – one’s progeny, one’s siblings, other blood relations, perhaps even the human race in general.

    • http://www.rationalmechanisms.com richard silliker

      economic system.

      Think of system in terms of metabolism. When you do, you will come to realize the current “system” is practicing cannibalism and therefore is not rational. Rational: that which adheres to the rules that iteratively bind Form, Function, Cause and Effect into the Universe.

  • Norman

    Is this your way of announcing that you’re shutting down the blog to pursue more rational efforts? 😉

  • http://www.weidai.com Wei Dai

    So to promote rationality on interesting important topics, your overwhelming consideration simply must be: on what topics will the world’s systems for deciding who to hear on what listen substantially to me?

    Or you can always choose topics that are self-verifying. For example, if you came up with a valid proof that P!=NP, I’m pretty sure you wouldn’t be ignored no matter who you are.

    Does anyone have some examples of what Robin is talking about? Whose work is being ignored, and what should he or she have worked on instead?

    • http://hanson.gmu.edu Robin Hanson

      Our systems for updating pure math beliefs do have weak channels for assimilating new short proofs from complete unknowns. Topic-wise, this is quite exceptional.

      • http://www.weidai.com Wei Dai

        It seems to me that either self-verifying topics are more common than that, or our “who to listen to” systems work better than you think. Take this for example. How did a mailing list posting by an uncredentialed and unaffiliated person (I was an undergrad at the time), describing a cryptographic protocol (not exactly pure math), gather eighty-some citations and get described as “influential” by review papers?

        I haven’t had too much trouble finding an audience for my later works (on largely unrelated topics, like my C++ crypto library, and decision theory) either, despite not having gotten any impressive credentials or affiliations since graduating from college. Was I just lucky that my efforts weren’t wasted?

    • Tyrrell McAllister

      It’s true that almost anyone could convince the world that P!=NP with the right proof. But the prover would have little control over how the world acted on this belief. It’s hard to see someone having significant influence of that kind without getting a prior blessing from the world’s “who to hear on what” systems.

  • Aron

    “For such topics, rationality consists mainly in the world having and using good systems..”

    Between this and the last AI post.. kudos. These systems are the leading intelligences on the planet, not individuals. If an AI (or anyone) wanted to take over, it would have to be not merely superhuman but super-institutional.

    If you were selecting a target *human* personality to replicate for an emulation that gave you the best chance of taking over the world, it would be a business or political genius who can master networks. Better yet, you would drop the concept of simulating a human, and simulate an institution like ‘the US government at all levels’. But then even this institution, as well as it does on its given tasks, can only do a relatively few things extremely well, because general intelligence is a myth.

  • http://timtyler.org/ Tim Tyler

    Re: “Bayesian probability is a great model of rationality that gets lots of important things right, but there are two ways in which its simple version, the one that comes most easily to mind, is extremely misleading.”

    I don’t get it. Is this simple, wrong version that supposedly treats everything as conscious codified somewhere? Surely Bayesian probability is not “about” that issue.

    • Jef Allbright

      Bayesian methods are absolutely correct–within context. And in the domain of affairs of human interest you seldom if ever have an effectively complete context.

      Much of what Robin has been exploring lately maps pretty well onto more developed thinking in the field of evolutionary computing, genetic programming, hierarchical Bayesian optimization, etc.

    • mjgeddes

      Lacking the information contained in the rest of your mind, you are vulnerable to bias. Generalizing to the rest of the world, lacking the information contained in all other minds, you are vulnerable to anthropic selection effects. Bias is just a special case of anthropics!

      This was graphically illustrated in the Derren Brown episode on ‘The System’, where a member of the public was astonished to be receiving repeated correct predictions for horse racing, with the winning horses sometimes coming from impossible positions and getting astonishingly good luck in the running. She was tricked by confirmation bias because she only saw her own perspective. ‘Panning out’ to the wider ‘people-scape’ revealed the secret – anthropics. Derren was simultaneously sending tips to over 7000 people, covering all possible combinations and weeding out the losers; only the film from the one remaining person who by chance received all the right tips was used. (The sketch below illustrates the selection mechanism.)

      That which controls the wider ‘system’ in which Bayes is embedded warps the probabilities by manipulating what other people see; this is the ‘killer’ flaw in Bayes.
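
      A minimal sketch of that selection mechanism, with illustrative numbers (six horses per race and five races gives 6^5 = 7776 initial recipients, roughly the “over 7000” mentioned above; the real episode’s exact numbers may differ):

        import itertools
        import random

        HORSES_PER_RACE = 6   # illustrative field size
        N_RACES = 5           # 6**5 = 7776 recipients in total

        # Give each recipient a distinct sequence of tips, one horse per race,
        # so that every possible combination of winners is covered by someone.
        recipients = list(itertools.product(range(HORSES_PER_RACE), repeat=N_RACES))

        survivors = recipients
        for race in range(N_RACES):
            winner = random.randrange(HORSES_PER_RACE)   # the actual race result
            survivors = [r for r in survivors if r[race] == winner]
            print(f"after race {race + 1}: {len(survivors)} recipients "
                  f"still have a perfect record")

        # Exactly one recipient ends up with five correct tips in a row;
        # from her perspective the predictions look miraculous.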


  • Agent00yak

    This seems like a response to Mendacious Moldbug’s criticisms of Robin acting like a better than average but still typical academic. Robin is indirectly telling MM that he is acting irrationally because no one important is ever going to listen to him.

  • Agent00yak

    And I spelled that wrong, I meant “Mencius Moldbug” at UR.


  • http://www.takeonit.com Ben Albahari

    This post has echoes of another one of Robin’s articles, which I also thought was really insightful, “The Myth of Creativity”:

    http://hanson.gmu.edu/press/BusinessWeek-7-3-06.htm

  • Josh

    It may be useful to distinguish two things both of which you have described in this entry: (1) delegating the work of rational thinking to a large number of thinkers, i.e. dividing labor, and (2) distributing reliable beliefs to “the world”.

    Sometimes private companies divide the labor of research among many scientists and then keep the results of that research a secret. This is an example of (1) without (2).

    The task (2) of convincing as many people as possible of the truth of something is not the same as the task (1) of discovering the truth. I recognize that if you are unable to (2) convince many of the truth of your claims then this is evidence that you are wrong, and therefore that the task (2) of convincing others can be useful as a test which helps you to (1) discover the truth. So (2) can help with (1). But it is not necessary. (1) can be done without (2). They are still distinct.

    In principle a system similar to the Netflix recommendation system might be possible. Rather than assigning a single score to a movie, Netflix assigns a score tailored to you based on what else you like. Imagine something like this for research. This would allow fools to embrace astrology, numerology, and other nonsense all the more effectively, but at the same time allow those who are no fools to all the more effectively embrace rationality. This would provide (1) to those who choose wisely while failing to provide (2).
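
    A rough sketch of the kind of personalized scoring Josh describes (all names, topics, and ratings here are invented for illustration): a simple user-based collaborative filter predicts how much a given reader would trust a claim by weighting other readers’ judgments by how similar their past judgments are to hers.

      from math import sqrt

      # Invented data: how much each reader trusts each claim (1 = low, 5 = high).
      ratings = {
          "alice": {"bayes_rule": 5, "astrology": 1, "prediction_markets": 4},
          "bob":   {"bayes_rule": 5, "astrology": 1, "numerology": 1},
          "carol": {"bayes_rule": 2, "astrology": 5, "numerology": 4},
      }

      def similarity(a, b):
          """Cosine similarity over the claims two readers have both rated."""
          shared = set(ratings[a]) & set(ratings[b])
          if not shared:
              return 0.0
          dot = sum(ratings[a][c] * ratings[b][c] for c in shared)
          norm_a = sqrt(sum(ratings[a][c] ** 2 for c in shared))
          norm_b = sqrt(sum(ratings[b][c] ** 2 for c in shared))
          return dot / (norm_a * norm_b)

      def predicted_score(user, claim):
          """Similarity-weighted average of other readers' ratings of a claim."""
          num = den = 0.0
          for other in ratings:
              if other != user and claim in ratings[other]:
                  w = similarity(user, other)
                  num += w * ratings[other][claim]
                  den += w
          return num / den if den else None

      # Alice has never rated "numerology"; her past judgments resemble Bob's
      # more than Carol's, so the prediction leans toward Bob's low rating.
      print(predicted_score("alice", "numerology"))   # roughly 2.1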

  • http://www.takeonit.com Ben Albahari

    We’re working on an expert opinion recommendation system similar to the one you outlined. Here’s the post on Less Wrong that explains it:

    http://lesswrong.com/lw/1kl/takeonit_database_of_expert_opinions/1f6n

  • Matthew

    Professor Hanson wrote: “We have to deal with not only “running on error-prone hardware”, but worse, relying on purposely misleading inputs. Our subconscious often makes coordinated efforts to mislead us on particular topics.”

    I agree with the sentiment being expressed here: how to be effective with so much noise. The average faculty member doesn’t appreciate this quote, in your experience, Professor Hanson?

    If you have a list of citations/reviews I would greatly appreciate it. I plan on presenting such a viewpoint and its logical applications at a skeptics meeting at my university.

    • Jess Riedel

      The point is that it’s not just random noise.

      • Matthew

        I understand why you would think that.

  • Jeremy Huffman

    Your efforts to ponder and make progress will be largely wasted if you focus on topics where none of the world’s “who to hear on what” systems rate you as someone worth hearing. You must not only find something worth saying, but also something that will be heard.

    A lot of people told this to Ramanujan (although a lot encouraged him as well). An Indian clerk writing letters prefacing pages and pages of mathematics to English mathematics professors at Cambridge at the beginning of the 20th century would not have been seen by many as even a remotely plausible way to get the world’s attention; and it very nearly failed, as several of the first professors to receive the letters simply brushed them off – the work was too far from the mainstream, and advanced in such idiomatic ways, that it wasn’t obvious without careful attention and analysis that he had some very interesting proofs and results.

    It’s hard to know if Ramanujan would ever have been discovered without the lucky happenstance of G.H. Hardy actually working through those papers, but I think he likely would have been – his intellect and work were simply too compelling for everyone to ignore forever.

  • A dude

    This is the basis for a simple rule where I apply a mental spam filter to anything that has “should” (as opposed to “will”) in the title.

  • Bill

    I’m a little troubled by your comment:

    “Together we must manage systems for deciding who should be heard on what. Given such systems, each of us will make our strongest contributions, by far, by fitting into these systems.”

    Chinese leaders might be interested in managing systems for deciding who should be heard on what, though.

    • Steve

      Bill, do you think that the current systems for deciding who should be heard on what are optimally managed?

  • http://rationalmechanisms.com DWCrmcm

    Rational is defined by The RMCM as that which arises directly out of Cause, Effect, Function, and Form©.
    The RMCM is indifferent to the word “systems” as it has become meaningless in the search for greater truth. This Exercise should at least give us some common encapsulations.

  • http://entitledtoanopinion.wordpress.com TGGP

    Robin made a similar argument a while back at LessWrong: Rational Me or We?

    I’m also reminded of a point often made at Boettke’s blog (formerly known as The Austrian Economists) about how Austrians should seek to get work published in high-impact non-Austrian journals so as to better spread Austrian ideas.

    Agent00yak, the same joke was already made by n/a at r/h/e notes. In fact, I suspect you stole the pun from him.

  • ravi hegde

    The cognitive biases you see in individuals will not likely go away completely when you increase the scale. Aggregation will cancel out many biases, but it will not be able to correct errors when and if the biases correlate. Those errors are also likely to be severe in magnitude when and if they do occur. But I agree that a systematic study of cognitive biases in a group of agents who themselves have biases is important.
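
    A small simulation, with invented numbers, illustrates the distinction: averaging many estimates washes out errors that are independent across agents, but a bias shared by everyone survives aggregation at any scale.

      import numpy as np

      rng = np.random.default_rng(0)
      truth = 0.0                 # the quantity the crowd is estimating
      n_agents, n_trials = 1000, 5000

      # Case 1: each agent's error is independent noise.
      independent = truth + rng.normal(0.0, 1.0, size=(n_trials, n_agents))

      # Case 2: a bias shared by every agent in a trial, plus smaller private noise.
      shared_bias = rng.normal(0.0, 0.8, size=(n_trials, 1))
      private = rng.normal(0.0, 0.6, size=(n_trials, n_agents))
      correlated = truth + shared_bias + private

      for label, estimates in (("independent", independent), ("correlated", correlated)):
          crowd_mean = estimates.mean(axis=1)
          rmse = np.sqrt(np.mean((crowd_mean - truth) ** 2))
          print(f"{label:>11} errors: RMSE of the crowd mean = {rmse:.3f}")

      # Independent case: error shrinks like 1/sqrt(n_agents), about 0.03 here.
      # Correlated case: error stays near the shared-bias scale, about 0.8.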

  • http://nimbupani.com Divya

    I had to read this post twice to understand what Robin meant 😀

    But, Robin, how to influence this organism? History does show the system can be influenced (women’s rights, green energy, equal rights for non-whites, human rights, etc.). Is it just a matter of luck, or diligent ant work so that when “the time is right” the opinion tilts?

    (If you have written about it earlier, I am sorry to ask you to repeat!)

  • tom

    Justoneminute had a good post on the first part of the subject–the small part of ourselves that we control. It’s about a metaphor likening us to a boy riding an elephant (except that we may not even know we are riding the elephant or that there is an elephant):

  • http://www.viewsflow.com David Smith

    I’m sympathetic to Bill’s point of view up above.

  • Matt Fraser

    This assumes that you care about being heard and making a difference. Personally, these things make no difference to me; I just want to understand the world as much as I can. If it turns out that I come across a way of understanding it that is new, and other people take notice of it, then great!

    • Steve

      Matt, epistemic rationality may be a subset of instrumental rationality, or it may be different, but naively confusing the two does nobody any favors.