Tell Your Anti-Story

Fiction is not only not real; it differs from reality in systematic ways. For example, characters in novels, plays, and TV tend to be more attractive, articulate, expressive, and principled than real people. Now we also like to tell stories about ourselves and the events we see around us. These stories are more constrained by the facts we see than fictional stories are, but I suspect they suffer from similar biases. That is, I suggest we have a fiction bias:

Whatever we like or expect to see in fiction, relative to reality, we are also biased to like or expect to see in our lives.  

So, for example, we tend to see ourselves and the people around us as more attractive, articulate, expressive, and principled than they really are.  If true, my hypothesis (which I can’t believe is original) offers a powerful way to identify and correct our biases:  Find ways in which fiction tends to deviate from reality, and then move your estimates of reality in the other direction.

For example, it seems to me that teen romp movies tend to portray parents and teachers as inept, clueless, and sexually repressed, but ready to help when help is wanted.  If so, teens should realize that parents and teachers probably know more, are more sexually satisfied, and are less available to help, than teens assume.  We should be able to find hundreds of other applications, such as using the standard biases of science fiction.  Are there any important exceptions to this general trend?

  • LemmusLemmus

    I’d put this a different way: given that the human mind tends to think about reality in a certain way, this is also the way that a) comes naturally to fiction writers when they write a story, and b) people like to see in stories.

  • Hopefully Anonymous

    Robin,
    Great post. I think this is a very promising approach to identifying overlooked biases.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Be very wary whenever you try to achieve intelligence by reversing stupidity. To be wrong 99% of the time on binary problems would take an ultrapowerful intelligence just to generate that degree of error. You can’t derive useful work from entropy or useful information from noise – you can’t achieve intelligence by reversing stupidity unless, somehow, the stupid agent has already integrated all the evidence an intelligent agent would need to attend to.

    I think the overall idea here is that people are integrating up the evidence to arrive at a basically correct answer, then adding the biases on top. So we can subtract off the biases to get a correct answer. In general, we certainly want to be aware of the bias and prevent it from entering our system. But maybe the conclusion we want is, “Hold on a second, people arrived at their whole opinions by making up a good-sounding story instead of attending to the evidence! And they didn’t pay enough attention to reality to be corrected by it, either. Now we’re going to have to look at teachers and parents to find out what they’re really like, and integrate all the evidence from scratch.”

    That would be my basic objection to the idea of antibiasing – when a bias has gone through and been allowed to survive, you should suspect all the thinking done by the same agent or agent collective, rather than assuming the bias was an isolated mistake that can be subtracted directly off the surface to yield a correct answer.

    Now, with all that said, sometimes we can only correct our errors one piece at a time. Sometimes all we can do is subtract off the biases one piece at a time. It certainly makes sense to wonder if adults might be more knowledgeable, more satisfied, and less available to help, than teens realize; but that’s more the status of a hypothesis-suggesting heuristic than hypothesis-verifying evidence. So I’d basically keep the post as is, but substitute “wonder if” or “check whether” for “realize that” in “teens should realize that…”

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, I’d say that “subtract off the biases” can improve any process that produces an estimate. Of course you might improve the estimates even more by using the pattern of biases to identify and eliminate some core cause of the biases. But don’t let the best be the enemy of the good.
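    A minimal numerical sketch of this point (the numbers below are illustrative assumptions, not anything from the post or comments): if a process produces estimates shifted by a roughly known amount, subtracting even an imperfect guess at that shift reduces error without redoing the underlying estimation.

    ```python
    # Illustrative sketch (assumed numbers): subtracting an estimated bias can
    # improve an estimator even when the underlying process is left untouched.
    import random

    random.seed(0)

    TRUE_VALUE = 10.0    # the quantity being estimated
    FICTION_BIAS = 3.0   # systematic shift toward the "fictional" picture (assumed)
    NOISE_SD = 2.0       # ordinary estimation noise (assumed)

    raw = [TRUE_VALUE + FICTION_BIAS + random.gauss(0, NOISE_SD) for _ in range(10000)]

    # Suppose we can only roughly guess the bias, say by comparing fiction to data.
    ASSUMED_BIAS = 2.5
    corrected = [x - ASSUMED_BIAS for x in raw]

    def mean_squared_error(estimates):
        return sum((x - TRUE_VALUE) ** 2 for x in estimates) / len(estimates)

    print("MSE of raw estimates:      ", round(mean_squared_error(raw), 2))
    print("MSE of corrected estimates:", round(mean_squared_error(corrected), 2))
    # Even an imperfect correction beats none -- Robin's "don't let the best be the
    # enemy of the good"; Eliezer's caveat is that the process that produced the
    # raw estimates may still be untrustworthy in other ways.
    ```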

  • http://michaelkenny.blogspot.com Mike Kenny

    Very stimulating post. Some possible “fiction biases”, to use Robin Hanson’s phrase:

    -“Meaningfulness” bias: seeing things as meaningful, as if written by a cosmic author with a plan for you, or who is signaling to you that some reality exists beyond your obvious knowledge.

    -“Plot arc” bias: thinking that your life will follow a standard plot with a beginning, a complication, and a resolution.

    -“Good guys always win” bias: self-explanatory.

    -“Good guy/bad guy dichotomy”: believing there are only good guys and bad guys.

    Those seem like some obvious possibilities. I wonder if some people have biases that are associated with one genre rather than another. For example, someone who likes noir might have something like a “cynical-world bias”, that is, he or she might see the world as filled solely with cynical actors. What fiction you like might suggest what biases you’d be likely to have, and the fiction might actually strengthen those biases.

  • http://www.godofthemachine.com Aaron Haspel

    “Fiction bias” might be too broad. What seems more relevant is the sort of fiction that you, particularly, like. Teen romp movies may portray teachers as clueless, sexually repressed, and helpful, but if you can’t stand such movies you’re unlikely to have to correct for their biases. As Mike Kenny points out, noir fans may have a cynicism bias, sci-fi fans a techno-utopian bias, and so forth. Then there are certain characteristics nearly all fiction shares (a protagonist, a resolution), against which David Balan warned here several months back. But again, you may not like fiction at all, so you may not suffer from those either.

  • Hopefully Anonymous

    I think this could be a very fruitful avenue for looking at both personal and systematic biases. I like the elaboration on the topic by Mike and Aaron. Although, Aaron, I don’t think fiction bias is necessarily limited in relevance to the sort of fiction a particular individual likes. It may reveal substantial insights into broad social biases, too.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Mike, yes those are good suggestions.

    Aaron, yes, we probably fall more for the biases of the genres we find familiar.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin: Agreed.

    “Stories have beginnings, middles, ends, heroes, villains, clarity, resolution. Life has none of those things.”
    — O’Donnell’s Law of History
    http://www.edge.org/q2004/page7.html

  • Hopefully Anonymous

    Similarly, it might be fruitful to look at timing biases in story-telling: the idea that the natural resolution arc for a topic is half an hour in sitcoms, an hour in dramas, 90 minutes to three hours in movies, and whatever the equivalents may be for plays and novels.

    Also, I’m interested in results gleaned from neuroscientists and cultural anthropologists who look at this stuff.

  • http://stockmarketbeat.com Trent

    It is also important to distinguish between reality and realism. I don’t think that the stories we tell are “more constrained by the facts we see than fictional stories,” as fiction readers will quickly lose interest in a story that is too fanciful, as you point out in the science fiction article.

    Good fiction tends to be more believable than reality. In real life people do things that make no sense, and don’t always have a motive for doing so. Randomness does not make for a good story.

    Likewise, consider the uproar over “A Million Little Pieces.” Readers were upset that this “memoir” was actually fiction. Why? Isn’t an enjoyable story enjoyable whether true or fiction? The fact is, the readers were happier accepting a badly-told story when they thought it was “true” than if they had thought it was fiction. Fiction is held to a higher quality standard.

  • http://outlawpoet.blogspot.com Justin Corwin

    It’s not clear to me how precisely such fictional biases are identified. After all, I notice that in most fiction, people breathe, have four major appendages, spend the majority of their time on status games with other people, and often make mistakes that cost them dearly. How am I to distinguish these facts from your biases? By analysis? Of fiction, which may not be internally consistent?

    It seems to offer a volume to search within, but no search procedure. Intuitively appealing, because we know how much within this space is wrong, but not why it’s wrong, or how it’s wrong.

    I worry that too many decisions and processes in this blog are presented as quantitative estimates, with biases as simple, constant deflections in the values. Life is nonlinear and multidimensional, and I find that each type of bias requires separate procedures and counter-biases. Just knowing that something is not so does not help you to determine what is so, in most cases. How does a teen correct for their biased view of teachers? By assuming they are just like them? By counter-assuming in some structured way? I’m with Eliezer: as a procedure, this kind of thing is likely to introduce its own problems.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    “If true, my hypothesis (which I can’t believe is original) offers a powerful way to identify and correct our biases: Find ways in which fiction tends to deviate from reality, and then move your estimates of reality in the other direction.”

    This idea seems similar to what I called “Good-story bias” in my paper on existential risks written back in 2001 (quote from section 8.5 below). There must almost certainly be earlier cites for this idea, although I’m not aware of any.

    “Suppose our intuitions about which future scenarios are “plausible and realistic” are shaped by what we see on TV and in movies and what we read in novels. (After all, a large part of the discourse about the future that people encounter is in the form of fiction and other recreational contexts.) We should then, when thinking critically, suspect our intuitions of being biased in the direction of overestimating the probability of those scenarios that make for a good story, since such scenarios will seem much more familiar and more “real”. This Good-story bias could be quite powerful. When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch. So we don’t see many stories of that kind. If we are not careful, we can be misled into believing that the boring scenario is too farfetched to be worth taking seriously. In general, if we think there is a Good-story bias, we may upon reflection want to increase our credence in boring hypotheses and decrease our credence in interesting, dramatic hypotheses. The net effect would be to redistribute probability among existential risks in favor of those that seem harder to fit into a selling narrative, and possibly to increase the probability of the existential risks as a group.”

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Nick, a great example.

    Justin, the fact that “people breathe, have four major appendages” in fiction is not a pattern that distinguishes fiction from reality, and so it does not trigger my proposed heuristic.

  • http://julesandjames.blogspot.com/ James Annan

    I expect (and even enjoy, within limits) to see plotless cops ‘n’ robbers stories (with violence and multiple murder) in fiction, but I wouldn’t like to be caught up in one, and I’ve never seen one. What do I win?

  • michael vassar

    My main concern here is that, for most biases, it seems plausible that we already have biologically or culturally evolved “counter-biases”, such that the net deviation of belief or attitude from rationality is not directionally known. For instance, it seems plausible to me that much of what we tend to call “culture”, “education”, “hard-nosed realism”, “liberalism”, etc. designates the installation of biases intended to compensate for other biases. The result is that if you consciously choose to bias yourself in order to oppose some bias, the decision will tend to push you away from the biases you are most bothered by, and thus away from the biases that are probably already opposed by the strongest supposed counter-biases.

  • Hopefully Anonymous

    Michael, there may be functional counter-biases but not necessarily optimized counter-biases, which I think is what Robin is going for. I really like this functional vs. optimal analytical framework as a counter to status quo bias.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Michael, to the extent that eliminating biases was the main pressure driving changes in “culture”, “education”, etc., your argument would be a devastating response. To the extent that such systems have been driven to achieve other, conflicting goals, however, there should be room to sacrifice those goals in order to reduce bias.

  • Matt S

    I think fiction might actually unbias us. Imagine a child has a somewhat clueless teacher, but not so clueless that the child can see it easily (assume the child has been told by his parents that teachers must be smart). After seeing a movie with a fictional teacher who is obviously clueless, the child might be able to judge his real teacher more accurately. Of course the child must remember what is fiction, and he must not automatically equate the real teacher with the fictional teacher, but nonetheless because of the movie he can now think more broadly than before. It is possible that without fiction, this child might never encounter a sufficiently clueless teacher to make him question his incorrect assumption.
    In general, it seems like topics that are treated by fiction might be less susceptible to bias, because fiction makes us aware of more possibilities and increases our range of experience.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Matt, it is certainly true that fiction might help us to see reality better, by giving us vivid examples of things somewhat outside the usual range of experience. But even if fiction does perform this function, there remains the question of whether our perceptions of reality are biased toward fictional tendencies.

  • michael vassar

    Great point Matt!

    Robin: Let’s propose a model where unbiasing is only a weak function of education, culture, etc. (they certainly work at least partially to eliminate, say, the bias to perceive objects and forces as alive and intentional). Let’s posit a vector in human mind configuration space along which the verbal and ritual environment (call it C, for “culture”) pushes a person, with an average loading against the direction of an average bias of .3 and an SD for this loading of .2. Assume that the vector is strong enough to eliminate 80% of the average bias in the average person, and that, like most forms of educational achievement, its strength grows exponentially with IQ over a 5-fold range from an IQ of about 90 to one of about 120. Extrapolating this multiplier out a bit further, once IQ gets to the mid 130s one has a new set of cultural biases in addition to one’s initial endowment of biases. The average strength of bias is substantially lower than it initially was, and the direction of one’s new bias vector is essentially random relative to that of the initial bias vector.
    Data suggestive of education speed multiplier effects from IQ beyond the 99th percentile are seriously lacking (though research showing school achievement after multiple grade accelerations tracking with IQ expectations rather than age expectations suggests the effect continues). The model above does suggest something interesting, however: further increases in IQ would tend to increase net bias by pushing the strength of vector C past the point where its effect was net-neutral. That would fit the frequent observation that IQ seems to constitute a bag of mixed benefits of different magnitudes up to about this level, but seems to be systematically associated with certain weaknesses above about this level (possibly a slightly lower level, but hey, I made up the numbers in the model without evidence, surely they are weak approximations).
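    Vassar’s numbers do not pin down a unique formal model, but a crude one-dimensional sketch along these lines (all parameter values below are assumptions, not his, chosen only to echo the “80% of the average bias eliminated at average IQ” calibration) shows the qualitative shape being claimed: the cultural push C first cancels natural bias, and as its strength keeps growing with IQ, its own biases come to dominate.

    ```python
    # Crude, one-dimensional reading (not Vassar's own formalization; all
    # parameters are assumptions): a cultural push C partly cancels a person's
    # natural bias, partly installs new biases, and grows exponentially with IQ.

    def culture_strength(iq):
        # Grows five-fold between IQ 90 and IQ 120, extrapolated exponentially
        # beyond that range (the extrapolation is the speculative part).
        return 5.0 ** ((iq - 90) / 30.0)

    def net_bias(iq, natural_bias=1.0, cancel_per_unit=0.468, new_bias_per_unit=0.15):
        # cancel_per_unit is set so that at IQ 100 about 80% of the natural bias
        # is cancelled, per the comment; new_bias_per_unit is a pure guess.
        s = culture_strength(iq)
        residual_natural = max(natural_bias - cancel_per_unit * s, 0.0)
        new_cultural = new_bias_per_unit * s
        return (residual_natural ** 2 + new_cultural ** 2) ** 0.5

    for iq in (90, 100, 110, 120, 135, 150):
        print(f"IQ {iq:3d}: net bias ~ {net_bias(iq):.2f}")
    # With these made-up numbers net bias falls, bottoms out a little above
    # average IQ, and then grows again as C's own biases dominate -- the
    # qualitative U-shape claimed above, though where the turning point falls
    # depends entirely on the assumed parameters.
    ```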

  • Hopefully Anonymous

    Michael,
    I see how you got to bias reduction as one reaches the 99th percentile in IQ (access to further education/culture-derived anti-bias tools and instruction?), but can you elaborate on why your model suggests bias may increase as one goes beyond the 99th percentile? Because I don’t see that. If anything, I’d think a 99.99th-percentile IQ person could model how a 99th-percentile IQ person thinks, and thus perform anti-bias analysis of the world to the degree that it would help them in outcome optimization.

    For example, I don’t see James Simon or Sergey Brin as more prone to bias than a median competency family physician.

  • michael vassar

    Hopefully: A few points.
    First, my model only predicts increasing bias insofar as the relationship between intelligence, culture, and bias is as I suggested, and only insofar as the relationship between the characteristics found at very high IQ (about which we have limited data) and high IQ is the same as that between high and average, or average and low, IQ. It does so simply by magnifying the C vector, which initially has a net bias-reducing effect, past the point where its effect is net bias reducing (though it may still be bias reducing in some components). Niagara is vaguely north of New York City, so going north will take me closer to Niagara up to a point and then farther from it.

    Second: Higher IQ means, basically, accelerated learning and improved pattern detection (the pattern detection helps accelerate the learning by letting the patterns to be learned get detected in the first place). These *can* help one to model how people think via psychology, insofar as psychology contains accurate models, but the patterns found in human behavior, or even in the behavior of fairly simple animals, are basically FAR too subtle for casual recognition using general hardware at any human IQ. For detecting such patterns without becoming entangled in a heap of superstitions you need either to use the scientific method, which is basically a slow collective enterprise, or to use dedicated hardware which already implicitly contains the patterns you are looking for. In the latter case, you either use the pattern detection that evolution has already done for you, or you model others *as* yourself and then use “theory of mind” to make very crude modifications. When starting from a self-model and using “theory of mind”, the more different you are from the person being modeled, the more modification you have to make. Large IQ differences are one of several major classes of difference that make this more difficult. No evidence that I am aware of indicates that higher IQ makes a person’s “theory of mind” significantly more powerful (why would we expect it to, any more than it makes, say, vision more acute?), while the existence of high-functioning autism strongly suggests that general capabilities are only crudely able to substitute for it. Evidence from Williams Syndrome suggests that the use of dedicated hardware alone can be surprisingly effective.

    Third: James Simon and Sergey Brin are not individuals whom you selected as exemplars of high IQ, but rather as exemplars of business success, specifically in fields likely to benefit from low bias (finance and start-ups). Gates dropped out of college, but that doesn’t mean that the probability of dropping out of college increases with IQ.

  • Hopefully Anonymous

    Michael, thanks for the detailed response. My intuitive doubts that minimum bias peaks at some point well below the highest IQ levels (such as at the 99th percentile) remain, but I appreciate your thorough and clear explanation of your thoughts on the topic and how you arrived at them.

  • michael vassar

    Hopefully: Keep in mind that I am talking about minimum aggregate bias, by some undefined metric, as a result of the C vector. Specific biases will reach their nadir at levels of learning speed that depend on the degree to which C points away from natural biases. Where C actually aligns with natural biases, as for instance in the scientific racism of the early 20th century, it is entirely plausible on my model that the least biased will be the least cultured, which might mean something similar to the least intelligent individuals without organic retardation. For instance, “learned professors” and the like might have rejected the plausibility of Frederick Douglass actually writing his essays due to their supposedly scientific belief that a black man could not plausibly have attained that level of verbal fluency. Earlier still, young-earth scientific theories would probably have led to some threshold beyond which more intelligent people rejected the theory of evolution at a greater rate than less intelligent people who understood the basic idea of evolution but not the more complex content of thermodynamics. It is possible that some group of people might build evidence-based models by which they can personally debias themselves in a more fine-tuned fashion than is done by C. Such a group, if small, might lead to the existence of low-bias outliers with respect to certain actively confronted biases at some IQ level far above the low-bias peak generated by C, but would not lead to the average level of bias at the group’s average IQ being substantially lower than that at the C-generated peak.

    Anyway, it doesn’t seem to me that other people are still participating on this thread and my wrists limit my typed words per day. Call me if you want to discuss this more.

  • Hopefully Anonymous

    Michael,
    Frankly, your examples are causing me to doubt your theory. I don’t think increasing intelligence beyond the 99th percentile would make an early 20th century individual more likely to actually believe scientific racism theories, although I grant that it might make people of average or moderately above-average intelligence more susceptible to those theories than people a standard deviation or more below them in intelligence. It might make them more likely to perform belief in scientific racism, which is different from actual belief (although it may be difficult for observers to separate performance from actual belief, perhaps increasingly difficult as the performer becomes more intelligent).

    I know we’ve had a continuing disagreement on this topic, but I think it has been fruitful.

    The bottom line, in my opinion, is that it doesn’t take much empirical inquiry or critical thinking to cast doubt on many elements of scientific racism, and Frederick Douglass was hardly either inaccessible or a lone example as a person of African descent from whom one could gauge intelligence variance in that population. I’m not even sure how scientific racists addressed intelligence variance within, as opposed to between, racial groups in that era, but I doubt it becomes harder to consider as the assessor’s IQ exceeds the 99th percentile.

    So, I think having an IQ significantly above the 99th percentile would only make it easier to see the logical flaws and empirical weaknesses underpinning fashionable scientific theories of a given era, which would argue for increasingly lower actual bias (even if such people may perform more bias than people at the 99th percentile, for example, for outcome optimization as a relatively smart person in the midst of a biased population).

  • lw

    This is crazy. Fairy tales, epic poetry, and other fiction that predates copyright often encapsulates a great deal of folk wisdom. Aesop, Grimm, and Homer are all fiction, right? There’s a spectrum with such works at one end and hacks cranking out work for profit at the other. The insights of hacks into collective belief might be useful to identify bias, but it’s a pretty limited probe.

  • michael vassar

    Hopefully: I don’t think you followed my example. “Where C actually aligns with natural biases, as for instance in the scientific racism of the early 20th century, it is entirely plausible along my model that the least biased will be the least cultured, which might mean something similar to the least intelligent individuals without organic retardation.”
    In other words, no persistent disagreement here, my hypothesis coincides with yours.

    Extremely high-IQ people who also have certain personality traits, such as high openness and low agreeableness, may be much more likely than the population norm to acquire expertise in the identification of logical flaws and consequently to reject illogical or empirically trivially disprovable theories, but there are abundant data confirming, for instance, high Nazi Party participation among intellectuals (especially in Austria), strongly suggesting that unusual personality as well as intelligence is required to reject cultural superstitions.

  • Hopefully Anonymous

    Michael, once again your examples are causing me to doubt your hypothesis, in this case the Nazi Party participation by intellectuals in Austria. First, I don’t see much acknowledgment of the distinction between performance of belief/bias and actual belief/bias. I think a good test is the fluidity of these intellectuals’ external performance post-Nazi regime. If, for example, they were demonstrably quicker than the general population to power-align with new regimes after the Nazi Reich, that would be an indication that their Nazi alignment was performative and that a higher IQ allowed them to be more fluid and adaptable performers. It’s an empirical question which is probably researchable.

    I’m (hopefully) anonymous, but in my non-anonymous life I think it would be in my interest to publicly perform a belief system (including biases) which is normatively likeable, at least for the social circles and situations in which it is optimal for me to be liked. And the smarter I am (the better I am at pattern recognition, etc.), the better I think I’d be at these performances. Although I’ll grant your point that relatively optimized hardware (a cocktail party personality) can often trump even a very positively deviant ability to analyze patterns and innovate social interaction strategies (or whatever it is one would do to apply one’s high IQ to the challenges of situational social success).

  • TGGP

    I’m with Hopefully Anonymous, though I can’t really contribute much more beyond what he said. I suppose that the way I imagine the relationship between IQ and unbiasedness is one with an asymptote rather than a maximum anywhere.

  • http://vintermann.paranoidkoala.org Harald Korneliussen

    To get away from the rather silly IQ theories here (a couple of suggestions only: people who want to join high-IQ societies may be a queer bunch, high-IQ societies were formed and are run by a certain kind of people, and it’s not hard to increase your score on IQ tests greatly with a little practice), I offer the following suggestions of fiction biases:

    Anti-happiness bias: It’s a happy family in a sunny village, the kids are singing, the parents are waving … there must be something rotten underneath, to be gradually unraveled! Either that or they will meet a horrible fate pretty soon. I think traditional ways of achieving happiness are seriously undervalued because of this trend in fiction.

    Dark secret bias: Did you know Hitler was really a Jew? Well, OK, he wasn’t, but it has been claimed more than once. And just now I saw a discussion board where people discussed a murderer who had killed a gay man, and what was the suggestion? “I bet he was a suppressed gay himself!” Because they always are in the stories (wasn’t there some homophobia-equals-closet-gay logic in a recent movie?).

  • Rob

    @Harald’s post.

    I think both those biases are really just components of possibly the most annoying, stupid bias in modern thinking: twists. Because there is a twist in almost every single novel, film and TV show, people now seem to expect them in life. It’s almost as if they’re disappointed when they get to the end of existence and all their dead friends don’t arrive and tell them it was all just the product of a terrible, highly-advanced AI hell-bent on world domination, and now is the time to act.

  • Rob

    Although could twists be called a bias? The Pro-Twist bias… I need to get to bed.

  • Grognor

    Hero wins bias: people tend to overestimate their own chances of success. I would be SHOCKED if this didn’t come at least SOMEWHAT from fiction.

    Also its inverse, the society-sucks bias: a disproportionately negative view of the world due to (and I realize this is non-fiction!) over-reporting of bad news and under-reporting of good news. News media actually filter out good news because it gets lower ratings.

  • Pingback: [Ydw] Καθιστώντας την ιστορία διαθέσιμη | On the way to Ithaca

  • aquis

    I don’t think you can get accuracy by just reversing this bias. Remember, the bias in favor of fiction could also influence behavior to more closely match fiction. This is why people sometimes think about what their favorite fictional hero would do as a guide for making decisions.

  • Pingback: Why We Love Stories And What To Do About It - Kraut Science