Problem, No Solution Taboo?

Three years ago I described the “What if Failure Taboo”:

A simple moral principle: when a future change is framed as a problem which we might hope our political system to solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I suggested this could be an issue with my book Age of Em:

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.

This week I talked on my book to a sharp lively group organized by Azeem Azhar (author of the futurist newsletter Exponential View), and learned that this taboo may be worse than I thought. I tried to present the situation as something that you might consider to be a problem, but that while my analysis should enable better problem solving, I’ve personally focused on just describing this situation. Mixing up normative and positive discussions risks the positive being overshadowed by the normative, and positive claims seeming less reliable when mixed up with more disputable normative claims.

Even with this reframing, several people saw me as still violating the key taboo. Apparently it isn’t just taboo to assume that we’ll fail to solve a problem; it can also be taboo to merely describe a problem without recommending a solution. At least when the problem intersects with many strong feelings and moral norms. To many, neutral analysis just seems cold and uncaring, and suspiciously like evil.

  • Johnicholas

    Will you write a normative sequel?

  • Matt M

    I think it’s scary for people to see a big brain offer a problem and no solution. They infer that the problem is difficult and therefore more likely to negatively impact them.

  • lump1

    I agree that you identified a strong and strange taboo. Your audiences seem to assume that since you’re not freaking out about this future you predict, you must somehow be cool with it. Then they puzzle over what kind of a monstrous moral system you must have in order to be cool with the catastrophe you prophesy. Separating the descriptive from the normative is not easy in contexts like this.

    • Dániel

      I believe the audience’s conscious or subconscious reaction is this: If this man truly believes all this crazy shit, why isn’t he right now on a huge mission trying to alter the future and prevent his apocalyptic scenario from happening? Maybe he’s fine with this apocalypse? (Compare to Eliezer Yudkowsky: he identified an apocalyptic scenario he deemed high probability, and went on to dedicate his life to preventing it.)

  • Oliver

    A normative view requires making commitments to ethical systems, and this can be tricky as well.

    Is an em’s life worth living? Do we accept the repugnant conclusion? Are we okay with forced stagnation imposed by a surveillance state?

    No matter what you decide to answer here, someone is going to hate you for it.

  • free_agent

    Hmmm… Showing that you have thought in detail about the consequences of not-X happening implies that you’ve been making plans on how to reduce the cost of not-X to yourself, and that reduces your incentive to ensure X happens. This will apply double if you’re of the caste of people whose job is to solve society’s problems.

  • This is a pretty common sort of taboo. For example, religious people tend to think you are wicked if you point out arguments tending to suggest that their religion is false, unless you also give reasons for thinking those arguments are mistaken.

  • Dave Lindbergh

    Surprised and disappointed.

    I saw your recent talk at MIT (as you know); you were *extremely* clear about merely describing how the scenario plays out per accepted social science, and about *not* endorsing the outcome.

    That very effectively shut up the audience, who I think otherwise would have attacked.

    If you gave the same talk to these people, why did they react differently?

    • The MIT audience followed very academic norms, but not everyone is an academic.

      • Dave Lindbergh

        Which academic norm(s) do you think made the difference?

        Accepting your scope at face value absent evidence of insincerity?

        To me that’s just common courtesy.

        [Not an academic.]

  • Adam Casey

    Note this happens only if framed as policy relevant. We can all complain about the weather, we can all do detailed analysis of how many will die in a hurricane. But these are framed as “the way things are” rather than “observing a danger caused or at least not prevented by current policy”.

    Possible other case to consider: dystopia. If you wrote “the age of em” explicitly as a dystopia you’d be fine. The criticism of the system is enough to clearly inform what “side” you’re on. As it is you’re not taking a side, which implies backsliding and lack of reliability in the politics of ems.

  • Sixo

    You have just accurately categorized the entire controversy over global warming. Not that there’s a solution in that accurate description.

    • lump1

      That’s a useful comparison. Global warming is bad, and it’s coming, and these two things alone seem to be enough to trigger the “get outraged and signal that *something* must be done” response. As Robin noted, outrage-free descriptions of bad news about the future (climate, society, etc.) are taboo.

      This urgent compulsion to do *something* means that psychological forces push us towards action before we check whether that action would be effective. Our agenda isn’t simply to prevent the bad future. Even if we suspect that it’s inevitable, we want to make a show of fighting it anyway, so that we can’t be retrospectively blamed for having stood idly by as the (inevitable) bad stuff unfolded. That’s clearly irrational, but it’s how we think, and this kind of bias should have a name.

      It’s related to our double standard about predictions for purely mechanical systems, and predictions for systems that involve human decisions. Very reliable predictions can be made about both kinds of systems, but we’re only comfortable with thinking of the first kind of predictions as inevitable. The sun will inevitably increase its output: fine. Humans will inevitably increase CO2 levels: heresy. It’s as if we were fundamentally skeptical about the causal modelability of the human realm.

  • Perhaps confrontation with your vision provides a tempting opportunity for virtue signaling.

  • arch1

    Robin, the below analogy may suffer from my not yet having read the book; sorry if so.

    If one of the very rare people with some understanding of disease X (a) convinced you that you and your loved ones were at significant risk of contracting X, (b) provided a vivid detailed explanation as to how X could wreak permanent havoc on your bodies and your lives, and (c) had no inclination whatever to discuss X’s mitigation or avoidance, as just now their interest lay in objectively describing X,

    ..wouldn’t *you* be frustrated, maybe even visibly so, and perhaps also a bit puzzled if the expert walked away characterizing your evident frustration as the manifestation of a hitherto unrecognized taboo?

    *I* would.

    • Faze

      This is what neurologists do every day. They have to diagnose diseases like Parkinson’s, Alzheimer’s and ALS, without being able to offer patients any kind of effective treatment. They can only offer suggestions on how to live with the disease and its disabilities.

  • Michael Wiebe

    There’s a reason books end with a call to action in the final chapter.

  • AJ Li

    There’s a certain irony in how you’re presenting a neutral analysis of the problem in this blog post without presenting a solution.

    That said, could it be as simple as people on average simply disliking pessimistic views and preferring optimistic ones?

  • Rod Berne

    Absolutely correct.

    I recall when Robert Greene wrote the 48 Laws of Power, he was attacked for simply documenting patterns of power in historical periods and modern times. His sin was only to relay that knowledge to the general public in a cold fashion.

  • Reminds me of the time Andrew Gelman and a number of his commenters were shocked and appalled at someone applying economic analysis to torture.

  • I’ve also seen the opposite reaction: rather than assuming that a neutral analysis of a phenomenon is endorsing it, people sometimes assume that the neutral analysis is complaining about it.

    E.g., some analyses by male authors about how dating markets are more difficult for men were interpreted as “whining,” even when they made no normative claims.

  • PlainDealingVillain

    This wouldn’t be a concern if they met you. Then they would know for sure that you are in favor of the outcome and can be relied on to frame everything to put it in a positive light.

  • Ted

    You know what comes off as really evil? Writing a series of posts about how things you do, contrary to appearances, aren’t really evil at all.

    • arch1

      Sheesh, tough crowd. On a Friday yet. My comment below arguably qualifies me as a non-acolyte, and I’m not aware of anything Robin’s done in the current context which could remotely be considered evil. Quite the contrary (assuming his book’s anything like his blog entries on ems) – he’s shed a lot of light on a very important topic.

  • Jacob Egner

    Do you view this taboo as yet another example/consequence of how evolution has selected genes that lead to winning tribal coalition power struggles more than it has selected genes that lead to accurately determining abstract truths?

    Thus, humans have been “designed” to view any discussion, even a discussion of pure fact, in the context of how it would affect the balance of power between factions.

    • Thus, humans have been “designed” to view any discussion, even a discussion of pure fact, in the context of how it would affect the balance of power between factions.

      Because of the way we’re designed, any intellectual discussion does implicate factional commitments. But it’s intellectually dishonest to avoid the factual claims because of proponents’ ulterior motives.

      But there’s another kind of dishonesty which comes with denying such motives. Or in claiming that those motives aren’t ever a fitting topic for discussion.

  • free_agent

    I ran into this in Sunday’s Boston Globe:

    In some situations, like emergencies, it really helps to have backup plans. But if the situation doesn’t require it, a backup plan can actually be detrimental, according to new research. In several experiments, people were given a mental task, with the prospect of a reward for high performance. Before starting the task, some of these people were asked to think about how, if they didn’t get the reward, they could obtain something comparable. These people subsequently performed worse on the mental task. The effect appears to be explained by lower motivation, not distraction.

    Shin, J. & Milkman, K., “How Backup Plans Can Harm Goal Pursuit: The Unexpected Downside of Being Prepared for Failure,” Organizational Behavior and Human Decision Processes (July 2016).

  • kurt9

    Taboos against considering reality and how to deal with it are utterly silly. Reality exists, and reality is that which continues to exist even when you stop believing in it.

    • Yeah, but you might not continue to exist if you violate your society’s taboos.
