The ‘What If Failure?’ Taboo

Last night I heard a group of smart pundits and wonks discuss Tyler Cowen’s new book Average Is Over. This book is a sequel to his last, The Great Stagnation, where he argued that wage inequality has greatly increased in rich nations over the last forty years, and especially in the last fifteen years. In this new book, Tyler says this trend will continue for the next twenty years, and offers practical advice on how to personally navigate this new world.

Now while I’ve criticized Tyler for overemphasizing automation as a cause of this increased wage inequality, I agree that most of the trends he discusses are real, and most of his practical advice is sound. But I can also see reasonable grounds to dispute them, and I expected the pundits/wonks to join me in debating that. So I was surprised to see the discussion focus overwhelmingly on whether this increased inequality was acceptable. Didn’t Tyler understand that losers might be unhappy, and push the political system toward redistribution and instability?

Tyler quite reasonably said yes, this change might not be good overall, and yes, there might well be more redistribution, but it wouldn’t change the overall inequality much. He pointed out that most losers might be pretty happy with new ways to enjoy more free time; that our last peak of instability came in the ’60s, when inequality was at a minimum; that since we have mostly accepted increased inequality for forty years, it is reasonable to expect that to continue for another twenty; and that over history inequality has had only a weak correlation with redistribution and instability.

None of which seemed to dent the pundit/wonk mood. They seemed to hold fast to a simple moral principle: when a future change is framed as a problem which we might hope our political system to solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I think we see something similar with other trends framed as negatives, like global warming, bigger orgs, or increased regulation. Once such a trend is framed as an official bad thing which public policy might conceivably reduce, it becomes (mildly) taboo to seem to just accept the change and analyze how to deal with its consequences.

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.

  • praxeologue

    I suspect people in the ’80s would have been fretting about Japanese automation replacing humans and… oh hang on, they did…

    • Marcus Tullius

      Who says they didn’t? Car manufacturers use far fewer auto workers than they used to, because of Japanese-manufactured robots.

      • praxeologue

        Yes, but it didn’t lead to mass unemployment except in that industry. Just because we can’t see today what the demands of the future will be doesn’t mean there will be no jobs for those displaced by technology. It is the argument the Luddites used (“who will employ those displaced by the looms?”) and it is equally daft.

      • IMASBA

        Automation does add to structural unemployment, with a factor that’s proportional to the RATE of automation in the recent past. Unemployment goes up if the rate of automation increases.

        This has real-world consequences, because it creates the need for a safety net of unemployment benefits and retraining. (That is far cheaper than the damage done by having huge manpower shortages in every field likely to become automated in the next 40 years, which is what would happen if people avoided fields that risk becoming automated during their working lives.) There is also the risk that the gains of automation will not trickle down through society, meaning those who own the machines can eventually force the unemployed to work for subsistence-level wages.

        People aren’t averages and they don’t get free manna from the heavens during the years it takes long term macro economic trends to do their work.

      • praxeologue

        Oh boy. An unreconstructed Marxist. Imagine you were alive at the time of the Luddites… I suspect you’d have been telling someone like me that we had to smash those machines, because what would replace those jobs? And as for capitalists keeping the labourers on subsistence wages… well, Marx forecast that, along with many other things that have turned out to be completely wrong, but there’s no convincing some people. Equality of outcome above all else.

      • IMASBA

        The Luddites believed the extra structural unemployment caused by automation would pile up, i.e., be proportional to the total amount of automation in history, rather than to its recent rate. If you do not recognize the difference between that belief and what I said above (for Pete’s sake, I even put the word “rate” in boldface so you couldn’t miss it), you probably don’t know what a Marxist is either (hint: I’m not one).

      • IMASBA

        Capital letters it is, then; bold face doesn’t seem to work for non-registered commenters.

  • Dean Jens

    Politicians, especially executives-in-chief, sometimes seem to take pride in appearing not to have planned for bad contingencies. I’m thinking of recent assertions by the Obama administration, when asked what it would do if Congress didn’t authorize strikes in Syria, that this wouldn’t happen, and of similar instances of previous administrations refusing to take seriously questions about contingency planning, brushing them aside with “that’s not going to happen”. This seems in some ways contradictory to what you observe, and in other ways emblematic of it: it’s okay, perhaps, to talk about a future problem if you have a potential solution to it, but acknowledging potential problems with the solution (especially if you lack solutions to them) isn’t okay. (Here a “solution” is required to be something system-wide, not a way for individuals to cope in a new environment.)

    • IMASBA

      Politicians often believe that publicly announcing you have put a lot of effort into contingency plans, in case your own projects fail, makes the public perceive you as weak, which gives your rivals ammunition to make your projects fail (a self-fulfilling prophecy). This theory has been fed to them by every PR person who ever worked for them. Of course, politicians often do secretly work on contingency plans, and sometimes, as in war or economic upheaval, it really is true that a country can fall if its leaders do not appear confident enough.

  • Daublin

    Yes! It is an aggravating tendency. When it comes to public policy, it is the norm rather than the exception that it’s better to leave matters alone.

    I like to draw a comparison to neighborhood associations. With these small governments, people intuitively understand that everyone involved is likely to be a busybody and is unlikely to improve the majority of problems they might address. One can then ask what happens at scale that makes things any better.

    I do think you could use a catchier description than the “what if taboo”.

    For one, I am tempted to say it is more a “fallacy” than a “taboo”. Granted, in some cases–such as global warming–it really is a taboo.

    “What if” doesn’t say anything to me. I don’t understand why you call it that.

    Using “wonk” in the name would be helpful. A possible alternative would be “problem solver”.

    Also, there is some related work:

    – “The cure is worse than the disease.” I believe this is close to what you are looking for.

    – “If it ain’t broke, don’t fix it.” This is also close, but it would be better to generalize it to, “if it ain’t broke *enough*”.

    – Arnold Kling’s “Markets fail. Use markets”. It’s the same idea but applied specifically to markets. Unfortunately, I am not sure that many people outside of GMU follow the exact chain of logic that is implied by this slogan.

  • Blunt_Instrument

    Is inequality somehow “unnatural”? History seems to show that increasing societal wealth correlates with increasing inequality. Perhaps the mid-twentieth century (America) model of high societal wealth and relatively low inequality was an aberration and we are now returning to the “natural” state of wealthy societies.

    • IMASBA

      Why does it matter if it’s “unnatural” or not? If we have the means to create sustainable low inequality and most people would then be better off, while the rest would still do OK, is that not all that matters?

  • Stridulator

    The “assume system failure” mindset may be taboo in the context you have framed it in (wherein signaling a desire to solve a problem is more important than being right about whether it can practically be solved), but it doesn’t seem to me that this mindset is shared amongst ordinary, rational agents. Consider: “It’s easier to ask forgiveness than it is to get permission” (i.e., norm enforcers and systems do not wield the power they claim, so go ahead).

    Norm/rule-breaking behaviors may still be (mildly) taboo, as you put it, but subversion and hustle are two pillars of American culture, albeit off-white in color.


    “If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.”

    Making contingency plans in case of failure is rational on a personal level, but only if it doesn’t draw significant resources away from preventing the failure in the first place, and those don’t have to be physical resources; morale is also a resource an organization needs to get things done. You have to prevent disbelief in success from becoming a self-fulfilling prophecy. It’s really no different from bank runs: withdrawing your money from the bank is sometimes rational on a personal level, but you’re not doing the world a favor by advising everyone to withdraw their money from the banks; you’re exaggerating a problem that might otherwise be solved with a relatively small portion of the world’s resources. For climate change it’s even worse: preparing for failure in that department means allocating vast quantities of physical resources, and thus interferes with attempts to slow or halt climate change in a very physical manner. Another example would be soldiers fighting a battle: if individual soldiers waste time planning their own escape while the others fight on (or plan to survive increasing inequality by ripping off their equally poor neighbors), then the battle is surely lost before the first shot has been fired. It’s classic divide and conquer.

    Humans (and probably not only humans) need trust in each other and in ideas to change things, or even to get out of bed in the morning into the freezing cold. (Something utilitarians often have to find out the hard way: you can scam, cheat, and elbow your way through life, and in the end you may sit on a pile of gold, but you’ll find you can’t enjoy that gold, because what motivates you to live at all when you believe in nothing and no one cares about you?) That trust can be easily broken, and then society stops dead in its tracks, unable to solve macro problems, and you again get a self-fulfilling prophecy.

  • TheBrett

    I think Cowen underestimates the potential for backlash against the “all measurement and precision all the time” regime he talks about employees facing in his book. We went down an earlier version of this road with Frederick Taylor’s “Scientific Management” system in the early 20th century, and it provoked considerable labor unrest.

    • Fordism might have been too early for its own good. This was a time when labourers would protest month-long contracts because they wanted to be paid by the day.

    • George

      Should said individual’s children also receive a guaranteed basic income? If not, why not? If so, then at what point do you prevent future overpopulation from exhausting local land and resources? Future abundance will still be scarce given exponential population growth.

      • TheBrett

        Why not? Just say that every citizen of the US, upon turning 18, gets a guaranteed basic income amount per year for the rest of their life.

        We have mostly declining birth rates worldwide, and the few that aren’t declining are just bouncing back from rates below the replacement level. I’m not worried about that.

      • Anonymous

        “If so, then at what point do you prevent future overpopulation from exhausting local land and resources?”

        You don’t. If overpopulation really happens, there will be a point when the redistribution scheme becomes unsustainable, because you can’t tax people arbitrarily highly. Until then, you enjoy the temporary gains of political stability.

      • IMASBA

        If the population grows faster than economic output, the guaranteed basic income per capita would have to go down. Besides, the idea that giving poor people money will lead to a population explosion is a 19th-century notion that was decisively disproven by experience in the late 20th and early 21st centuries. It turns out women don’t really want 15 children, and people like the idea of contraception. Of course, economists could have figured that out back in the 19th century if they had done actual empirical research and had considered women as people instead of cattle, but then economists would have been real scientists… But suppose the 19th-century idea were right: in that case, the lowering of the guaranteed basic income would reduce the incentive to have more kids, and eventually an equilibrium would be established.

    • “Give us a true safety net in the form of a guaranteed basic income that puts an individual at the point where they could live an austere lifestyle but not go homeless or starve off of it”

      What if someone has more offspring than they can feed on their guaranteed income? (One purpose of a safety net is to protect from one’s own imprudence, particularly when it harms others like children.)

      • TheBrett

        Not sure on that one. You could do an add-on if they report up to two dependents, and then child services if they have a bunch of kids they can’t feed who are going hungry.

  • “No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.”

    I think this complaint is a tad disingenuous (as would be a similar complaint if registered by Tyler Cowen).

    Both of you could avoid this reaction with a simple expedient that would reduce the signaling value of moralistic disagreement. You could state unambiguously (with evident concern about being misunderstood) that you detest this kind of inequality, truly hope it can be avoided, but frankly think avoidance is impossible. You could also say that, despite the apparent inevitability, you hope your analysis will lead someone else to see a way to avert it.

    But you don’t (I don’t think). In the absence of such disclaimers, you are tacitly saying that you approve. (And you do!)

    Do you really not understand that claiming something is inevitable is a powerful way of proclaiming support (because people want to be on the winning side) and a sometimes effective way to bring something about?

    • IMASBA

      Yeah, those are good points. Especially in politics, not explicitly stating your discomfort with an event that you see as hard (or impossible) to prevent can be construed as support for that event. We have generations of word-twisters, who used the inevitability argument to further their own goals, to thank for that.

      So yes, strictly speaking, saying something is inevitable does not have to signal support, and maybe it doesn’t on some far-flung alien world, but here on Earth people are used to the inevitability argument being used to disguise support.

    • Are there no levels of dislike between “detest” and not disliking at all? Must one “truly” hope as opposed to just hope? Don’t you see that you are taking the absence of extreme language as indicating the opposite preference, and hence not allowing for any intermediate positions?

      • Can a utilitarian hold an intermediate position on the doctrine’s summative and additive versions? Readers will either love or hate em society. They can’t conceive of (I can’t conceive of) how you might arrive at a mildly adverse moral evaluation.

      • IMASBA

        The world has a history: things that logically don’t have to be still are, because of historic outcomes. There is no logical connection between elephants and American conservatism, but the Republican Party did choose an elephant as its mascot, so in our world an elephant does signify American conservatism.

  • ThaomasH

    The proper response to a negative trend is to weigh the costs and benefits of acting and of not acting. Doing too much and doing too little ought to be equally taboo.

  • lemmycaution

    Peter Turchin thinks that inequality moves in cycles: inequality builds to a crisis until workers get the power to reduce it, for example by restricting immigration or redistributing through taxes.

    That seems reasonable to me, so I think the press’s questions are reasonable.

  • vaniver

    Specific global warming example: I hear that for many years, people who published on geoengineering were considered traitors because they reduced the perceived importance of preventing climate change by discussing ways to adapt to climate change. (Now that it seems clear that preventing climate change is a lost cause politically, they are mostly accepted again because adaptation is now clearly necessary.)

    • Global warming is completely different (pace Robin).

      In that issue, there are (perceived to be) actual doable things to slow it. Certainly, improving the ways people adapt to it will reduce the incentive to avoid it.

      • vaniver

        It’s not clear to me how that makes it different. Adapting to robots winning, say, will also reduce the incentive to avoid robots winning.

        (It also assumes that the best coping method is prevention, rather than adaptation, which is hard to tell if you haven’t seriously considered adaptation.)

      • “It’s not clear to me how that makes it different. Adapting to robots winning, say, will also reduce the incentive to avoid robots winning.”

        Strictly, the way it’s different is that it can’t be explained by the logic of opposed moral positions.

      • IMASBA

        There is no coping: it’s death/enslavement at the hands of the robots/ems, or prevention. You don’t need to weigh a lot of pros and cons…

      • Alexander Gabriel

        Your first sentence, I think, is basically right.

        But it seems good to note that there are singularity scenarios, which we have no reason to dismiss, in which we survive. It’s a coin toss. So it may make sense to split resources between coping with a “survive” toss and trying to prevent the toss, even if we can’t change the coin’s odds.

    • Alexander Gabriel

      For global warming that sounds more plausible, but I’m not convinced it’s true with a singularity, because the limiting factor there may be awareness and not desire. If that’s true then individual action and political policy could actually be mutually reinforcing.

  • As one economist once told me, “if there is no solution, there is no problem”

  • Alexander Gabriel

    With only a small minority aware of a singularity, political action can’t happen now, only individual planning.

    That said, I don’t really understand your reasons for saying politics can’t prevent things even in the longer run. We might say there are three possible failure points for politics: forecasting, inclination, and trust. Since you are forecasting a singularity, it doesn’t seem to be the first. Do you predict an outright lack of desire by individual governments, or just a lack of trust between nations leading to coordination failure? With nuclear weapons we obviously have the latter, since nobody would develop them except in response to foreign threats. The internet, on the other hand, (rightly) falls into the former category.

  • Philon

    “If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.” This negative attitude toward you would best be rationalized by appeal to the EPH, the Efficient Politics Hypothesis, analogous to the Efficient Market Hypothesis. According to the EPH, if there is a political problem that can be solved, the system will work: it *will* be solved, incorporating all available information into the political process that generates the solution. (Additional assumption: virtually all political problems, at least those that are foreseen, can be solved.)

    But of course the EPH, once formulated, looks ‘way too
