Recently I posted on how many seek spiritual insight via cutting the tendency of their minds to wander, yet some like Scott Alexander fear ems with a reduced tendency to mind wandering because they’d have less moral value. On Twitter Scott clarified that he doesn’t mind modest cuts in mind wandering; what he fears is extreme cuts. And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.
For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil, some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.
On nature preserves, some fear eventually losing all of wild nature, but when arguing for any particular development others say we need new things and we still have plenty of nature. On military spending, some say the world is peaceful and we have many things we’d rather spend money on, while others say that societies that do not remain militarily vigilant are eventually conquered. On increasing inequality, some say that high enough inequality must eventually result in inadequate human capital investments and destructive revolutions, while others say there’s little prospect of revolution now and inequality has historically only fallen much in big disasters such as famine, war, and state collapse. On value drift, some say it seems right to let each new generation choose its values, while others say a random walk in values across generations must eventually drift very far from current values.
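To see why the value drift worry has mathematical teeth, here is a minimal sketch, with an arbitrary assumed step size, of values drifting as an unbiased random walk: the expected change per generation is zero, yet the typical cumulative drift grows with the square root of the number of generations.

```python
import random

# Minimal sketch: a value drifts as an unbiased random walk across
# generations (the step size 0.05 is an arbitrary illustrative choice).
# The expected drift is zero, but the typical absolute drift grows
# like sqrt(number of generations), so it eventually gets large even
# though no single generation's step looks alarming.

def mean_abs_drift(generations, step=0.05, trials=10_000):
    """Average absolute displacement after `generations` steps."""
    total = 0.0
    for _ in range(trials):
        value = 0.0
        for _ in range(generations):
            value += random.choice((-step, step))
        total += abs(value)
    return total / trials

for g in (1, 10, 100, 1000):
    print(g, round(mean_abs_drift(g), 3))
# Roughly 0.05, 0.13, 0.40, 1.26: growing as step * sqrt(2g/pi).
```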
If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that recent changes may add up to a net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.
But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?
First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.
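As a minimal sketch of how cheap such influence is, assume a steady 5% annual real return; that figure, the dollar amounts, and the helper below are illustrative assumptions, not claims about actual markets. The present cost of delivering a fixed sum to a future date shrinks geometrically with the horizon.

```python
# Minimal sketch of why long term influence is cheap, assuming a
# steady 5% annual real return (an illustrative figure, not a claim
# about actual markets). The present cost of delivering a fixed sum
# at a future date shrinks geometrically with the horizon.

def cost_today(future_amount, years, annual_return=0.05):
    """Investment needed now to yield `future_amount` after `years`."""
    return future_amount / (1 + annual_return) ** years

for horizon in (10, 50, 100):
    print(horizon, round(cost_today(1_000_000, horizon)))
# About 613,913 / 87,204 / 7,604: a dollar aimed a century out buys
# over a hundred times the influence of a dollar spent today.
```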
Second, in our simplest models of the world, good local choices also produce good long term outcomes. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.
Third, our ability to foresee the future rapidly declines with time. The more other things that may happen between today and some future date, the harder it is to foresee what may happen at that future date. We should be increasingly careful about the inferences we draw about longer terms.
Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.
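Continuing the random walk sketch above, here is a minimal illustration, with an arbitrarily assumed restoring force, of how even a weak limiting factor changes the long run picture: typical drift plateaus instead of growing without bound.

```python
import random

# The same random walk as in the earlier sketch, but with a weak
# pull back toward the starting value (the pull strength 0.02 is an
# arbitrary illustrative choice). Any such limiting factor caps the
# typical drift: it plateaus instead of growing with sqrt(time).

def bounded_mean_abs_drift(generations, step=0.05, pull=0.02,
                           trials=10_000):
    """Average absolute displacement under a weak restoring force."""
    total = 0.0
    for _ in range(trials):
        value = 0.0
        for _ in range(generations):
            value += random.choice((-step, step)) - pull * value
        total += abs(value)
    return total / trials

for g in (10, 100, 1000):
    print(g, round(bounded_mean_abs_drift(g), 3))
# Levels off near 0.20 instead of growing without bound.
```

The toy makes the general point: whether a projection explodes or plateaus turns entirely on whether such limiting factors are at play.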
Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.
We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to resist them strongly. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.
And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.
"Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns."
I doubt the sincerity of this leap of logic.
"Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems."
Funny how the implicit but clearly false universal quantifier is ignored for the sake of the conclusion. Eating one delicious morsel is good but eating a pantryful is not. And similar issues of quantity are relevant to Robin's examples, not the "short term" vs. "long term" of his disingenuous logical shell game.
"For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch."
No, larger changes are *more dangerous*, so larger changes *without negative consequences* are increasingly unlikely.
"Third, our ability to foresee the future rapidly declines with time. The more other things that may happen between today and some future date, the harder it is to foresee what may happen at that future date. We should be increasingly careful about the inferences we draw about longer terms."
This is the intellectually dishonest argument that climate science deniers use ... but the correct analysis is that more uncertainty calls for more caution, not less.
"For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch."
No, larger changes *without error* are increasingly difficult. Larger changes *per se* are not at all impossible.
"Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy."
Another nutty leap of logic, and of course a strawman. One firm taking over an industry is bad enough. And if it's a critical industry, it can certainly have major consequences for the entire economy.
"Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes."
I wonder whether, when Robin wrote this, he remembered the previous time, oh so long ago, that he used that word ...
"It shouldn’t be sufficient to just point to the possibility of such problems."
Willard Quine on such arguments:
https://books.google.com/bo...
I guess you're just bowing to political reality when you propose limiting the discussion to times that people actually care about. It is frustratingly hard to argue against this, even wrt potentially apocalyptic scenarios.
In the interest of nudging people's planning horizons a bit, I *will* point out that even people who care only about their grandchildren's generation should be looking very far out, for two reasons: 1) their grandchildren could live a very long time, 2) caring presumably means caring about their grandkids' well-being (not just their survival), so they should consider that those grandchildren will have their own (likely much longer) planning horizons.