The InDirect-Check Sweet Spot

I have specialized somewhat in being a generalist intellectual. I know of two key strategies for pursuing this. The first one is pretty obvious, but still important: learn the basics of many different fields. The more fields you know, the more chances you will find to apply an insight from one field to another. So not only learn many fields, but keep looking for connections between them. That is, keep searching for ways to apply the insights in all the fields you know to all the other fields you know.

The second strategy is a bit less obvious. And that is to work hard to collect indirect tests and checks of everything you know. This doesn’t tend to happen naturally, because we mostly tend to learn only very direct tests of what we know.

Consider someone writing an oped. With experience, an oped writer will learn in great detail the emotional tones hit by each thing they might say. So they will learn to say things in ways that hit the right tones the right way at the right times. These are relatively direct tests, but not of the literal truth of each thing said. Instead these are tests of how people will react to things said.

Now consider someone writing code that is close to a user interface. In this sort of context, usually the only way the code can be wrong is to fail to give the proper appearances to users. If the system looks right to users, then for the most part it just is right, as there are few concepts of hidden mistakes or errors at this level.

In contrast, consider someone trying to create a computer simulation of a particular scientific model. This simulation could in fact be wrong, even though users don’t see any obvious mistakes. When you learn to write code like this, you have to learn to collect more ways to check your code, to look for errors. At least you do if you expect errors to eventually be discovered, and realize that it works out much better for you if you find such errors early, yourself, rather than having them found later by others.
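To make this concrete, here is a minimal illustrative sketch (my own example, not from the post) of what such indirect checks can look like in practice. The simulation below, a simple falling-object model, could “look fine” in its output yet still be wrong, so instead of eyeballing it we test it against independently known properties of the model: an analytic solution and a scaling law.

```python
import math

def simulate_fall(h0, dt=0.001, g=9.8):
    """Simulate an object dropped from height h0 (no air resistance);
    return the simulated time until it hits the ground."""
    h, v, t = h0, 0.0, 0.0
    while h > 0:
        v += g * dt   # update velocity from gravity
        h -= v * dt   # update height from velocity
        t += dt
    return t

# Indirect check 1: agreement with the closed-form answer t = sqrt(2*h/g).
t_sim = simulate_fall(100.0)
t_exact = math.sqrt(2 * 100.0 / 9.8)
assert abs(t_sim - t_exact) < 0.01, "disagrees with analytic solution"

# Indirect check 2: a scaling law -- quadrupling the height should
# roughly double the fall time, whatever the implementation details.
assert abs(simulate_fall(400.0) / simulate_fall(100.0) - 2.0) < 0.05
```

Neither check inspects the code directly; each compares its behavior against something else we independently believe about the model, which is exactly the kind of check that doesn’t exist for most user-interface code.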

Similarly, if you want to have your best shot at being a productive generalist, you should be collecting as many ways as possible to check each hypothesis or claim you might come across against all of the other things you know. That is, keep asking: “If this sort of thing were true, then we should expect to see that sort of pattern.”

You see, when you try to apply insights from some fields to other distantly related fields, most of the ideas you will come up with won’t be that easy to test or check directly. So if you are to have much of a chance of finding good applications, you’ll need to collect a big toolkit of ways to devise sanity checks that you can apply.

In contrast, most fields don’t really offer very strong incentives to collect indirect tests. Many fields clearly telegraph the conclusions you are supposed to reach, making it easy to check if your conclusions are among the desired ones. In many other fields, such as in writing fiction or sermons, one can test the quality of work relatively directly against how it seems to affect readers. They don’t care much there about any truth beyond creating the desired effects in readers.

But when you think about each new field you explore, it will be healthy if you fear the possibility that you will draw a tentative conclusion that will later turn out to look pretty wrong. This will push you to search for many different ways to check each hypothesis, to avoid such scenarios. You may well need to imagine that you will face different critical audiences than the people in those fields, as they may well not really care so much about such global consistency. But you need to, if you would learn to be a productive generalist.


Abstract Views Are Coming

Two years ago I predicted that the future will eventually take a long view:

If competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate. … The future not being neglected seems such a wonderfully good outcome that I’m tempted to call the “Long View Day” when this starts one of the most important future dates.

Today I predict that the future will also eventually take a more abstract view, also to its benefit. Let me explain.

Recently I posted on how while we don’t have a world government today, we do now have a close substitute: a strong culture of oft-talking world elites, that can and does successfully pressure authorities everywhere to adopt their consensus regulation opinions. This is much like how in forager bands, the prestigious would gossip to form a consensus plan, which everyone would follow.

This “world forager elite”, as I called them, includes experts, but often overrules them in their areas of expertise. And on the many topics for which this elite doesn’t bother to form a consensus, other institutions and powers are allowed to make key decisions.

The quality of their judgements depends on how able and knowledgeable is this global elite, and on how long and carefully they deliberate on each topic. And these parameters are in turn influenced by the types of topics on which they choose to have opinions, and on how thinly they spread themselves across the many topics they consider.

And this is where abstraction has great potential. For example, in order of increasing generality, these elites could form opinions on the particular kinds of straws offered in a particular London restaurant, or on plastic straws in general at all restaurants, or on all kinds of straws used everywhere, or on how to set taxes and subsidies for plastic and paper for all food use, or on how to set policy on all plastic and paper taxes and subsidies.

The higher they go up this abstraction ladder, the more that elites can economize on their efforts, to deal with many issues all at once. Yes, it can take more work to reason more abstractly, and there can be more ways to go wrong. And it often helps to first think about concrete examples, and then try to generalize to more abstract conclusions. But abstraction also helps to avoid biases that push us toward arbitrarily treating fundamentally similar things differently. And abstraction can better encompass indirect effects often neglected by concrete analysis. It is certainly my long experience as a social scientist and intellectual that abstraction often pays huge dividends.

So why don’t elites reason more abstractly now? Because they are mostly amateurs who do not understand most topics well enough to abstract them. And because they tend to focus on topics with strong moral colors, for which there is often an expectation of “automatic norms”, wherein we are just supposed to intuit norms without too much explicit analysis.

In the future, I expect us to have smarter, better-trained, better-selected elites (such as ems), who thus know more basics of more different fields, and are more able to reason abstractly about them. This has been the long term historical trend. Instead of thinking concrete issues through for themselves, and then overruling experts when they disagree, elites are more likely to focus on how to manage experts and give them better incentives, so they can instead trust expert judgements. This should produce better judgements about what to regulate how, and what to leave alone how.

The future will take longer, and more abstract, views. And thus make more sensible decisions. Finally.


A Perfect Storm of Inflexibility

Most biological species specialize for particular ecological niches. But some species are generalists, “specializing” in doing acceptably well in a wider range of niches, and thus also in rapidly changing niches. Generalist species tend to be more successful at generating descendant species. Humans are such a generalist species, in part via our unusual intelligence.

Today, firms in rapidly changing environments focus more on generality and flexibility. For example, CEO Andy Grove focused on making Intel flexible:

In Only the Paranoid Survive, Grove reveals his strategy for measuring the nightmare moment every leader dreads–when massive change occurs and a company must, virtually overnight, adapt or fall by the wayside–in a new way.

A focus on flexibility is part of why tech firms tend more often to colonize other industries today, rather than vice versa.

War is an environment that especially rewards generality and flexibility. “No plan survives contact with the enemy,” they say. Militaries often lose by preparing too well for the last war, and not adapting flexibly enough to new context. We usually pay extra for military equipment that can function in a wider range of environments, and train soldiers for a wider range of scenarios than we train most workers.

Centralized control has many costs, but one of its benefits is that it promotes rapid thoughtful coordination. Which is why most wars are run from a center.

Familiar social institutions tend to be run by those who have run parts of them well recently. As a result, long periods of peace and stability tend to promote specialists, who have learned well how to win within a relatively narrow range of situations. And those people tend to change our rules and habits to suit themselves.

Thus rule and habit changes tend to improve performance for rulers and their allies within the usual situations, often at the expense of flexibility for a wider range of situations. As a result, long periods of peace and stability tend to produce fragility, making us more vulnerable to big sudden changes. This is in part why software rots, and why institutions rot as well. (Generality is also often just more expensive.)

Through most of the farming era, war was the main driver pushing generality and flexibility. Societies that became too specialized and fragile lost the next big war, and were replaced by more flexible competitors. Revolutions and pandemics also contributed.

As the West has been peaceful and stable for a long time now, alas we must expect that our institutions and culture have been becoming more fragile, and more vulnerable to big unexpected crises. Such as this current pandemic. And in fact the East, which has been adapting to a lot more changes over the last few decades, including similar pandemics, has been more flexible, and is doing better. Being more authoritarian and communitarian also helps, as it tends to help in war-like times.

In addition to these two considerations, longer peace/stability and more democracy, we have two more reasons to expect problems with inflexibility in this crisis. The first is that medical experts tend to think less generally. To put it bluntly, most are bad at abstraction. I first noticed this when I was a RWJF social science health policy scholar, and under an exchange program I went to the RWJF medical science health policy scholar conference.

Biomed scholars are amazing in managing enormous masses of details, and bringing up just the right examples for any one situation. But most find it hard to think about probabilities, cost-benefit tradeoffs, etc. In my standard talk on my book Age of Em, I show this graph of the main academic fields, highlighting the fields I’ve studied:

Academia is a ring of fields where all the abstract ones are on one side, far from the detail-oriented biomed fields on the other side. (I’m good at and love abstractions, but have limited tolerance or ability for mastering masses of details.) So to the extent pandemic policy is driven by biomed academics, don’t expect it to be very flexible or abstractly reasoned. And my personal observation is that, of the people I’ve seen who have had insightful things to say recently about this pandemic, most are relatively flexible and abstract polymaths and generalists, not lost-in-the-weeds biomed experts.

The other reason to expect a problem with flexibility in responding to this pandemic is: many of the most interesting solutions seem blocked by ethics-driven medical regulations. As communities have strong needs to share ethical norms, and most people aren’t very good at abstraction, ethical norms tend to be expressed relatively concretely. Which makes it hard to change them when circumstances change rapidly. Furthermore we actually tend to punish the exceptional people who reason more abstractly about ethics, as we don’t trust them to have the right feelings.

Now humans do seem to have a special wartime ethics, which is more abstract and flexible. But we are quite reluctant to invoke that without war, even if millions seem likely to die in a pandemic. If billions seemed likely to die, maybe we would. We instead seem inclined to invoke the familiar medical ethics norm of “pay any cost to save lives”, which has pushed us into apparently endless and terribly expensive lockdowns, which may well end up doing more damage than the virus. And which may not actually prevent most from getting infected, leading to a near worst possible outcome. In which we would pay a terrible cost for our med ethics inflexibility.

When a sudden crisis appears, I suspect that generalists tend to know that this is a potential time for them to shine, and many of them put much effort into seeing if they can win respect by using their generality to help. But I expect that the usual rulers and experts, who have specialized in the usual ways of doing things, are well aware of this possibility, and try all the harder to close ranks, shutting out generalists. And much of the public seems inclined to support them. In the last few weeks, I’ve heard far more people say “don’t speak on pandemic policy unless you have a biomed Ph.D.” than I’ve ever in my lifetime heard people say “don’t speak on econ policy without an econ Ph.D.” (And the study of pandemics is obviously a combination of medical and social science topics; social scientists have much relevant expertise.)

The most likely scenario is that we will muddle through without actually learning to be more flexible and reason more generally; the usual experts and rulers will maintain control, and insist on all the usual rules and habits, even if they don’t work well in this situation. There are enough other things and people to blame that our inflexibility won’t get the blame it should.

But there are some more extreme scenarios here where things get very bad, and then some people somewhere are seen to win by thinking and acting more generally and flexibly. In those scenarios, maybe we do learn some key lessons, and maybe some polymath generalists do gain some well-deserved glory. Scenarios where this perfect storm of inflexibility washes away some of our long-ossified systems. A dark cloud’s silver lining.
