Tag Archives: Abstraction

How Group Minds Differ

We humans have remarkable minds, minds more capable in many ways than those of any other animal, or of any artificial system so far created. Many give a lot of thought to the more capable artificial “super-intelligences” that we will likely create someday. But I’m more interested now in the “super-intelligences” that we already have: group minds.

Today, groups of humans together form larger minds that are in many ways more capable than individual minds. In fact, the human mind evolved mainly to function well in bands of 20-50 foragers, who lived closely for many years. And today the seven billion of us are clumped together in many ways into all sorts of group minds.

Consider a four-way classification:

  1. Natural – The many complex mechanisms we inherit from our forager ancestors enable us to fluidly and effectively manage small tightly-interacting group minds without much formal organization.
  2. Formal – The formal structures of standard organizations (i.e., those with “org charts”) allow much larger group minds for firms, clubs, and governments.
  3. Mobs – Loose informal communities structured mainly by simple gossip and status, sometimes called “mobs”, often form group minds on vast, even global, scales.
  4. Special – Specialized communities like academic disciplines can often form group minds on particular topics using less structure.

A quick web search finds that many embrace the basic concept of group minds, but I found few directly addressing this very basic question: how do group minds tend to differ from individual human minds? The answer to this seems useful in imagining futures where group minds matter even more than today.

In fact, future artificial minds are likely to be created and regulated by group minds, and in their own image, just as the modularity structure of software today usually reflects the organization structure of the group that made it. The main limit to getting better artificial minds later might be in getting better group minds before then.

So, how do group minds differ from individual minds? I can see several ways. One obvious difference is that, while human brains are very parallel computers, when humans reason consciously, we tend to reason sequentially. In contrast, large group minds mostly reason in parallel. This can make it a little harder to find out what they think at any one time.

Another difference is that while human brains are organized according to levels of abstraction, and devote roughly similar resources to different abstraction levels, standard formal organizations devote far fewer resources to higher levels of abstraction. It is hard to tell if mobs also suffer a similar abstract-reasoning deficit.

As mobs lack centralized coordination, it is much harder to have a discussion with a mob, or to persuade a mob to change its mind. It is hard to ask a mob to consider a particular case or argument. And it is especially hard to have a Socratic dialogue with a mob, wherein you ask it questions and try to get it to admit that different answers it has given contradict each other.

As individuals in mobs have weaker incentives regarding accuracy, mobs try less hard to get their beliefs right. Individuals in mobs instead have stronger incentives to look good and loyal to other mob members. So mobs are rationally irrational in elections, and we created law to avoid the rush-to-judgment failures of mobs. As a result, mobs more easily get stuck on particular socially-desirable beliefs.

When each person in the mob wants to show their allegiance and wisdom by backing a party line, it is harder for such a mob to give much thought to the possibility that its party line might be wrong. Individual humans, in contrast, are better able to systematically consider how they might be wrong. Such thoughts more often actually induce them to change their minds.

Compared to mobs, standard formal orgs are at least able to have discussions, engage arguments, and consider that they might be wrong. However, as these happen mostly via the support of top org people, and few people are near that top, this conversation capacity is quite limited compared to that of individuals. But at least it is there. However, such organizations also suffer from many known problems, such as yes-men and reluctance to pass bad news up the chain.

At the global level, one of the big trends over the last few decades is away from the formal org group minds of nations, churches, and firms, and toward the mob group mind of a world-wide elite, supported by mob-like expert group minds in academia, law, and media. Our world is thus likely to suffer more soon from mob mind inadequacies.

Prediction markets are capable of creating fast-thinking accurate group minds that consider all relevant levels of abstraction. They can even be asked questions, though not as fluidly and easily as can individuals. If only our mob minds didn’t hate them so much.
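The post doesn’t specify a market mechanism, but as a hedged illustration of how a prediction market can act as a queryable group mind, here is a minimal sketch of a logarithmic market scoring rule (LMSR) market maker for a single yes/no question. The class name, the liquidity parameter b, and the toy trade sizes are illustrative assumptions, not anything stated above.

    import math

    class LMSRMarket:
        # Minimal logarithmic market scoring rule (LMSR) market maker for one
        # yes/no question. The liquidity parameter b is an illustrative choice;
        # larger b means prices move less per trade.

        def __init__(self, b=100.0):
            self.b = b
            self.q = [0.0, 0.0]  # outstanding shares for [yes, no]

        def cost(self, q):
            # Cost function C(q) = b * ln(sum_i exp(q_i / b)).
            return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

        def price(self, outcome):
            # Marginal price of an outcome, which doubles as the market's current
            # implied probability, i.e. the group's consensus answer.
            exps = [math.exp(qi / self.b) for qi in self.q]
            return exps[outcome] / sum(exps)

        def buy(self, outcome, shares):
            # A trader buys shares of an outcome and pays the cost difference.
            new_q = list(self.q)
            new_q[outcome] += shares
            paid = self.cost(new_q) - self.cost(self.q)
            self.q = new_q
            return paid

    # Anyone can "ask" the market at any time by reading its current price.
    market = LMSRMarket(b=100.0)
    print(round(market.price(0), 3))  # 0.5 before any trades
    market.buy(0, 30)                 # a trader bets on "yes"
    print(round(market.price(0), 3))  # consensus probability rises above 0.5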


The Indirect-Check Sweet Spot

I have specialized somewhat in being a generalist intellectual. I know of two key strategies for pursuing this. The first one is pretty obvious, but still important: learn the basics of many different fields. The more fields you know, the more chances you will find to apply an insight from one field to another. So not only learn many fields, but keep looking for connections between them. That is, keep searching for ways to apply the insights in all the fields you know to all the other fields you know.

The second strategy is a bit less obvious. And that is to work hard to collect indirect tests and checks of everything you know. This doesn’t tend to happen naturally, because we mostly tend to learn only very direct tests of what we know.

Consider someone writing an oped. With experience, an oped writer will learn in great detail the emotional tones hit by each thing they might say. So they will learn to say things in ways that hit the right tones the right way at the right times. These are relatively direct tests, but not of the literal truth of each thing said. Instead these are tests of how people will react to things said.

Now consider someone writing code that is close to a user interface. In this sort of context, usually the only way the code can be wrong is to fail to give the proper appearances to users. If the system looks right to users, then for the most part it just is right, as there are few concepts of hidden mistakes or errors at this level.

In contrast, consider someone trying to create a computer simulation of a particular scientific model. This simulation could in fact be wrong, even though users don’t see any obvious mistakes. When you learn to write code like this, you have to learn to collect more ways to check your code, to look for errors. At least you do if you expect errors to eventually be discovered, and if it works out much better for you to find such errors early yourself than to have them found by others later.
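As a hedged illustration of what collecting such extra checks can look like in simulation code, here is a minimal sketch: a toy pendulum simulation whose output can look fine to a user either way, plus an indirect conservation-of-energy check that can catch a hidden bug such as a sign error or an unstable integrator. The model, function names, and tolerance are made-up illustrative choices, not anything from the post.

    import math

    def simulate_pendulum(theta0, dt=0.001, steps=10_000, g=9.8, length=1.0):
        # Toy undamped pendulum, semi-implicit Euler; returns (angle, velocity) history.
        theta, omega = theta0, 0.0
        history = []
        for _ in range(steps):
            omega += -(g / length) * math.sin(theta) * dt
            theta += omega * dt
            history.append((theta, omega))
        return history

    def energy(theta, omega, g=9.8, length=1.0, mass=1.0):
        # Total mechanical energy: kinetic plus potential.
        return 0.5 * mass * (length * omega) ** 2 + mass * g * length * (1 - math.cos(theta))

    def energy_conserved(history, theta0, tol=0.01):
        # Indirect check: an undamped pendulum should keep its energy nearly constant.
        # A plot of the motion can look plausible even when a bug is corrupting results;
        # this check does not depend on a user noticing anything.
        e0 = energy(theta0, 0.0)
        drift = max(abs(energy(t, w) - e0) for t, w in history)
        return drift / e0 < tol

    history = simulate_pendulum(theta0=0.5)
    assert energy_conserved(history, theta0=0.5), "energy drift suggests a hidden bug"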

Similarly, if you want to have your best shot at being a productive generalist, you should be collecting as many ways as possible to check each hypothesis or claim you might come across against all of the other things you know. That is, checks of the form: if this sort of thing were true, then we should expect to see that sort of pattern.

You see, when you try to apply insights from some fields to other distantly related fields, most of the ideas you will come up with won’t be that easy to test or check directly. So if you are to have much of a chance of finding good applications, you’ll need to collect a big toolkit of ways to devise sanity checks that you can apply.

In contrast, most fields don’t really offer very strong incentives to collect indirect tests. Many fields clearly telegraph the conclusions you are supposed to reach, making it easy to check if your conclusions are among the desired ones. In many other fields, such as in writing fiction or sermons, one can test the quality of work relatively directly against how it seems to affect readers. They don’t care much there about any truth beyond creating the desired effects in readers.

But when you think about each new field you explore, it will be healthy if you fear the possibility that you will draw a tentative conclusion that will later turn out to look pretty wrong. This will push you to search for many different ways to check each hypothesis, to avoid such scenarios. You may well need to imagine that you will face different critical audiences than the people in those fields, as they may well not really care so much about such global consistency. But you need to, if you are to become a productive generalist.


Abstract Views Are Coming

Two years ago I predicted that the future will eventually take a long view:

If competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate. … The future not being neglected seems such a wonderfully good outcome that I’m tempted to call the “Long View Day” when this starts one of the most important future dates.

Today I predict that the future will also eventually take a more abstract view, also to its benefit. Let me explain.

Recently I posted on how while we don’t have a world government today, we do now have a close substitute: a strong culture of oft-talking world elites, that can and does successfully pressure authorities everywhere to adopt their consensus regulation opinions. This is much like how in forager bands, the prestigious would gossip to form a consensus plan, which everyone would follow.

This “world forager elite”, as I called them, includes experts, but often overrules them in their areas of expertise. And on the many topics for which this elite doesn’t bother to form a consensus, other institutions and powers are allowed to make key decisions.

The quality of their judgements depends on how able and knowledgeable this global elite is, and on how long and carefully they deliberate on each topic. And these parameters are in turn influenced by the types of topics on which they choose to have opinions, and on how thinly they spread themselves across the many topics they consider.

And this is where abstraction has great potential. For example, in order of increasing generality, these elites could form opinions on the particular kinds of straws offered in a particular London restaurant, or on plastic straws in general at all restaurants, or on all kinds of straws used everywhere, or on how to set taxes and subsidies for plastic and paper for all food use, or on how to set policy on all plastic and paper taxes and subsidies.

The higher they go up this abstraction ladder, the more that elites can economize on their efforts, to deal with many issues all at once. Yes, it can take more work to reason more abstractly, and there can be more ways to go wrong. And it often helps to first think about concrete examples, and then try to generalize to more abstract conclusions. But abstraction also helps to avoid biases that push us toward arbitrarily treating fundamentally similar things differently. And abstraction can better encompass indirect effects often neglected by concrete analysis. It is certainly my long experience as a social scientist and intellectual that abstraction often pays huge dividends.

So why don’t elites reason more abstractly now? Because they are mostly amateurs who do not understand most topics well enough to abstract them. And because they tend to focus on topics with strong moral colors, for which there is often an expectation of “automatic norms”, wherein we are just supposed to intuit norms without too much explicit analysis.

In the future, I expect us to have smarter better-trained better-selected elites (such as ems), who thus know more basics of more different fields, and are more able to reason abstractly about them. This has been the long term historical trend. Instead of thinking concrete issues through for themselves, and then overruling experts when they disagree, elites are more likely to focus on how to manage experts and give them better incentives, so they can instead trust expert judgements. This should produce better judgements about what to regulate how, and what to leave alone how.

The future will take longer, and more abstract, views. And thus make more sensible decisions. Finally.


A Perfect Storm of Inflexibility

Most biological species specialize for particular ecological niches. But some species are generalists, “specializing” in doing acceptably well in a wider range of niches, and thus also in rapidly changing niches. Generalist species tend to be more successful at generating descendant species. Humans are such a generalist species, in part via our unusual intelligence.

Today, firms in rapidly changing environments focus more on generality and flexibility. For example, CEO Andy Grove focused on making Intel flexible:

In Only the Paranoid Survive, Grove reveals his strategy for measuring the nightmare moment every leader dreads–when massive change occurs and a company must, virtually overnight, adapt or fall by the wayside–in a new way.

A focus on flexibility is part of why tech firms tend more often to colonize other industries today, rather than vice versa.

War is an environment that especially rewards generality and flexibility. “No plan survives contact with the enemy,” they say. Militaries often lose by preparing too well for the last war, and not adapting flexibly enough to new context. We usually pay extra for military equipment that can function in a wider range of environments, and train soldiers for a wider range of scenarios than we train most workers.

Centralized control has many costs, but one of its benefits is that it promotes rapid thoughtful coordination. Which is why most wars are run from a center.

Familiar social institutions tend to be run by those who have run parts of them well recently. As a result, long periods of peace and stability tend to promote specialists, who have learned well how to win within a relatively narrow range of situations. And those people tend to change our rules and habits to suit themselves.

Thus rule and habit changes tend to improve performance for rulers and their allies within the usual situations, often at the expense of flexibility for a wider range of situations. As a result, long periods of peace and stability tend to produce fragility, making us more vulnerable to big sudden changes. This is in part why software rots, and why institutions rot as well. (Generality is also often just more expensive.)

Through most of the farming era, war was the main driver pushing generality and flexibility. Societies that became too specialized and fragile lost the next big war, and were replaced by more flexible competitors. Revolutions and pandemics also contributed.

As the West has been peaceful and stable for a long time now, alas we must expect that our institutions and culture have been becoming more fragile, and more vulnerable to big unexpected crises. Such as this current pandemic. And in fact the East, which has been adapting to a lot more changes over the last few decades, including similar pandemics, has been more flexible, and is doing better. Being more authoritarian and communitarian also helps, as it tends to help in war-like times.

In addition to these two considerations, longer peace/stability and more democracy, we have two more reasons to expect problems with inflexibility in this crisis. The first is that medical experts tend to think less generally. To put it bluntly, most are bad at abstraction. I first noticed this when I was an RWJF social science health policy scholar, and under an exchange program I went to the RWJF medical science health policy scholar conference.

Biomed scholars are amazing in managing enormous masses of details, and bringing up just the right examples for any one situation. But most find it hard to think about probabilities, cost-benefit tradeoffs, etc. In my standard talk on my book Age of Em, I show a graph of the main academic fields, highlighting the fields I’ve studied.

Academia is a ring of fields where all the abstract ones are on one side, far from the detail-oriented biomed fields on the other side. (I’m good at and love abstractions, but have limited tolerance or ability for mastering masses of details.) So to the extent pandemic policy is driven by biomed academics, don’t expect it to be very flexible or abstractly reasoned. And my personal observation is that, of the people I’ve seen who have had insightful things to say recently about this pandemic, most are relatively flexible and abstract polymaths and generalists, not lost-in-the-weeds biomed experts.

The other reason to expect a problem with flexibility in responding to this pandemic is: many of the most interesting solutions seem blocked by ethics-driven medical regulations. As communities have strong needs to share ethical norms, and most people aren’t very good at abstraction, ethical norms tend to be expressed relatively concretely. Which makes it hard to change them when circumstances change rapidly. Furthermore we actually tend to punish the exceptional people who reason more abstractly about ethics, as we don’t trust them to have the right feelings.

Now humans do seem to have a special wartime ethics, which is more abstract and flexible. But we are quite reluctant to invoke that without war, even if millions seem likely to die in a pandemic. If billions seemed likely to die, maybe we would. We instead seem inclined to invoke the familiar medical ethics norm of “pay any cost to save lives”, which has pushed us into apparently endless and terribly expensive lockdowns, which may well end up doing more damage than the virus. And which may not actually prevent most from getting infected, leading to a near worst possible outcome, in which we would pay a terrible cost for our med ethics inflexibility.

When a sudden crisis appears, I suspect that generalists tend to know that this is a potential time for them to shine, and many of them put much effort into seeing if they can win respect by using their generality to help. But I expect that the usual rulers and experts, who have specialized in the usual ways of doing things, are well aware of this possibility, and try all the harder to close ranks, shutting out generalists. And much of the public seems inclined to support them. In the last few weeks, I’ve heard far more people say “don’t speak on pandemic policy unless you have a biomed Ph.D.” than I’ve ever in my lifetime heard people say “don’t speak on econ policy without an econ Ph.D.” (And the study of pandemics is obviously a combination of medical and social science topics; social scientists have much relevant expertise.)

The most likely scenario is that we will muddle through without actually learning to be more flexible and reason more generally; the usual experts and rulers will maintain control, and insist on all the usual rules and habits, even if they don’t work well in this situation. There are enough other things and people to blame that our inflexibility won’t get the blame it should.

But there are some more extreme scenarios here where things get very bad, and then some people somewhere are seen to win by thinking and acting more generally and flexibly. In those scenarios, maybe we do learn some key lessons, and maybe some polymath generalists do gain some well-deserved glory. Scenarios where this perfect storm of inflexibility washes away some of our long-ossified systems. A dark cloud’s silver lining.
