Author Archives: Robin Hanson

Three Types of General Thinkers

Ours is an era of rising ideological fervor, moving toward something like the Chinese cultural revolution, with elements of both religious revival and witch hunt repression. While good things may come of this, we risk exaggeration races, wherein people try to outdo each other in showing loyalty via ever more extreme and implausible claims, policies, and witch indicators.

One robust check on such exaggeration races could be a healthy community of intellectual generalists. Smart thoughtful people who are widely respected on many topics, who can clearly see the exaggerations, see that others of their calibre also see them, and who crave such associates’ respect enough to then call out those exaggerations. Like the child who said the emperor wore no clothes.

So are our generalists up to this challenge? As such communities matter to us for this and many other reasons, let us consider more who they are and how they are organized. I see three kinds of intellectual generalists: philosophers, polymaths, and public intellectuals.

Public intellectuals seem easiest to analyze. Compared to other intellectuals, these mix with and are selected more by a wider public and a wider world of elites, and thus pander more to such groups. They make less use of specialized intellectual tools or language, their arguments are shorter and simpler, they impress more via status, eloquent language, and cultural references, and they must speak primarily to the topics currently in public talk fashion.

Professional philosophers, in contrast, focus more on pleasing each other than a wider world. Compared to public intellectuals, they are more willing to use specialized language for particular topics, to develop intricate arguments, and to participate in back and forth debates. As the habits and tools that they learn can be applied to a pretty wide range of topics, philosophers are in that sense generalists.

But philosophers are also very tied to their particular history. More so than in other disciplines, particular historical philosophers are revered as heroes and models. Frequent readings and discussions of their classic texts push philosophers to try to retain their words, concepts, positions, arguments, and analysis styles.

As I use the term, polymaths are intellectuals who meet the usual qualifications to be seen as expert in many different intellectual disciplines. For example, they may publish in discipline-specific venues for many disciplines. More points for a wider range of disciplines, and for intellectual projects that combine expertise from multiple disciplines. Learning and integrating many diverse disciplines can force them to generalize from discipline specific insights.

Such polymaths tend less to write off topics as beyond the scope of their expertise. But they also just write less about everything, as our society offers far fewer homes to polymaths than to philosophers or public intellectuals. They must mostly survive on the edge of particular disciplines, or as unusually-expert public intellectuals.

If the disciplines that specialize in thinking about X tend to have the best tools and analysis styles for thinking about X, then we should prefer to support and listen to polymaths, compared to other types of generalist intellectuals. But until we manage to fund them better, they are rarely available to hear from.

Public intellectuals have the big advantage that they can better get the larger world to listen to their advice. And while philosophers suffer their historical baggage, they have the big advantage of stable funding and freedoms to think about non-fashionable topics, to consider complex arguments, and to pander less to the public or elites.

Aside from more support for polymaths, I’d prefer public intellectuals to focus more on impressing each other, instead of wider publics or elites. And I’d rather they tried to impress each other more with arguments, than with their eliteness and culture references. As for philosophers, I’d rather that they paid less homage to their heritage, and instead more adopted the intellectual styles and habits that are now common across most other disciplines. The way polymaths do. I don’t want to cut all differences, but some cuts seem wise.

As to whether any of these groups will effectively call out the exaggerations of the coming era of ideological fervor, I alas have grave doubts.

I wrote this post as my Christmas present to Tyler Cowen; this topic was the closest I could manage to the topic he requested.


We Don’t Have To Die

You are mostly the mind (software) that runs on the brain (hardware) in your head; your brain and body are tools supporting your mind. If our civilization doesn’t collapse but instead advances, we will eventually be able to move your mind into artificial hardware, making a “brain emulation”. With an artificial brain and body, you could live an immortal life, a life as vivid and meaningful as your life today, where you never need feel pain, disease, grime, and your body always looks and feels young and beautiful. That person might not be exactly you, but they could (at first) be as similar to you as the 2001 version of you was to you today. I describe this future world of brain emulations in great detail in my book The Age of Em.

Alas, this scenario can’t work if your brain is burned or eaten by worms soon. But the info that specifies you is now only a tiny fraction of all the info in your brain and is redundantly encoded. So if we freeze all the chemical processes in your brain, either via plastination or liquid nitrogen, quite likely enough info can be found there to make a brain emulation of you. So “all” that stands between you and this future immortality is freezing your brain and then storing it until future tech improves.

If you are with me so far, you now get the appeal of “cryonics”, which over the last 54 years has frozen ~500 people when the usual medical tech gave up on them. ~3000 are now signed up for this service, and the [2nd] most popular provider charges $28K, though you should budget twice that for total expenses. (The 1st most popular charges $80K.) If you value such a life at a standard $7M, this price is worth it even if this process has only a 0.8% chance of working. It’s worth more if an immortal life is worth more, and more if your loved ones come along with you.
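The break-even arithmetic behind that 0.8% figure can be checked directly; this short sketch just restates the post's own numbers:

```python
# Break-even sketch for the cryonics numbers above (all figures from the post).
price = 28_000          # the 2nd most popular provider's charge
total_cost = 2 * price  # the post says to budget twice that for total expenses
life_value = 7_000_000  # a "standard" statistical value of a life

# The purchase breaks even when p * life_value >= total_cost.
break_even_p = total_cost / life_value
print(f"break-even chance of working: {break_even_p:.1%}")  # 0.8%
```

A higher valuation of an immortal life, or extra value from loved ones joining, lowers this break-even probability further.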

So is this chance of working over 0.8%? Some failure modes seem to me unlikely: civilization collapses, frozen brains don’t save enough info, or you die in a way that prevents freezing. And if billions of people used this service, there’d be a question of whether the future is willing, able, and allowed to revive you. But with only a few thousand others frozen, that’s just not a big issue. All these risks together have well below a 50% chance, in my opinion.

The biggest risk you face then is organizational failure. And since you don’t have to pay them if they aren’t actually able to freeze you at the right time, your main risk re your payment is re storage. Instead of storing you until future tech can revive you, they might instead mismanage you, or go bankrupt, allowing you to thaw. This already happened at one cryonics org.

If frozen today, I judge your chance of successful revival to be at least 5%, making this service worth the cost even if you value such an immortal future life at only 1/6 of a standard life. And life insurance makes it easier to arrange the payment. But more important, this is a service where the reliability and costs greatly improve with more customers. With a million customers, instead of a thousand, I estimate cost would fall, and reliability would increase, each by a factor of ten.

Also, with more customers cryonics providers could afford to develop plastination, already demonstrated in research, into a practical service. This lets people be stored at room temperature, and thus ends most storage risk. Yes, with more customers, each might need to also pay to have future folks revive them, and to have something to live on once revived. But long time delays make that cheap, and so with enough customers total costs could fall to less than that of a typical funeral today. Making this a good bet for most everyone.

When the choice is between a nice funeral for Aunt Sally or having Aunt Sally not actually die, who will choose the funeral? And by buying cryonics for yourself, you also help move us toward the low cost cryonics world that would be much better for everyone. Most people prefer to extend existing lives over creating new ones.

Thus we reach the title claim of this post: if we coordinated to have many customers, it would be cheap for most everyone to not die. That is: most everyone who dies today doesn’t actually need to die! This is possible now. Ancient Egyptians, relative rationalists among the ancients, paid to mummify millions, a substantial fraction of their population, and also a similar number of animals, in hope of later revival. But we now actually can mummify to allow revival, yet we have only done that to 500 people, over a period when over 4 billion people have died.

Why so few cryonics customers? When I’ve taught health economics, over 10% of students judge the chances of cryonics working to be high enough to justify a purchase. Yet none ever buy. In a recent poll, 31.5% of my followers said they planned to sign up, but few have. So the obstacle isn’t supporting beliefs, it is the courage to act on such beliefs. It looks quite weird to act on a belief in cryonics. So weird that spouses often divorce those who do. (But not spouses who spend a similar amount to send their ashes into space, which looks much less weird.) We like to think we tolerate diversity, and we do for unimportant stuff, but for important stuff we in fact strongly penalize diversity.

Sure it would help if our official medical experts endorsed the idea, but they are just as scared of non-conformity, and also stuck on a broken concept of “science” which demands someone actually be revived before they can declare cryonics feasible. Scientists who applied that standard elsewhere would insist we can’t say our sun will burn out until it actually does, or that rockets could take humans to Mars until a human actually stands on Mars. The fact that their main job is to prevent death and they could in fact prevent most death doesn’t weigh much on them relative to showing allegiance to a broken science concept.

Severe conformity pressures also seem the best explanation for the bizarre range of objections offered to cryonics, objections that are not offered re other ways to cut death rates. The most common objection offered is just that it seems “unnatural”. My beloved colleague Tyler said reducing your death rate this way is selfish, you might be tortured if you stay alive, and in an infinite multiverse you can never die. Others suggest that freezing destroys your soul, that it would hurt the environment, that living longer would slow innovation, that you might be sad to live in a world different from that of your childhood, or that it is immoral to buy products that not absolutely everyone can afford.

While I wrote a pretty similar post a year ago, I wrote this as my Christmas present to Alex Tabarrok, who requested this topic.

Added 17Dec: The chance the future would torture a revived you is related to the chance we would torture an ancient revived today:

Answers were similar re a random older person alive today. And people today are actually tortured far less often than this suggests, as we organize society to restrain random individual torture inclinations. We should expect the future to also organize to prevent random torture, including of revived cryonics patients.

Also, if there were millions of such revived people, they could coordinate to revive each other and to protect each other from torture. Torture really does seem a pretty minor issue here.


How Group Minds Differ

We humans have remarkable minds, minds more capable in many ways than those of any other animal, or any artificial system so far created. Many give a lot of thought to the more capable artificial “super-intelligences” that we will likely create someday. But I’m more interested now in the “super-intelligences” that we already have: group minds.

Today, groups of humans together form larger minds that are in many ways more capable than individual minds. In fact, the human mind evolved mainly to function well in bands of 20-50 foragers, who lived closely for many years. And today the seven billion of us are clumped together in many ways into all sorts of group minds.

Consider a four-way classification:

  1. Natural – The many complex mechanisms we inherit from our forager ancestors enable us to fluidly and effectively manage small tightly-interacting group minds without much formal organization.
  2. Formal – The formal structures of standard organizations (i.e., those with “org charts”) allow much larger group minds for firms, clubs, and governments.
  3. Mobs – Loose informal communities structured mainly by simple gossip and status, sometimes called “mobs”, often form group minds on vast, even global, scales.
  4. Special – Specialized communities like academic disciplines can often form group minds on particular topics using less structure.

A quick web search finds that many embrace the basic concept of group minds, but I found few directly addressing this very basic question: how do group minds tend to differ from individual human minds? The answer to this seems useful in imagining futures where group minds matter even more than today.

In fact, future artificial minds are likely to be created and regulated by group minds, and in their own image, just as the modularity structure of software today usually reflects the organization structure of the group that made it. The main limit to getting better artificial minds later might be in getting better group minds before then.

So, how do group minds differ from individual minds? I can see several ways. One obvious difference is that, while human brains are very parallel computers, when humans reason consciously, we tend to reason sequentially. In contrast, large group minds mostly reason in parallel. This can make it a little harder to find out what they think at any one time.

Another difference is that while human brains are organized according to levels of abstraction, and devote roughly similar resources to different abstraction levels, standard formal organizations devote far fewer resources to higher levels of abstraction. It is hard to tell if mobs also suffer a similar abstract-reasoning deficit.

As mobs lack centralized coordination, it is much harder to have a discussion with a mob, or to persuade a mob to change its mind. It is hard to ask a mob to consider a particular case or argument. And it is especially hard to have a Socratic dialogue with a mob, wherein you ask it questions and try to get it to admit that different answers it has given contradict each other.

As individuals in mobs have weaker incentives regarding accuracy, mobs try less hard to get their beliefs right. Individuals in mobs instead have stronger incentives to look good and loyal to other mob members. So mobs are rationally irrational in elections, and we created law to avoid the rush-to-judgment failures of mobs. As a result, mobs more easily get stuck on particular socially-desirable beliefs.

When each person in the mob wants to show their allegiance and wisdom by backing a party line, it is harder for such a mob to give much thought to the possibility that its party line might be wrong. Individual humans, in contrast, are better able to systematically consider how they might be wrong. Such thoughts more often actually induce them to change their minds.

Compared to mobs, standard formal orgs are at least able to have discussions, engage arguments, and consider that they might be wrong. However, as these happen mostly via the support of top org people, and few people are near that top, this conversation capacity is quite limited compared to that of individuals. But at least it is there. However, such organizations also suffer from many known problems, such as yes-men and reluctance to pass bad news up the chain.

At the global level one of the big trends over the last few decades is away from the formal org group minds of nations, churches, and firms, and toward the mob group mind of a world-wide elite. Supported by mob-like expert group minds in academia, law, and media. Our world is thus likely to suffer more soon from mob mind inadequacies.

Prediction markets are capable of creating fast-thinking accurate group minds that consider all relevant levels of abstraction. They can even be asked questions, though not as fluidly and easily as can individuals. If only our mob minds didn’t hate them so much.


What Hypocrisy Feels Like

Our book The Elephant in the Brain argues that there are often big differences between the motives by which we sincerely explain our behavior, and the motives that more drive and shape that behavior. But even if this claim seems plausible to you in the abstract, you might still not feel fully persuaded, if you find it hard to see this contrast clearly in a specific example.

That is, you might want to see what hypocrisy feels like up close. To see the two different kinds of motives in you in a particular case, and see that you are inclined to talk and think in terms of the first, but see your concrete actions being more driven by the second.

If so, consider the example of utopia, or heaven. When we talk about an ideal world, we are quick to talk in terms of the usual things that we would say are good for a society overall. Such as peace, prosperity, longevity, fraternity, justice, comfort, security, pleasure, etc. A place where everyone has the rank and privileges that they deserve. We say that we want such a society, and that we would be willing to work and sacrifice to create or maintain it.

But our allegiance to such a utopia is paper thin, and is primarily to a utopia described in very abstract terms. Our abstract thoughts about utopia generate very little emotional energy in us, and our minds quickly turn to other topics. In addition, as soon as someone tries to describe a heaven or utopia in vivid concrete terms, we tend to be put off or repelled. Even if such a description satisfies our various abstract good-society features, we find reasons to complain. No, that isn’t our utopia, we say. Even if we are sure to go to heaven if we die, we don’t want to die.

And this is just what near-far theory predicts. Our near and far minds think differently, with our far minds presenting a socially desirable image to others, and our near minds more in touch with what we really want. Our far minds are more in charge when we are prompted to think abstractly and hypothetically, but our near minds are more in charge when we privately make real concrete choices.

Evolved minds like ours really want to win the evolutionary game. And when there are status hierarchies tied to evolutionary success, we want to rise in those hierarchies. We want to join a team, and help that team win, as long as that team will then in turn help us to win. And we see all this concretely in the data; we mainly care about our social rank:

The outcome of life satisfaction depends on the incomes of others only via income rank. (Two followup papers find the same result for outcomes of psychological distress and nine measures of health.) They looked at 87,000 Brits, and found that while income rank strongly predicted outcomes, neither individual (log) income nor an average (log) income of their reference group predicted outcomes, after controlling for rank (and also for age, gender, education, marital status, children, housing ownership, labor-force status, and disabilities). (more)

But this isn’t what we want to think, or to say to others. With our words, and with other very visible cheap actions, we want to be pro-social. That is, we want to say that we want to help society overall. Or at least to help our society. While we really crave fights by which we might rise relative to others, we want to frame those fights in our minds and words as fighting for society overall, such as by fighting for justice against the bad guys.

And so when the subject of utopia comes up, framed abstractly and hypothetically, we first react with our far minds: we embrace our abstract ideals. We think we want them embodied in a society, and we think we want to work to create that society. And our thoughts remain this way as long as the discussion remains abstract, and we aren’t at much risk of actually incurring substantial supporting personal costs.

But the more concrete the discussion gets, and the closer to asking for concrete supporting actions, the more we recoil. We start to imagine a real society in detail wherein we don’t see good opportunities for our personal advancement over others. And where we don’t see injustices which we could use as excuses for our fights. And our real motivations, our real passions, tell us that they have reservations; this isn’t the sort of agenda that we can get behind.

So there it is: your hypocrisy up close and personal, in a specific case. In the abstract you believe that you like the idea of utopia, but you recoil at most any concrete example. You assume you have a good pro-social reason for your recoil, and will mention the first candidate that comes to your head. But you don’t have a good reason, and that’s just what hypocrisy feels like. Utopia isn’t a world where you can justify much conflict, but conflict is how you expect to win, and you really really want to win. And you expect to win mainly at others’ expense. That’s you, even if you don’t like to admit it.


Coming Commitment Conflicts

If competition, variation, and selection long continues, our worlds will become dominated by artificial creatures who take a long view of their future, and who see themselves as directly and abstractly valuing having more distant descendants. Is there anything more we robustly predict about them?

Our evolving descendants will form packages wherein each part of the package promotes reproduction of other package parts. So a big question is: how will they choose their packages? While some package choices will become very entrenched, like the different organs in our bodies, other choices may be freer to change at the last minute, like political coalitions in democracies. How will our descendants choose such coalition partners?

One obvious strategy is to make deals with coalition partners to promote each other’s long term reproduction. Some degree of commitment is probably optimal, and many technologies of commitment will likely be available. But note: it is probably possible to over-commit, by committing too wide a range of choices over too long a time period with too many partners, and to under-commit, committing too few choices over too short a time period with too few partners. Changed situations call for changed coalitions. Thus our descendants will have to think carefully about how strongly and long to commit on what with whom.

But is it even possible to enforce deals to promote the reproduction of a package? Sure, the amount of long-term reproduction of a set of features or a package subset seems a clearly measurable outcome, but how could such a team neutrally decide which actions best promote that overall package? Wouldn’t the detailed analyses that each package part offers on such a topic tend to be biased to favor those parts? If so, how could they find a neutral analysis to rely on?

My work on futarchy lets me say: this is a solvable problem. Because we know that futarchy would solve this. A coalition could neutrally but expertly decide what actions would promote their overall reproduction by choosing a specific ex-post-numeric-measure of their overall reproduction, and then creating decision markets to advise on each particular decision where concrete identifiable options can be found.
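The selection rule this implies can be sketched in a toy form. This is an illustration under my own assumptions, not a specification of futarchy: the option names and prices below are made up, and a real decision market would aggregate many traders' bets rather than take prices as given.

```python
# Toy sketch of decision-market choice: for each candidate coalition action,
# a conditional market prices the expected value of the agreed ex-post
# reproduction measure, given that the action is taken. The coalition then
# takes the highest-priced action. Options and prices here are hypothetical.

conditional_prices = {
    # option: market price of the reproduction measure, conditional on option
    "merge_with_rival_coalition": 0.42,
    "invest_in_shared_infrastructure": 0.57,
    "stay_the_course": 0.49,
}

chosen = max(conditional_prices, key=conditional_prices.get)
print(chosen)
```

The neutrality comes from the mechanism, not from any one package part: whichever part distorts its analysis loses money to traders who correct it, so the conditional prices tend toward unbiased estimates of the shared measure.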

There may be other ways to do this, and some ways may even be better than decision markets. But it clearly is possible for future coalitions to neutrally and expertly decide what shared actions would promote their overall reproduction. So as long as they can make such actions visible to something like decisions markets, coalitions can reliably promote their joint reproduction.

Thus we can foresee an important future activity: forming and reforming reproduction coalitions.


On Evolved Values

Biological evolution selects roughly for creatures that do whatever it takes to have more descendants in the long run. When such creatures have brains, those brains are selected for having supporting habits. And to the extent that such brains can be described as having beliefs and values that combine into actions via expected utility theory, then these beliefs and values should be ones which are roughly behaviorally-equivalent to the package of having accurate beliefs, and having values to produce many descendants (relative to rivals). Equivalent at least within the actual environments in which those creatures were selected.

Humans have unusually general brains, with which we can think unusually abstractly about our beliefs and values. But so far, we haven’t actually abstracted our values very far. We instead have a big mess of opaque habits and desires that implicitly define our values for us, in ways that we poorly understand. Even though what evolution has been selecting for in us can in fact be described concisely and effectively in an abstract way.

Which leads to one of the most disturbing theoretical predictions I know: with sufficient further evolution, our descendants are likely to directly and abstractly know that they simply value more descendants. In diverse and varying environments, such a simpler more abstract representation seems likely to be more effective at helping them figure out which actions would best achieve that value. And while I’ve personally long gotten used to the idea that our distant descendants will be weird, to (the admittedly few) others who care about the distant future, this vision must seem pretty disturbing.

Oh there are some subtleties regarding whether all kinds of long-term descendants get the same weight, to what degree such preferences are non-monotonic in time and number of descendants, and whether we care the same about risks that are correlated or not across descendants. But those are details: evolved descendants should more simply and abstractly value more descendants.

This applies whether our descendants are biological or artificial. And it applies regardless of the kind of environments our descendants face, as long as those environments allow for sufficient selection. For example, if our descendants live among big mobs, who punish them for deviations from mob-enforced norms, then our descendants will be selected for pleasing their mobs. But as an instrumental strategy for producing more descendants. If our descendants have a strong democratic world government that enforces rules about who can reproduce how, then they will be selected for gaining influence over that government in order to gain its favors. And for an autocratic government, they’d be selected for gaining its favors.

Nor does this conclusion change greatly if the units of future selection are larger than individual organisms. Even if entire communities or work teams reproduce together as single units, they’d still be selected for valuing reproduction, both of those entire units and of component parts. And if physical units are co-selected with supporting cultural features, those total physical-plus-cultural packages must still tend to favor the reproduction of all parts of those packages.

Many people seem to be confused about cultural selection, thinking that they are favored by selection if any part of their habits or behaviors is now growing due to their actions. But if, for example, your actions are now contributing to a growing use of the color purple in the world, that doesn’t at all mean that you are winning the evolutionary game. If wider use of purple is not in fact substantially favoring the reproduction of the other elements of the package by which you are now promoting purple’s growth, and if those other elements are in fact reproducing less than their rivals, then you are likely losing, not winning, the evolutionary game. Purple will stop growing and likely decline after those other elements sufficiently decline.

Yes of course, you might decide that you don’t care that much to win this evolutionary game, and are instead content to achieve the values that you now have, with the resources that you can now muster. But you must then accept that tendencies like yours will become a declining fraction of future behavior. You are putting less weight on the future compared to others who focus more on reproduction. The future won’t act like you, or be as much influenced by acts like yours.

For example, there are “altruistic” actions that you might take now to help out civilization overall. You might build a useful bridge, or find some useful invention. But if by such actions you hurt the relative long-term reproduction of many or most of the elements that contributed to your actions, then you must know you are reducing the tendency of descendants to do such actions. Ask: is civilization really better off with more such acts today, but fewer such acts in the future?

Yes, we can likely identify some parts of our current packages which are hurting, not helping, our reproduction. Such as genetic diseases. Or destructive cultural elements. It makes sense to dump such parts of our reproduction “teams” when we can identify them. But that fact doesn’t negate the basic story here: we will mainly value reproduction.

The only way out I see is: stop evolution. Stop, or slow to a crawl, the changes that induce selection of features that influence reproduction. This would require a strong civilization-wide government, and it only works until we meet the other grabby aliens. Worse, in an actually changing universe, such stasis seems to me to seriously risk rot. Leading to a slowly rotting civilization, clinging on to its legacy values but declining in influence, at least relative to its potential. This approach doesn’t at all seem worth the cost to me.

But besides that, have a great day.

Added 7p: There may be many possible equilibria, in which case it may be possible to find an equilibrium in which maximizing reproduction also happens to maximize some other desired set of values. But it may be hard to maintain the context that allows that equilibrium over long time periods. And even if so, the equilibrium might itself drift away to support other values.

Added 8Dec: This basic idea expressed 14 years ago.


Argument Foreplay

The most prestigious articles in popular media tend to argue for a (value-adjacent) claim. And such articles tend to be long. Even so, most can’t be bothered to define their terms carefully, or to identify and respond to the main plausible counter-arguments to their argument. Such articles are instead filled with anecdotes, literary allusions, and the author’s history of thoughts on the subject. A similar thing happens even in many academic philosophy papers; they leave little space for their main positive argument, which is then short and weakly defended.

Consider also that while a pastor usually considers his or her sermon to be the “meat” of their service, that sermon takes a minority of the time, and is preceded by a great many other rituals, such as singing. And internally such sermons are usually structured like those prestigious media articles. The main argument is preceded by many not-logically-necessary points, leaving little time to address ambiguities or counter-arguments.

And consider sexual foreplay. Even people in a state where they are pretty excited, attracted, and willing are often put off by a partner pushing for too direct or rapid a transition to the actual sex act. They instead want a gradual series of increasingly intense and close interactions, which allow each party to verify that the other party has similar feelings and intentions.

In meals, we don’t want to get straight to a “main dish”, but prefer instead a series of dishes of increasing intensity. The main performers in concerts and political rallies are often preceded by opening acts. Movies in theaters used to be preceded by news and short films, and today are preceded by previews. Conversations often make use of starters and icebreakers; practical conversations are supposed to be preceded by small-talk. And revolutions may be preceded by increasingly dramatic riots and demonstrations.

What is going on here? Randall Collins’ book Interaction Ritual Chains explained this all for me. We humans often want to sync our actions and attention, to assure each other that we feel and think the same. And also that our partners are sufficiently skilled and impressive at this process.
The more important is this assurance, the more we make sure to sync, and the more intensely and intricately we sync. And where shared values and attitudes are important to us, we make sure that those are strongly salient and relevant to our synced actions.

Regarding media articles and sermons, a direct if perhaps surprising implication of all this is that most of us are often not very open to hearing and being persuaded by arguments until speakers show us that they sufficiently share our values, and are sufficiently impressive in this performance. So getting straight to the argument point (as I often do) is often seen as rude and offensive, like a would-be seducer going straight to “can I put it in.”

The lack of attention to argument precision and to counter-arguments bothers such audiences less, as they are relatively willing to accept a claim just on the basis of the impressiveness and shared values of the speaker. Yes, they want to be given at least one supporting argument, in case they need to justify their new position to challengers. But the main goal is to share beliefs with impressive value allies.

The Coming World Ruling Class

When I got my Ph.D. in formal political theory, I learned that the politics of large democratic polities today, such as metropolises, states, and nations, are usually aligned along a single “ideological” dimension. (E.g., “left” vs. “right”.) What exactly that dimension is about, however, has varied greatly across times and places. It seems to more result from a game theoretic equilibrium than from a single underlying dimension of choice; the real policy space remains highly dimensional.
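This pattern, where a highly dimensional policy space still shows up as one observed axis, can be illustrated with a toy simulation (my sketch, with assumed numbers, not from any cited work): generate synthetic legislators whose many votes are driven mostly by one latent ideological score, then check how much of the vote variance a single principal component recovers.

```python
# Toy sketch (assumed numbers): if votes on many bills are driven mostly by
# one latent ideological score, a single principal component of the vote
# matrix recovers most of the variance, even though the record is 50-dimensional.
import numpy as np

rng = np.random.default_rng(0)
n_legislators, n_bills = 100, 50

ideology = rng.normal(size=n_legislators)       # latent one-dimensional positions
bill_slant = rng.normal(size=n_bills)           # how strongly each bill loads on the axis
noise = 0.3 * rng.normal(size=(n_legislators, n_bills))

votes = np.outer(ideology, bill_slant) + noise  # the 50-dimensional voting record

# Share of variance captured by the first principal component, via SVD.
centered = votes - votes.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values[0] ** 2 / (singular_values ** 2).sum()
print(f"share of vote variance on one axis: {explained:.2f}")
```

One axis explains most of the variance here, matching the observation that the real policy space stays highly dimensional while observed politics looks one-dimensional.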

However, it wasn’t until years later that I noticed that this is not usually true for the politics of families, firms, clubs, towns, and small cities. These are usually run by a single stable dominant coalition, i.e., a ruling class. As were most ancient societies in history, at least eventually.

This ruling class might sometimes offer their larger community some options to choose between. But mostly this is when the ruling elite can’t decide, or wants to make others feel more involved. Such as who exactly to put in the most visible top positions. Sometimes real fights break out among coalitions within the elite, but these fights tend to be short and behind the scenes.

The same applies to communities with no formal organization. That is, to “mobs”. While in the modern world large mobs tend to split along a main ideological dimension, small mobs tend to be dominated by a main consensus, who roughly agree on what to do and how. Though with time, smaller mobs are more often becoming aligned to larger political ideologies.

This one-dimensional story also does not apply to large ancient areas which encompassed many different polities. These areas look more like a disorganized set of competing interests. So a one dimensional political alignment isn’t a fully general law of politics; it has a domain of applicability.

A few centuries ago, the world was composed of many competing nations, with no overall organization. During the great world wars, and the Cold War, there was an overall binary alignment. Since the end of the Cold War, we have seen a single coalition dominate the world. And over recent decades we have seen policy around the world converge greatly around the opinions of an integrated world elite.

I’m tempted to put this all together into the following integrated theory of a standard progression. Imagine suddenly moving a large random group of diverse strangers to a new isolated area, where they could survive indefinitely. At first their choices would be individual. Then they’d organize into small groups that coordinate together. Then into larger groups.

Eventually many large groups might compete for control of the area, or for the allegiance of the people there. In their bids for control, such groups might emphasize how much they respect the many kinds of diversity represented by people in the area. They don’t intend to repress other groups, they just want to rule for the good of all. As people became more similar, they would bother less with such speeches.

Eventually, these groups would merge and align along a single main dimension, which might be labeled in terms of two main rival groups, or in terms of some ideological axis. For a while, the two sides of this main dimension might find themselves at a stalemate. Or one side might tend to win, but the midpoint of their conflict might be continually redefined to result in two roughly equally sized sides. This main ideological dimension would encompass many issues, but hardly all. It might encompass more issues as the fight for control got fiercer. But the fight should get weaker as outside threats became more salient.

Eventually a single coalition would come to dominate. Especially in a society with many “high grounds” which such a coalition could come to control. This situation might then oscillate between a single ruling elite and a main axis of conflict. But slowly over time, a single coalition would win out more. The members of the ruling elite would come to know each other better, become more similar, and agree more on who should be among their members, and on what are the “serious” policies worth considering. They would focus more on reassuring each other of loyalty to their class, and on making sure their kids could join that elite.

A ruling coalition who felt insecure in its power might work harder to seek out and repress any potential dissent. At the extreme, it might create a totalitarian regime that demanded allegiance and conformity in every little area of life. And it might focus more on entrenching itself than on improving society as a whole. As a ruling coalition became more secure, it might more tolerate dissent, and demand less conformity, but also focus on internal conflicts and division of spoils, instead of its society as a whole.

This story seems to roughly describe national, and world, history. My nation is becoming more integrated and similar over time, with actions coordinated at larger scales, national politics coming more to dominate local politics, and national politics coming to color more areas and issues in life. And a single issue axis aligned to a global cultural elite is coming to dominate politics across the world.

It seems plausible that toward the end of the transition between a period of one main ideological dimension, and a period of a single integrated ruling class, the final main political dimension would be aligned for and against that final ruling class. The last ideology question would be: shall we let this ruling class take over?

That is, shall we let this small subset of us define for us who are “serious” candidates for leadership and what are “serious” policy positions worthy of consideration? As such ruling classes now decide in firms, towns, etc. today. A sign of the end would be when one side of the political axis kept putting up candidates for office who were consistently declared “not serious” by the elites who controlled the main commanding heights of power, such as media, law, universities, regulators, CEOs, etc.

The pro-ruling-class side would be more dominant in places that are more integrated with the overall culture, and less dominant in places that cared more about local issues. Such as in larger cities, compared to towns.

This model suggests that our current era of roughly balanced forces on two sides of one main ideological axis may be temporary. As the world becomes more closely integrated and similar, eventually a single integrated elite culture will dominate the world, entrenching itself in mob opinion and via as many institutions as possible, especially global institutions.

This world ruling class may then focus more on further entrenching itself, and on repressing dissent more than on making the world better. As everyone becomes more similar, conformity pressures will become stronger, as in most small towns today. Plausibly cutting many kinds of innovation. And our entrenched global institutions may then rot. After which our total human civilization might even decline, or commit suicide.

This may take centuries, but that’s really not very long in the grand scheme of things.

Will World Government Rot?

We have seen a centuries-long increase in the scale and scope of governance, and today we see many forms of global governance. While the literature has so far identified many costs and benefits of global governance, I here suggest that we add one so-far-neglected consideration to the list: rot. While many kinds of systems tend to innovate and grow with time, other kinds of systems tend to rot, decay, and die. We should consider the risks that global governance may increase the rot of our total world system.

Global Governance

Over the last millennia, the scale of nations has increased, as has the scope and intensity of governance. Particular governance functions have tended to migrate to larger scales, from local to regional to national to global. At the global level, we have increasingly many organizations with increasing abilities to coordinate policy in many particular areas.

In addition to formal organizations like the United Nations and the World Trade Organization, we also see an increasingly strong informal global convergence of policy across many areas, such as regarding pandemics, medicine, finance, schools, nuclear, aviation, telecom, and media. This is plausibly due to an increasingly integrated global community of elites and policy-makers, an integration which makes policy-makers in each nation reluctant to deviate far from global policy consensus.

How much wider and stronger might global governance become, and what might be the costs and benefits of such changes? An old literature had identified many relevant factors (Glossop 1993; Alesina & Spolaore 2003; Deudney 2008).

On the plus side, larger scale governance allows for wider standardization, and more trade and migration over larger scales. It also allows for more production of larger-scale public goods such as the promotion of innovation, and dealing with global problems such as CO2 warming. Also, global governance can suppress inter-state warfare.

On the minus side, however, large scale governance encompasses more diverse places, cultures, and populations, and this diversity is an obstacle to coordination. It suggests more internal conflicts within these global systems, and more difficulty reaching consensus, perhaps even leading to armed rebellion. Also, as the threat of external competition weakens, larger scale political processes become freer to focus on internal conflicts and rent seeking, and governance units become freer to suppress dissent and to entrench themselves. Global governance also becomes a single point of failure for the globe, for example increasing risks of both global suicide and of a global totalitarian regime well-entrenched against resistance.

The purpose of this short paper/post is to add one more consideration to this list: rot.

The Question of Rot

Some kinds of systems rot and decay, while other kinds grow and improve. To better judge the potential for rot in our total world system, we need to better understand what distinguishes these two kinds of systems.

For example, over time whole biospheres like Earth seem to slowly accumulate innovations and to spread into more environmental niches. But the individual organisms of which such biospheres are made tend to decay and die, after an initial period of growth. Most individual species, adapted to relatively stable environments, may slowly rot, to be outweighed by the few rare species adapted to varied and changing environments that force them to stay abstract and flexible.

Non-trivial software systems seem to consistently rot and decay (Kruchten et al. 2012; Izurieta & Bieman 2013). Software changes resulting from new features and changing hardware and customer environments tend to be haphazard, resulting in more interdependences between previously relatively modular subsystems. This interdependence makes further changes increasingly expensive, so that the system becomes more inflexible and changes less.

While efforts to “refactor” such systems, by streamlining their overall structures, can temporarily increase flexibility, large software systems are almost always eventually discarded, to be replaced by new systems rewritten from scratch.
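The rot mechanism described above can be made concrete with a toy model (my illustration, with assumed numbers, not from the cited papers): treat a codebase as a module dependency graph, let haphazard feature patches add cross-module links, and watch a simple coupling metric climb.

```python
# Toy model of software rot (illustrative assumptions): each haphazard feature
# patch links two random modules, so inter-module coupling only ever grows,
# making further change ever more expensive.
import random

def coupling(deps):
    """Fraction of possible directed inter-module dependencies actually present."""
    n = len(deps)
    actual = sum(len(targets) for targets in deps.values())
    return actual / (n * (n - 1))

random.seed(0)
modules = [f"mod{i}" for i in range(10)]
deps = {m: set() for m in modules}   # start fully modular: no cross-dependencies

history = []
for patch in range(60):              # 60 haphazard feature patches
    a, b = random.sample(modules, 2)
    deps[a].add(b)                   # the patch makes module a depend on module b
    history.append(coupling(deps))

print(f"coupling after 60 patches: {history[-1]:.2f}")
```

Under this change process coupling never falls; a “refactor” would correspond to deleting edges, buying back flexibility only temporarily if haphazard patching then resumes.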

Over time, legal systems seem to similarly become more complex, interdependent, and resistant to change. Sometimes legal systems are “refactored” to increase flexibility, such as when the Roman emperor Justinian arranged for a restructuring and simplification of the Roman legal code. This Justinian code was later adapted by Napoleon, who spread it across Europe, after which European conquests spread it across the world.

While the rate at which firms die does not seem to depend on age (Daepp et al. 2015), older firms do tend to grow at a lower rate (Hosono et al. 2020). That is, individual firms rot.

While industries supplied by many diverse firms seem to consistently grow and innovate, such innovation is greatly reduced when industries are dominated by a very small number of firms (Peneder & Woerter 2014; Delbono & Lambertini 2020). Industry innovation can also be greatly reduced by intrusive and globally coordinated regulations. For example, in the nuclear industry strong regulation has resulted in greatly increased costs, greatly curtailing its potential (Haas 2019; Hall 2021).

Across human history, entire civilizations and empires also seem to consistently rise and then fall, suggesting that empires also rot (Turchin & Nefedov 2009). Will today’s integrated world economy and culture also rot for similar reasons, or will some important difference in today’s world civilization prevent that?

Does World Government Rot?

So now we reach the crucial question: are our new systems of global governance more like an open field of competition that innovates and grows, as do open industries and biospheres? Or are they more like individual organisms, firms, empires, and software and legal systems, or like overly-concentrated or overly-regulated industries, which tend to decay and rot? What are the key parameters that determine renewal versus rot, and how can they be mapped onto systems of global governance? And can we identify the safest least-rotting variations to recommend? Is it sufficient to keep such systems very simple and modular, allowing few dependencies?

References

Alberto Alesina, Enrico Spolaore (2003) The Size of Nations, The MIT Press, November 7.

Madeleine I. G. Daepp, Marcus J. Hamilton, Geoffrey B. West, Luís M. A. Bettencourt (2015) “The mortality of companies.” Interface 6, May.

Flavio Delbono, Luca Lambertini (2020) “Innovation and product market concentration: Schumpeter, Arrow, and the inverted U-shape curve.” Oxford Economic Papers, November.

Daniel H. Deudney (2008) Bounding Power: Republican Security Theory from the Polis to the Global Village. Princeton University Press, November 9.

Ronald J. Glossop (1993) World Federation?: A Critical Analysis of Federal World Government. McFarland Publishing, July 1.

Reinhard Haas, Stephen Thomas, Amela Ajanovic (2019) “The Historical Development of the Costs of Nuclear Power” in The Technological and Economic Future of Nuclear Power, pp.97-115.

J. Storrs Hall (2021) Where Is My Flying Car? Stripe Press, November 30.

Kaoru Hosono, Miho Takizawa, Kenta Yamanouchi (2020), “Firm Age, Productivity, and Intangible Capital.” RIETI Discussion Paper 20-E-001.

Clemente Izurieta & James M. Bieman (2013) “A multiple case study of design pattern decay, grime, and rot in evolving software systems” Software Quality Journal 21:289–323.

Philippe Kruchten, Robert L. Nord, Ipek Ozkaya (2012) “Technical Debt: From Metaphor to Theory and Practice.” IEEE Software 29(6):18–21, Nov-Dec.

Peter Turchin, Sergey A. Nefedov (2009) Secular Cycles. Princeton University Press, August 9.

M. Peneder, M. Woerter (2014) “Competition, R&D and innovation: testing the inverted-U in a simultaneous system.” Journal of Evolutionary Economics 24:653–87.

Minds Almost Meeting

Many travel to see exotic mountains, buildings, statues, or food. But me, I want to see different people. If it could be somehow arranged, I’d happily “travel” to dozens of different subcultures that live within 100 miles of me. But I wouldn’t just want to walk past them, I’d want to interact enough to get in their heads.

Working in diverse intellectual areas has helped. So far, these include engineering, physics, philosophy, computer science, statistics, economics, polisci, finance, futurism, psychology, and astrophysics. But there are so many other intellectual areas I’ve hardly touched, and far more non-intellectual heads of which I’ve seen so little.

Enter the remarkable Agnes Callard with whom I’ve just posted ten episodes of our new podcast “Minds Almost Meeting”:

Tagline: Agnes and Robin talk, try to connect, often fail, but sometimes don’t.

Summary: Imagine two smart curious friendly and basically truth-seeking people, but from very different intellectual traditions. Traditions with different tools, priorities, and ground rules. What would they discuss? Would they talk past each other? Make any progress? Would anyone want to hear them? Economist Robin Hanson and philosopher Agnes Callard decided to find out.

Topics: Paradox of Honesty, Plagiarism, Future Generations, Paternalism, Punishment, Pink and Purple, Aspiration, Prediction Markets, Hidden Motives, Distant Signals.

It’s not clear who will be entertained by our efforts, but I found the process fascinating, informative, and rewarding. Though our audio quality was low at times, it is still understandable.

Agnes is a University of Chicago professor of philosophy and a rising-star “public intellectual” who often publishes in places like The New Yorker. She and I are similar in both being oddball, hard-to-offend, selfish parents and academics. We both have religious upbringings, broad interests, and a taste for abstraction. But we differ by generation, gender, and especially in our intellectual backgrounds and orientations (me vs. her): STEM vs. humanities, futurist vs. classicist, explaining via past shapings vs. future aspirations, and relying more vs. less on large systems of thought.

Before talking to Agnes, I hadn’t realized just how shaped I’ve been by assimilating many large formal systems of thought, such as calculus, physics, optimization, algorithms, info theory, decision theory, game theory, economics, etc. Though the core of these systems can be simple, each has been connected to many diverse applications, and many larger analysis structures have been built on top of them.

Yes these systems, and their auxiliary structures and applications, are based on assumptions that can be wrong. But their big benefit is that shared efforts to use them have rooted out many (though hardly all) contradictions, inconsistencies, and incoherences. So my habit of trying when possible to match any new question to one of these systems is likely to, on average, produce a more coherent resulting analysis. I’m far more interested in applying existing systems to big neglected topics than in inventing new systems.

In contrast, though philosophers like Agnes who rely on few such structures beyond simple logic can expect their arguments to be accessible to wider audiences, they must also expect a great many incoherences in their analysis. Which is part of why they so often disagree, and build such long chains of back and forth argumentation. I agree with Tyler, who in his conversation with Agnes said these long chains suggest a problem. However, I do see the value of having some fraction of intellectuals taking this simple robust strategy, as a complement to more system-focused strategies.

Thank you Agnes Callard, for helping me to see a wider intellectual world, including different ways of thinking and topics I’ve neglected.
