Tag Archives: History

Science 2.0

Skepticism … is generally a questioning attitude or doubt towards one or more items of putative knowledge or belief or dogma. It is often directed at domains, such as the supernatural, morality (moral skepticism), theism (skepticism about the existence of God), or knowledge (skepticism about the possibility of knowledge, or of certainty). (More)

Humans have long had many possible sources for our beliefs about the physical world. These include intuitive folk physics, sacred scriptures, inherited traditions, traveler stories, drug-induced experiences, gadget sales pitches, and expert beliefs within various professions. And for a very long time, we paid the most attention to the highest status sources, even if they were less reliable. This encouraged gullibility; we often believed pretty crazy stuff, endorsed by the high status.

One ancient high status group was astronomers, whose status was high because their topic was high – the sky above. It so happened that astronomers naturally focused on a small number of very standard parameters of wide interest: the sky positions of planets and comets (anything that moved relative to the stars). Astronomers often gained status by being better able to predict these positions, and for this purpose they found it useful to: (1) collect and share careful records on past positions, (2) master sufficient math to precisely describe past patterns, and (3) use those patterns to predict future parameter values.

For a long time astronomy seemed quite exceptional. Most other domains of interest seemed to have too much fuzziness, change, and variety to support a similar approach. What can you usefully measure while walking through a jungle? What useful general patterns can simple math describe there? But slowly and painfully, humans learned to identify a few relatively stable focal parameters of wide interest in other domains as well. First in physics: velocity, weight, density, temperature, pressure, toughness, heat of reaction, etc. Then in dozens of practical domains.

With such standard focal parameters in hand, domain experts also gained status by being able to predict future parameter values. As a result, they also learned that it helped to carefully collect shared systematic data, and to master sufficient math to capture their patterns.

And thus was born the scientific revolution, which helped beget the industrial revolution: a measurement revolution starting in astronomy, moving to physics, and then invading dozens of industrial domains. As domains acquired better stable focal parameters to observe, and better predictions, many such domains acquired industrial power. That is, those who had mastered such things could create devices and plans of greater social value. This raised the status of such domain experts, so that eventually this “scientific” process acquired high status: carefully collecting stable focal parameters, systematically collecting and sharing data on them, and making math models to describe their patterns. “Science” was high status.

One way to think about all this is in terms of the rise of skepticism. If you allow yourself to doubt if you can believe what your sources tell you about the physical world, your main doubt will be “who can I trust?” To overcome such doubt, you’ll want to focus on a small number of focal parameters, and for those seek shared data and explicit math models. That is, data where everyone can check how the data is collected, or collect it themselves, with redundant records to protect against tampering, and explicit shared math models describing their patterns. That is, you will turn to the methods to which those astronomers first turned.

Which is all to say that the skeptics turned out to be right. Not the extreme skeptics who doubted their own eyes, but the more moderate ones, who doubted holy scriptures and inherited traditions. Our distant ancestors were wrong (factually, if not strategically) to too eagerly trust their high status sources, and skeptics were right to focus on the few sources that they could most trust, when inclined toward great doubt. Slow methodical collection and study of the sort of data of which skeptics could most approve turned out to be a big key to enabling humanity’s current levels of wealth and power.

For a while now, I’ve been exploring the following thesis: this same sort of skepticism, if extended to our social relations, can similarly allow a great extension of our “scientific” and “industrial” revolutions, making our social systems far more effective and efficient. Today, we mainly use prestige markers to select and reward the many agents who serve us, instead of more directly paying for results or following track records. If asked, many say we do this because we can’t measure results well. But as with the first scientific revolution, with work we can find ways to coordinate to measure more stable focal parameters, sufficient to let us pay for results. Let me explain.

In civilization, we don’t do everything for ourselves. We instead rely on a great many expert agents to advise us and act for us. Plumbers, cooks, bankers, fund managers, manufacturers, politicians, contractors, reporters, teachers, researchers, police, regulators, priests, doctors, lawyers, therapists, and so on. They all claim to work on our behalf. But if you will allow yourself to doubt such claims, you will find plenty of room for skepticism. Instead of being as useful as they can, why don’t they just do what is easy, or what benefits them?

We don’t pay experts like doctors or lawyers directly for results in improving our cases, and we don’t even know their track records in previous cases. But aside from a few “bad apples”, we are told that we can trust them. They are loyal to us, coming from our nation, city, neighborhood, ethnicity, gender, or political faction. Or they follow proper procedures, required by authorities.

Or, most important, they are prestigious. They went to respected schools, are affiliated with respected institutions, and satisfied demanding licensing criteria. Gossip shows us that others choose and respect them. If they misbehave then we can sue them, or regulators may punish them. (Though such events are rare.) What more could we want?

But of course prestige doesn’t obviously induce a lawyer to win our case or promote justice, nor a doctor to make us well. Or a reporter to tell us the truth. Yes, it is logically possible that selecting them on prestige happens to also maximize gains for us. But we rarely hear any supporting argument for such common but remarkable claims; we are just supposed to accept them because, well, prestigious people say so.

Just as our distant ancestors were too gullible (factually, if not strategically) about their sources of knowledge on the physical world around them, we today are too gullible on how much we can trust the many experts on which we rely. Oh we are quite capable of skepticism about our rivals, such as rival governments and their laws and officials. Or rival professions and their experts. Or rival suppliers within our profession. But without such rivalry, we revert to gullibility, at least regarding “our” prestigious experts who follow proper procedures.

Yes, it will take work to develop better ways to measure results, and to collect track records. (And supporting math.) But progress here also requires removing many legal obstacles. For example, trial lawyers all win or lose in public proceedings, records of which are public. Yet it is very hard to actually collect such records into a shared database; many sit in filing cabinets in dusty county courthouse basements.

Contingency fees are a way to pay lawyers for results, but they are illegal in many places. Bounty hunters are paid for results in catching fugitives, but are illegal in many places. Bail bonds give results-based incentives to those who choose jail versus freedom, but they are being made illegal now. And so on. Similarly, medical records are more often stored electronically, but medical ethics rules make it very hard to aggregate them, and also to use creative ways to pay doctors based on results.

I’ve written many posts on how we could work to pay more for results, and choose more based on track records. And I plan to write more. But in this post I wanted to make the key point that what should drive us in this direction is skepticism about how well we can trust our usual experts, chosen mainly for their prestige (and loyalty and procedures) and using weak payment incentives. You might feel embarrassed by such skepticism, thinking it shows you to be low status and anti-social. After all, don’t all the friendly high status popular people trust their experts?

But the ancient skeptics were right about distrusting their sources on the physical world, and following their inclination helped to create science and industry, and our vast wealth today. Continuing to follow skeptical intuitions, this time regarding our expert agents, may allow us to create and maintain far better systems of law, medicine, governance, and much more. Onward, to Science 2.0!

The Big Change In Blame

Law is our main system of official blame; it is how we officially blame people for things. So it is a pretty big deal that, over the last few centuries, changes to law have induced big changes in who officially blames who for most things that go wrong. These changes may be having big bad effects.

Long ago most everyone could use law to blame most everyone else. Even though people were poor, the legal process was simple enough for most to use it without needing a lawyer. (Many places actually banned lawyers.) Those found liable could often be sold into slavery to pay their legal debts, and their larger family clans could also be held responsible for their debts. So basically, people blamed people, with families as guarantors.

Over the last few centuries, the legal system has become far more complex and expensive, now requiring people to pay lawyers to sue. But at the same time we’ve made it harder to get people who are found liable to pay. We don’t sell them into slavery or make their families pay, and going bankrupt has become easier and less painful. So when ordinary people suffer a harm and look for someone to sue, their lawyers usually strongly advise that they focus on any deep pockets at all related to their harm.

The law, sympathetic to their plight, has found ways to blame the rich and big firms for most everything that goes wrong. The following are all real examples:

  • A rape in an abandoned building is blamed on the building owner.
  • Harassment in a stadium parking lot is blamed on the stadium owner.
  • A student harming another student in an off-campus apartment is blamed on the school.
  • A post-event bad-weather auto-accident is blamed on the event host for not cancelling.
  • A harm from using a product bought from a 3rd party is blamed on its manufacturer.

As ordinary people aren’t suing each other much, the government steps in to discipline ordinary folks’ behavior, via regulation and crime law. So, while once people blamed people, law now trains people to blame the rich and big business, and to expect to be blamed by government. So it maybe isn’t so strange that in the recent US Democratic presidential debates, the main parties blamed are the rich and big business. And if ordinary people are seen as doing something wrong (as with guns), regulation or crime law is assumed to be the solution.

When bad things happen in government spaces, like roads, it gets harder to find a rich person or business to blame. So on the roads we have introduced a system of requiring liability insurance, to make sure there’s a big rich business to pay if something goes wrong. As a result, on the road people blame people. That seems a healthier situation to me, and my vouching proposal would try to apply that idea much more widely, to help us return to a world where more often people blame people, rather than people blaming business or government blaming people.

The Puzzle of Human Sacrifice

Harvey Whitehouse in New Scientist:

Today’s small-scale societies tend to favour infrequent but traumatic rituals that promote intense social cohesion – the kind that is necessary if people are to risk life and limb hunting dangerous animals together. An example would be the agonising initiation rites still carried out in the Sepik region of Papua New Guinea, involving extensive scarification of the body to resemble the skin of a crocodile, a locally revered species. …

With the advent of farming, … [and their] larger populations, … new kinds of rituals seem to have provided that shared identity. These were generally painless practices like prayer and meeting in holy places that could be performed frequently and collectively, allowing them to be duplicated across entire states or empires. …

A puzzle, however, is that many of these early civilisations also practised the brutal ritual of human sacrifice. This reached its zenith in the so-called archaic states that existed between about 3000 BC and 1000 BC, and were among the cruellest and most unequal societies ever. In some parts of the globe, human sacrifice persisted until relatively recently. The Inca religion, for example, had much in common with today’s world religions: people paid homage to their gods with frequent and, for the most part, painless ceremonies. But their rulers had divine status, their gods weren’t moralising and their rituals included human sacrifice right up until they were conquered by the Spanish in the 16th century. …

Instead of helping foster cooperation as societies expanded, Big Gods appeared only after a society had passed a threshold in complexity corresponding to a population of around a million people. … something other than Big Gods allowed societies to grow. … that something was the shift in the nature of rituals from traumatic and rare to painless and repetitive. … human sacrifice was used as a form of social control. The elites – chiefs and shamans – did the sacrificing, and the lower orders paid the price, so it maintained social stability by keeping the masses terrorised and subservient. … the practice started to decline when populations exceeded about 100,000. … 

Piecing all this together, here is what we think happened. As societies grew by means of agricultural innovation, the infrequent, traumatic rituals that had kept people together as small foraging bands gave way to frequent, painless ones. These early doctrinal religions helped unite larger, heterogeneous populations just enough to overcome the free-riding problem and ensure compliance with new forms of governance. However, in doing so they rendered them vulnerable to a new problem: power-hungry rulers. These were the despotic god-kings who presided over archaic states. Granted the divine right to command vast populations, they exploited it to raise militias and priesthoods, shoring up their power through practices we nowadays regard as cruel, such as human sacrifice and slavery. But archaic states rarely grew beyond 100,000 people because they, in turn, became internally unstable and therefore less defensible against invasion.

The societies that expanded to a million or more were those that found a new way to build cooperation – Big Gods. They demoted their rulers to the status of mortals, laid the seeds of democracy and the rule of law, and fostered a more egalitarian distribution of rights and obligations. (more)

It makes sense that complex intense rituals can only work for small societies, while larger societies need simpler rituals that everyone can see or do. It also makes sense that moralizing gods help promote cooperation. But I’m not convinced that we understand any of the rest of these patterns. The human sacrifice part seems to me especially puzzling. I can sort of see how it could serve a function, but I don’t see why that function would be especially effective in societies of population 10-100K.

Pre-Civilization Egypt

When we look into the distant past, we often compare ourselves to ancient Greeks and Romans. But their peaks were actually closer in time to us than to the peak of the prior society that they compared themselves with: ancient Egypt.

A recent Nature paper had this dramatic graph, showing that most ancient civilizations had a key initial period of rapid increase in social complexity:

Thus in most regions, history can be divided into before and after the start of “civilization.” As writing also usually started around then, we know far less about “pre-historic” life. Those lives are even stranger to us than forager lives, since we have lately been returning to forager values as we’ve gotten rich. For example, before civilization they mostly didn’t have moralizing gods, and human sacrifice (of valued locals, not just enemies) was quite common.

The first known civilization started in Egypt, about 4800 years ago. To better see strange pre-history lives, I’ve listened to a lecture series on ancient Egypt, watched John Romer’s TV series, and read his book, A History of Ancient Egypt, Part I. Here is an interesting graph from that book:

Below the fold is a long list of what I thought were interesting quotes.

Youth As Abundance

Many technologies and business practice details have changed greatly over the last few centuries. And looking at the specifics of who did what when, much of this change looks like selection and learning. That is, people tried lots of things, some of these worked, and then others copied the winning practices. The whole pattern looks much like a hard to predict random walk.

Many cultural attitudes and values have also changed greatly over those same few centuries. However, the rate, consistency, and predictability of much of this change makes it hard to tell a similar story of selection and learning. This change instead looks more like how many of our individual human behaviors change over our lifespans – the execution of a previously developed strategy. We need not as individuals learn to explore more when young, and exploit more when old, if our genetic and cultural heritage can just tell us to make these changes.

The idea is that some key context, like wealth, has been changing steadily over the last few centuries, and our attitudes have changed steadily in response to that changing context. Just as individuals naturally change their behaviors as they age, cultures may naturally change their attitudes as they get rich. In addition to wealth, other plausibly triggering context factors include increasing health, peace, complexity, work structure, social group size, and alienation from nature.

Even if wealth isn’t the only cause, it seems a big cause, and it likely causes, and is caused by, other key causes. It also seems quite plausible for humanity to have learned to change our behavior in good times relative to bad times. Note that good time behavior overlaps with, but isn’t quite the same as, how individual behavior changes as individuals get rich, but their society doesn’t. The correlation between individual behavior and wealth is probably influenced a lot by selection: some behaviors tend more to produce individual wealth. Selection has less to do with how a society’s behaviors change as it gets rich.

I’ve written before on a forager vs. farmer account of attitude changes over the last few centuries. Briefly, the social pressures that turned foragers into farmers depended a lot on fear, conformity, and religion, which are complemented by poverty. As we get rich those pressures feel less compelling to us, and we less create such pressures on others. I think this forager-farmer story is helpful, but in this post I want to outline another complementary story: neoteny. One of the main ways that humans are different from other animals is our neoteny; we retain youthful features and behaviors longer into life. This helps us to be more flexible and also learn more.

Being young is in many ways like living in a rich society. Young people have more physical energy, face less risk of physical damage, and have fewer responsibilities. Which is a lot like being rich. In a rich society you tend to live longer, making you effectively younger at any given calendar age. And when young, it makes more sense to be more playful, to learn and explore new possibilities rather than just exploit old skills and possibilities, and to invest more in social connections and in showing off, such as via art, music, stories, or sport. All these also make more sense in good times, when resources are plentiful.

If living in a rich society is a lot like being young, then it makes sense to act more youthful during good times. And so humanity might have acquired the heuristic of thinking and acting more youthful in good times. And that right there can help explain a lot of changes in attitudes and behaviors over the last few centuries. I don’t think it explains quite as many as the back-to-foragers story, but it is a priori quite plausible. Not that the forager story is that implausible, but still, priors matter.

From 2006 to 2009, Bruce Charlton wrote a series of articles exploring the idea that people are acting more youthful today:

A child-like flexibility of attitudes, behaviours and knowledge is probably adaptive in modern society because people need repeatedly to change jobs, learn new skills, move to new places and make new friends. (more)

Yes, the world changes more quickly in the industrial era than it did in the farming era, but that rate of change hasn’t increased much in the last century. So this one-time long-ago change in the social rate of change seems a poor explanation for the slow steady trend toward more youthful behavior we’ve seen over the last century. More neoteny as a response to increasing wealth makes more sense to me.

Overconfidence From Moral Signaling

Tyler Cowen in Stubborn Attachments:

The real issue is that we don’t know whether our actions today will in fact give rise to a better future, even when it appears that they will. If you ponder these time travel conundrums enough, you’ll realize that the effects of our current actions are very hard to predict,

While I think we often have good ways to guess which action is more likely to produce better outcomes, I agree with Tyler that we face great uncertainty. Once our actions get mixed up with a big complex world, it becomes quite likely that, no matter what we choose, in fact things would have turned out better had we made a different choice.

But for actions that take on a moral flavor, most people are reluctant to admit this:

If you knew enough history you’d see >10% as the only reasonable answer, for most any big historical counterfactual. But giving that answer to the above risks making you seem pro-South or pro-slavery. So most people express far more confidence. In fact, more than half give the max possible confidence!

I initially asked a similar question on if the world would have been better off overall if Nazis had won WWII, and for the first day I got very similar answers to the above. But I made the above survey on the South for one day, while I gave two days for the Nazi survey. And in its second day my Nazi survey was retweeted ~100 times, apparently attracting many actual pro-Nazis:

Yes, in principle the survey could have attracted wise historians, but the text replies to my tweet don’t support that theory. My tweet survey also attracted many people who denounced me in rude and crude ways as personally racist and pro-Nazi for even asking this question. And suggested I be fired. Sigh.

Added 13Dec: Many call my question ambiguous. Let’s use x to denote how well the world turns out. There is x0, how well the world actually turned out, and x|A, how well the world would have turned out given some counterfactual assumption A. Given this terminology, I’m asking for P(x>x0|A). You may feel sure you know x0, but you should not feel sure about x|A; for that you should have a probability distribution.
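As a minimal sketch of what such a probability judgment involves, here is a Monte Carlo estimate of P(x>x0|A) under a purely illustrative model: the normal distribution, its width, and the normalization x0 = 0 are all assumptions for illustration, not anything from the survey.

```python
import random

def prob_better(x0, sample_counterfactual, n=100_000):
    """Monte Carlo estimate of P(x > x0 | A), given a sampler
    for a modeled counterfactual outcome distribution x | A."""
    hits = sum(sample_counterfactual() > x0 for _ in range(n))
    return hits / n

random.seed(0)
# Illustrative model only: actual outcome normalized to x0 = 0,
# counterfactual outcomes spread widely and symmetrically around it.
p = prob_better(0.0, lambda: random.gauss(0.0, 1.0))
```

With any wide, roughly symmetric model of x|A centered near x0, the estimate lands near one half, which is the sense in which extreme answers like 100% confidence are hard to defend.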

Long Legacies And Fights In A Competitive Universe

My last post discussed how to influence the distant future, using a framework focused on a random uncaring universe. This is, for example, the usual framework of most who see themselves as future-oriented “effective altruists”. They see most people and institutions as not caring much about the distant future, and they themselves as unusual exceptions in three ways: 1) their unusual concern for the distant future, 2) their unusual degree of general utilitarian altruistic concern, and 3) their attention to careful reasoning on effectiveness.

If few care much or effectively about the distant future, then efforts to influence that distant future don’t much structure our world, and so one can assume that the world is structured pretty randomly compared to one’s desires and efforts to influence the distant future. For example, one need not be much concerned about the possibility that others have conflicting plans, or that they will actively try to undermine one’s plans. In that case the analysis style of my last post seems appropriate.

But it would be puzzling if such a framework were so appropriate. After all, the current world we see around us is the result of billions of years of fierce competition, a competition that can be seen as about controlling the future. In biological evolution, a fierce competition has selected species and organisms for their ability to make future organisms resemble them. More recently, within cultural evolution, cultural units (nations, languages, ethnicities, religions, regions, cities, firms, families, etc.) have been selected for their ability to make future cultural units resemble them. For example, empires have been selected for their ability to conquer neighboring regions, inducing local residents to resemble them more than they do conquered empires.

In a world of fierce competitors struggling to influence the future, it makes less sense for any one focal alliance of organism, genetic, and cultural units (“alliance” for short in the rest of this post) to assume a random uncaring universe. It instead makes more sense to ask who has been winning this contest lately, what strategies have been helping them, and what advantages this one alliance might have or could find soon to help in this competition. Competitors would search for any small edge to help them pull even a bit ahead of others, they’d look for ways to undermine rivals’ strategies, and they’d expect rivals to try to undermine their own strategies. As most alliances lose such competitions, one might be happy to find a strategy that allows one to merely stay even for a while. Yes, successful strategies sometimes have elements of altruism, but usually as ways to assert prestige or to achieve win-win coordination deals.

Furthermore, in a world of fiercely competing alliances, one might expect to have more success at future influence via joining and allying strongly with existing alliances, rather than by standing apart from them with largely independent efforts. In math there is often an equivalence between “maximize A given a constraint on B” and “maximize B given a constraint on A”, in the sense that both formulations give the same answers. In a related fashion, similar efforts to influence the future might be framed in either of two rather different ways:

  1. I’m fundamentally an altruist, trying to make the world better, though at times I choose to ally and compromise with particular available alliances.
  2. I’m fundamentally a loyal member/associate of my alliance, but I think that good ways to help it are to a) prevent the end of civilization, b) promote innovation and growth within my alliance, which indirectly helps the world grow, and c) have my alliance be seen as helping the world in a way which raises its status and reputation.
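The maximize-A-given-B versus maximize-B-given-A equivalence mentioned above can be made precise in the smooth case via Lagrangian first-order conditions (a textbook sketch, assuming an interior optimum with a binding constraint):

```latex
\max_x \; A(x) \ \text{s.t.}\ B(x) \ge b
\qquad\Longleftrightarrow\qquad
\max_x \; B(x) \ \text{s.t.}\ A(x) \ge a,
\qquad\text{where at an optimum } x^\star:\quad
\nabla A(x^\star) = \lambda \, \nabla B(x^\star), \;\; \lambda > 0.
```

Both problems share the same first-order condition, with the multiplier playing the role of λ in one and 1/λ in the other; varying the constraint level b (or a) traces out the same frontier of achievable (A, B) pairs.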

This second framing seems to have some big advantages. People who follow it may win the cooperation, support, and trust of many members of a large and powerful alliance. And such ties and supports may make it easier to become and stay motivated to continue such efforts. As I said in my last post, people seem much more motivated to join fights than to simply help the world overall. Our evolved inclinations to join alliances probably create this stronger motivation.

Of course if in fact most all substantial alliances today are actually severely neglecting the distant future, then yes it can make more sense to mostly ignore them when planning to influence the distant future, except for minor connections of convenience. But we need to ask: how strong is the evidence that in fact existing alliances greatly neglect the long run today? Yes, they typically fail to adopt policies that many advocates say would help in the long run, such as global warming mitigation. But others disagree on the value of such policies, and failures to act may also be due to failures to coordinate, rather than to a lack of concern about the long run.

Perhaps the strongest evidence of future neglect is that typical financial rates of return have long remained well above growth rates, strongly suggesting a direct discounting of future outcomes due to their distance in time. For example, these high rates of return are part of standard arguments that it will be cheaper to accommodate global warming later, rather than to prevent it today. Evolutionary finance gives us theories of what investing organizations would do when selected to take a long view, and it doesn’t match what we see very well. Wouldn’t an alliance with a long view take advantage of high rates of return to directly buy future influence on the cheap? Yes, individual humans today have to worry about limited lifespans and difficulties controlling future agents who spend their money. But these should be much less of an issue for larger cultural units. Why don’t today’s alliances save more?
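To see why persistent returns above growth should tempt a patient alliance, consider this tiny sketch; the 5% return and 2% growth figures are illustrative assumptions, not data from the post.

```python
# A patient saver earning return r in an economy growing at g sees its
# share of total wealth scale like ((1 + r) / (1 + g)) ** years.
def wealth_share_multiple(r, g, years):
    return ((1 + r) / (1 + g)) ** years

# Illustrative values: r = 5% returns, g = 2% growth, held for a century.
m = wealth_share_multiple(0.05, 0.02, 100)
```

At these assumed rates, a century of patience multiplies relative influence roughly eighteen-fold; sustained neglect of such compounding is thus evidence of direct future discounting.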

Important related evidence comes from data on our largest longest-term known projects. Eight percent of global production is now spent on projects that cost over one billion dollars each. These projects tend to take many years, have consistent cost and time over-runs and benefit under-runs, and usually are net cost-benefit losers. I first heard about this from Freeman Dyson, in the “Fast is Beautiful” chapter of Infinite in All Directions. In Dyson’s experience, big slow projects are consistent losers, while fast experimentation often makes for big wins. Consider also the many large slow and failed attempts to aid poor nations.

Other related evidence: the time when a firm builds a new HQ tends to be a good time to sell its stock; futurists typically do badly at predicting important events even a few decades into the future; and the “rags to riches to rags in three generations” pattern, whereby individuals who find ways to grow wealth don’t pass such habits on to their grandchildren.

A somewhat clear exception where alliances seem to pay short term costs to promote long run gains is in religious and ideological proselytizing. Cultural units do seem to go out of their way to indoctrinate the young, to preach to those who might convert, and to entrench prior converts into not leaving. Arguably, farming era alliances also attended to the long run when they promoted fertility and war.

So what theories do we have to explain this data? I can see three:

1) Genes Still Rule – We have good theory on why organisms that reproduce via sex discount the future. When your kids only share half of your genes, if you consider spending on yourself now versus on your kid one generation later, you discount future returns at roughly a factor of two per generation, which isn’t bad as an approximation to actual financial rates of return. So one simple theory is that even though cultural evolution happens much faster than genetic evolution, genes still remain in firm control of cultural evolution. Culture is a more effective way for genes to achieve their purposes, but genes still set time discounts, not culture.
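The factor-of-two-per-generation discount can be converted to an implied annual rate for comparison with financial returns; the 25- and 30-year generation lengths below are assumptions for illustration.

```python
# Implied annual discount rate from a fixed per-generation discount factor.
def annual_rate(factor_per_generation=2.0, generation_years=30):
    return factor_per_generation ** (1.0 / generation_years) - 1.0

r30 = annual_rate(2.0, 30)  # roughly 2.3% per year
r25 = annual_rate(2.0, 25)  # roughly 2.8% per year
```

These implied rates sit somewhat below typical long-run financial returns, which is the sense in which the genetic discount “isn’t bad” as an approximation.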

2) Bad Human Reasoning – While humans are impressive actors when they can use trial and error to hone behaviors, their ability to reason abstractly but reliably to construct useful long term plans is terrible. Because of agency failures, cognitive biases, incentives to show off, excess far views, overconfidence, or something else, alliances learned long ago not to trust human long term plans, or accumulations of resources that humans could steal. Alliances have traditionally invested in proselytizing, fertility, prestige, and war because those gains are harder for agents to mismanage, or to lose via theft and big bad plans.

3) Cultures Learn Slowly – Cultures haven’t yet found good general purpose mechanisms for making long term plans. In particular, they don’t trust organized groups of humans to make and execute long term plans for them, or to hold assets for them. Cultures have instead experimented with many more specific ways to promote long term outcomes, and have only found successful versions in some areas. So they seem to act with longer term views in a few areas, but mostly have not yet managed to find ways to escape the domination of genes.

I lean toward this third, compromise theory. In my next post, I’ll discuss a dramatic prediction from all this, one that can greatly influence our long-term priorities. Can you guess what I will say?


Long Legacies And Fights In An Uncaring Universe

What can one do today to have a big predictable influence on the long-term future? In this post I’ll use a simple decision framework, wherein there is no game or competition, one is just trying to influence a random uncaring universe. I’ll summarize some points I’ve made before. In my next post I’ll switch to a game framework, where there is more competition to influence the future.

Most random actions fail badly at this goal. That is, most parameters are tied to some sort of physical, biological, or social equilibrium, where if you move a parameter away from its current setting, the world tends to push it back. Yes there are exceptions, where a push might “tip” the world to a new rather different equilibrium, but in spaces where most points are far from tipping points, such situations are rare.

There is, however, one robust way to have a big influence on the distant future: speed up or slow down innovation and growth. The extreme version of this is preventing or causing extinction; while quite hard to do, this has enormous impact. Setting that aside, as the world economy grows exponentially, any small change to its current level is magnified over time. For example, if one invents something new that lasts, then that future world is more able to make more inventions faster, etc. This magnification grows into the future until the point in time when growth rates must slow down, such as when the solar system fills up, or when innovations in physical devices run out. By speeding up growth, you can prevent the waste of all the negentropy that is and will continue to be destroyed until our descendants manage to wrest control of such processes.

Alas, making roughly the same future happen sooner versus later doesn’t engage most people emotionally; they are much more interested in joining a “fight” over what character the future will take at any given size. One interesting way to take sides while still leveraging growth is to fund a long-lived organization that invests and saves its assets, and then later spends those assets to influence some side in a fight. The fact that investment rates of return have long exceeded growth rates suggests that one could achieve disproportionate influence in this way. Oddly, few seem to try this strategy.
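A minimal sketch of the compounding argument behind that strategy: if a patient fund's assets earn rate r while the economy grows at rate g, its share of the economy grows roughly as ((1+r)/(1+g))^t. The 5% return, 3% growth rate, and 100-year horizon below are assumed numbers for illustration, not figures from the post:

```python
# Sketch: relative influence of a long-lived fund whose assets
# compound at return r while the economy grows at rate g.
# All rates and the horizon are illustrative assumptions.
r, g = 0.05, 0.03   # assumed investment return and economic growth rate
years = 100

# The fund's share of total world output grows by this factor:
relative_influence = ((1 + r) / (1 + g)) ** years
print(f"fund's share of the economy grows ~{relative_influence:.1f}x over {years} years")
```

Even a two-point gap between return and growth, sustained for a century, multiplies the fund's relative weight several-fold, which is why a small gap in rates can buy disproportionate influence in a later fight.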

Another way to leverage growth to influence future fights is via fertility: have more kids who themselves have more kids, etc. While this is clearly a time-tested strategy, we are in an era with a puzzling disinterest in fertility, even among those who claim to seek long-term influence.

Another way to join long-term fights is to add your weight to an agglomeration process whereby larger systems slowly gain over smaller ones. For example if the nations, cities, languages, and art genres with more participants tend to win over time, you can ally with one of these to help to tip the balance. Of course this influence only lasts as long as do these things. For example, if you push for short vs long hair in the current fashion change, that effect may only last until the next hair fashion cycle.

Pushing for the creation of a particular world government seems an extreme example of this agglomeration effect. A world government might last a very long time, and retain features from those who influenced its origin and early structure.

One way to have more influence on fights is to influence systems that are plastic now but will become more rigid later. This is the logic behind persuading children while they are still ignorant and gullible, before they become ignorant and stubbornly unchanging adults. Similarly one might want to influence a young but growing firm or empire. This is also the logic behind trying to be involved in setting patterns and standards during the early days of a new technology. I remember hearing people say this explicitly back when Xanadu was trying to influence the future web. People who influenced the early structure of AM radio and FAX machines had a disproportionate influence, though such influence greatly declines when such systems themselves later decline.

The farming and industrial revolutions were periods of unusually high amounts of change, and we may encounter another such revolution in a century or so. If so, it might be worth saving and collecting resources in preparation for the extra influence available during this next great revolution.


Intellectual Status Isn’t That Different

In our world, we use many standard markers of status. These include personal connections with high status people and institutions, power, wealth, popularity, charisma, intelligence, eloquence, courage, athleticism, beauty, distinctive memorable personal styles, and participation in difficult achievements. We also use these same status markers for intellectuals, though specific fields favor specific variations. For example, in economics we favor complex game theory proofs and statistical analyses of expensive data as types of difficult achievements.

When the respected intellectuals for topic X tell the intellectual history of topic X, they usually talk about a sequence over time of positions, arguments, and insights. Particular people took positions and offered arguments (including about evidence), which taken together often resulted in insight that moved a field forward. Even if such histories do not say so directly, they give the strong impression that the people, positions, and arguments mentioned were selected for inclusion in the story because they were central to causing the field to move forward with insight. And since these mentioned people are usually the high status people in these fields, this gives the impression that the main way to gain status in these fields is to offer insight that produces progress; the implication is that correlations with other status markers are mainly due to other markers indicating who has an inclination and ability to create insight.

Long ago when I studied the history of science, I learned that these standard histories given by insiders are typically quite misleading. When historians carefully study the history of a topic area, and try to explain how opinions changed over time, they tend to credit different people, positions, and arguments. While standard histories tend to correctly describe the long term changes in overall positions, and the insights which contributed to those changes, they are more often wrong about which people and arguments caused such changes. Such histories tend to be especially wrong when they claim that a prominent figure was the first to take a position or make an argument. One can usually find lower status people who said basically the same things before. And high status accomplishments tend to be given more credit than they deserve in causing opinion change.

The obvious explanation for these errors is that we are hypocritical about what counts for status among intellectuals. We pretend that the point of intellectual fields is to produce intellectual progress, and to retain past progress in people who understand it. And as a result, we pretend that we assign status mainly based on such contributions. But in fact we mostly evaluate the status of intellectuals in the same way we evaluate most everyone, not changing our markers nearly as much as we pretend in each intellectual context. And since most of the things that contribute to status don’t strongly influence who actually offers positions and arguments that result in intellectual insight and progress, we can’t reasonably expect the people we tend to pick as high status to typically have been very central to such processes. But there’s enough complexity and ambiguity in intellectual histories to allow us to pretend that these people were very central.

What if we could make the real intellectual histories more visible, so that it became clearer who caused what changes via their positions, arguments, and insight? Well then fields would have the two usual choices for how to respond to hypocrisy exposed: raise their behaviors to meet their ideals, or lower their ideals to meet their behaviors. In the first case, the desire for status would drive much stronger efforts to actually produce insights that drive progress, making plausible much faster rates of progress. In this case it could well be worth spending half of all research budgets on historians to carefully track who contributed how much. The factor of two lost in all that spending on historians might be more than compensated by intellectuals focused much more strongly on producing real insight, instead of on the usual high-status-giving imitations.

Alas I don’t expect many actual funders of intellectual activity today to be tempted by this alternative, as they also care much more about achieving status, via affiliation with high status intellectuals, than they do about producing intellectual insight and progress.


The Master and His Emissary

I had many reasons to want to read Iain McGilchrist’s 2009 book The Master and His Emissary.

  1. It’s an ambitious big-picture book, by a smart knowledgeable polymath. I love that sort of book.
  2. I’ve been meaning to learn more about brain structure, and this book talks a lot about that.
  3. I’ve been wanting to read more literary-based critics of economics, and of sci/tech more generally.
  4. I’m interested in critiques of civilization suggesting that people were better off in less modern worlds.

This video gives an easy to watch book summary:

McGilchrist has many strong opinions on what is good and bad in the world, and on where civilization has gone wrong in history. What he mainly does in his book is to organize these opinions around a core distinction: the left vs right split in our brains. In sum: while we need both left and right brain style thinking, civilization today has gone way too far in emphasizing left styles, and that’s the main thing that’s wrong with the world today.

McGilchrist maps this core left-right brain distinction onto many dozens of other distinctions, and in each case he says we need more of the right version and less of the left. He doesn’t really argue much for why right versions are better (on the margin); he mostly sees that as obvious. So what his book mainly does is help people who agree with his values organize their thinking around a single key idea: right brains are better than left.

Here is McGilchrist’s key concept of what distinguishes left from right brain reasoning: Continue reading "The Master and His Emissary" »
