Great Filter, 20 Years On

Twenty years ago today, I introduced the phrase “The Great Filter” in an essay on my personal website. Today Google says 300,000 web pages use this phrase, and 4.3% of those mention my name. This essay has 45 academic citations, and my related math paper has 17 cites.

These citations are a bit over 1% of my total citations, but this phrase accounts for 5% of my press coverage. This press is mostly dumb luck. I happened to coin a phrase on a topic of growing and wide interest, yet others more prestigious than I didn’t (as they often do) bother to replace it with another phrase that would trace back to them.

I have mixed feelings about writing the paper. Back then I was defying the usual academic rule to focus narrowly. I was right that it is possible to contribute to many more different areas than most academics do. But what I didn’t fully realize is that to academic economists non-econ publications don’t exist, and that publication is only the first step to academic influence. If you aren’t around in an area to keep publishing, giving talks, going to meetings, doing referee reports, etc., academics tend to correctly decide that you are politically powerless and thus you and your work can safely be ignored.

So I’m mostly ignored by the academics who’ve continued in this area – don’t get grants, students, or invitations to give talks, to comment on paper drafts, or to referee papers, grants, books, etc. The only time I’ve ever been invited to talk on the subject was a TEDx talk a few years ago. (And I’ve given over 350 talks in my career.) But the worst scenario of being ignored is that it is as if your paper never existed, and so you shouldn’t have bothered writing it. Thankfully I have avoided that outcome, as some of my insights have been taken to heart, both academically and socially. People now accept that finding independent alien life simpler than us would be bad news, that the very hard filter steps should be roughly equally spaced in our history, and that the great filter gives a reason to worry about humanity’s future prospects.

Spaceship Earth Explores Culture Space

Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. (more)

Many love science fiction stories of brave crews risking their lives to explore strange new spaces, stories much like the older adventure stories about European explorers risking their lives centuries ago to explore new places on Earth. (Yes, often to conquer and enslave the locals.) Many lament that we don’t have as many real such explorer stories today, and they say that we should support more human space exploration now in order to create such real heroic exploration stories. Even though human space exploration is crazy expensive now, and offers few scientific, economic, or humanity-survival gains anytime soon. They say the good stories will be worth all that cost.

Since Henry George first invoked it in 1879, many have used the metaphor of Spaceship Earth to call attention to our common vulnerability and limited resources:

Spaceship Earth … is a world view encouraging everyone on Earth to act as a harmonious crew working toward the greater good. … “we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions” … “We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil.” (more)

In this post, I want to suggest that Spaceship Earth is in fact a story of a brave crew risking much to explore a strange new territory. But the space we explore is more cultural than physical.

During the industrial era, the world economy has doubled roughly every fifteen years. Each such doubling of output has moved us into new uncharted cultural territory. This growth has put new pressures on our environment, and has resulted in large and rapid changes to our culture and social organization.

This growth results mostly from innovation, and most innovations are small and well tested against local conditions, giving us little reason to doubt their local value. But all these small changes add up to big overall moves that are often entangled with externalities, coordination failures, and other reasons to doubt their net value.

So humanity continues to venture out into new untried and risky cultural spaces, via changes to cultural conditions with which we don’t have much experience, and which thus risk disaster and destruction. The good crew of Spaceship Earth should carefully weigh these risks when considering where and how fast to venture.

Consider seven examples:

  1. While humans seem to be adapting reasonably well to global warming, we risk big lumpy disruptive changes to Atlantic currents and Antarctic ice. Ecosystems also seem to be adapting okay, but we are risking big collapses to them as well.
  2. While ancient societies gave plenty of status and rewards to fertility, today high fertility behaviors are mostly seen as low status. This change is entwined with complex changes in gender norms and roles, but one result is that human fertility is falling toward below-replacement levels in much of the world, and may fall much further. Over centuries this might produce a drastic decrease in world population, and productivity-threatening decreases in the scale of world production.
  3. While the world has become much more peaceful over the last century, this has been accompanied by big declines in cultural support for military action and tolerance for military losses. Is the world now more vulnerable to conquest by a new military power with more local cultural support and tolerance for losses?
  4. Farmer-era self-control and self-discipline have weakened over time, in part via weaker religion. This has weakened cultural support for work and cultural suspicion of self-indulgence in sex, drugs, and media. So we now see less work and more drug addiction. How far will we slide?
  5. Via new media, we are exploring brave new worlds of how to make friends, form identities, achieve status, and learn about the world. As many have noted, these new ways risk many harms to happiness and social capital.
  6. Innovation was once greatly aided by tinkering, i.e., the ability to take apart and change familiar devices. Such tinkering is much less feasible in modern devices. Increasing regulation and risk aversion are also interfering with innovation. Are we as a result risking cultural support for innovation?
  7. Competition between firms has powered rapid growth, but winning bets on intangible capital is allowing leading firms to increasingly dominate industries. Does this undermine the competition that we’ve relied on so far to power growth?

The most common framing today for such issues is one of cultural war. You ask yourself which side feels right to you, commiserate with your moral allies, then puff yourself up with righteous indignation against those who see things differently, and go to war with them. But we might do better to frame these as reasonable debates on how much to risk as we explore culture space.

In a common scene from exploration stories, a crew must decide whether to take a big risk. Or choose among several risks. Some in the crew see a risk as worth the potential reward, while others want to search longer for better options, or retreat to try again another day. They may disagree on the tradeoff, but they all agree that both the risks and the rewards are real. It is just a matter of tradeoff details.

We might similarly frame key “value” debates as reasonable differing judgements on what chances to take as spaceship Earth explores culture space. Those who love new changes could admit that we are taking some chances in adopting them so quickly, with so little data to go on, while those who are suspicious of recent changes could admit that many seem to like their early effects. Rather than focus on directly evaluating changes, we might focus more on setting up tracking systems to watch for potential problems, and arranging for repositories of old culture practices that might help us to reverse changes if things go badly. And we might all see ourselves as part of a grand heroic adventure story, wherein a mostly harmonious crew explores a great strange cosmos of possible cultures.

If The Future Is Big

One way to predict the future is to find patterns in the past, and extend them into the future. And across the very long term history of everything, the one most robust pattern I see is: growth. Biology, and then humanity, has consistently grown in ability, capacity, and influence. Yes, there have been rare periods of widespread decline, but overall in the long run there has been far more growth than decline. 

We have good reasons to expect growth. Most growth is due to innovation, and once learned many innovations are hard to unlearn. Yes, there have been some big widespread declines in history, such as the medieval Black Death and the decline of the Roman and Chinese empires at about the same time. But the historians who study the biggest such declines see them as surprisingly large, not surprisingly small. Knowing the details of those events, they would have been quite surprised to see declines ten times larger than those actually seen. Yes, it is possible in principle that we’ve been lucky, and that most planets or species that start out like ours went totally extinct. But if smaller declines are more common than bigger ones, the lack of big but not total declines in our history suggests that the chance of extinction-level declines was low.
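
To make this inference concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that decline sizes follow a Pareto power-law tail; the exponent, event count, and extinction threshold are all invented, not calibrated to actual history:

```python
# Illustrative only: assumes decline sizes have a Pareto (power-law) tail,
# with invented parameters; not a calibrated historical estimate.
alpha = 1.5              # tail exponent: higher alpha = big declines rarer
n_declines = 30          # hypothetical count of notable declines in our history
extinction_size = 100.0  # a decline this many times the smallest is "extinction level"

p_extinct = extinction_size ** (-alpha)      # per-event chance of extinction size
p_none_seen = (1 - p_extinct) ** n_declines  # chance history shows none that big

print(f"per-event extinction-level chance: {p_extinct:.4f}")   # 0.0010
print(f"chance our record shows none:      {p_none_seen:.3f}") # 0.970
# With a thin tail, a record free of near-total declines is unsurprising; a much
# fatter tail would instead make our clean record look like a lucky fluke.
```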

Yes, we should worry about the possibility of a big future decline soon. Perhaps due to global warming, resource exhaustion, falling fertility, or institutional rot. But this is mainly because the consequences would be so dire, not because such declines are likely. Even declines comparable in magnitude to the largest seen in history do not seem to me remotely sufficient to prevent the revival of long term growth afterward, as they do not prevent continued innovation. Thus while long-term growth is far from inevitable, it seems the most likely scenario to consider.

If growth is our most robust expectation for the future, what does that growth suggest or imply? The rest of this post summarizes many such plausible implications. There are far more of them than many realize.

Before I list the implications, consider an analogy. Imagine that you lived in a small mountain village, but that a huge city lay down in the valley below. While it might be hard to see or travel to that city, the existence of that city might still change your mountain village life in many important ways. A big future can be like that big city to the village that is our current world. Now for those implications: Continue reading "If The Future Is Big" »

Future Influence Is Hard

Imagine that one thousand years ago you had a rough idea of the most likely overall future trajectory of civilization. For example, that an industrial revolution was likely in the next few millennia. Even with that unusual knowledge, you would find it quite hard to take concrete actions back then to substantially change the course of future civilization. You might be able to mildly improve the chances for your family, or perhaps your nation. And even then most of your levers of influence would focus on improving events in the next few years or decades, not millennia in the future.

One thousand years ago wasn’t unusual in this regard. At most any place-time in history it would have been quite hard to substantially influence the future of civilization, and most of your influence levers would focus on events in the next few decades.

Today, political activists often try to motivate voters by claiming that the current election is the most important one in a generation. They say this far more often than once per generation. But they’ve got nothing on futurists, who often say individuals today can have substantial influence over the entire future of the universe. From a recent Singularity Weblog podcast  where Socrates interviews Max Tegmark:

Tegmark: I don’t think there’s anything inevitable about the human future. We are in a very unstable situation where it’s quite clear that it could go in several different directions. The greatest risk of all we face with AI and the future of technology is complacency, which comes from people saying things are inevitable. What’s the one greatest technique of psychological warfare? It’s to convince people “it’s inevitable; you’re screwed.” … I want to do exactly the opposite with my book, I want to make people feel empowered, and realize that this is a unique moment after 13.8 billion years of history, when we, people who are alive on this planet now, can actually make a spectacular difference for the future of life, not just on this planet, but throughout much of the cosmos. And not just for the next election cycle, but for billions of years. And the greatest risk is that people start believing that something is inevitable, and just don’t put in their best effort. There’s no better way to fail than to convince yourself that it doesn’t matter what you do.

Socrates: I actually also had a debate with Robin Hanson on my show because in his book the Age of Em he started by saying basically this is how it’s going to be, more or less. And I told him, I told him I totally disagree with you because it could be a lot worse or it could be a lot better. And it all depends on what we are going to do right now. But you are kind of saying this is how things are going to be. And he’s like yeah because you extrapolate. …

Tegmark: That’s another great example. I mean Robin Hanson is a very creative guy and it’s a very thought-provoking book, I even wrote a blurb for it. But we can’t just say that’s how it’s going to be, because he even says himself that the Age of Em will only last for two years from the outside perspective. And our universe is going to be around for billions of years more. So surely we should put effort into making sure the rest becomes as great as possible too, shouldn’t we?

Socrates: Yes, agreed. (44:25-47:10)

Either individuals have always been able to have a big influence on the future universe, contrary to my claims above, or today is quite unusual. In which case we need concrete arguments for why today is so different.

Yes, it is possible to underestimate our influence, but surely it is also possible to overestimate it. I see no nefarious psychological warfare agency working to induce underestimation, but instead see great overestimation due to value signaling.

Most people don’t think much about the long term future, but when they do far more of them see the future as hard to foresee than hard to influence. Most groups who discuss the long term future focus on which kinds of overall outcomes would most achieve their personal values; they pay far less attention to how concretely one might induce such outcomes. This lets people use future talk as a way to affirm their values, but it also inflates estimates of influence.

My predictions in Age of Em are given the key assumption of ems as the first machines able to replace most all human labor. I don’t say influence is impossible, but instead say individual influence is most likely quite minor, and so one should focus on choosing small variations on the most likely scenarios one can identify.

We are also quite unlikely to have long term influence that isn’t mediated by intervening events. If you can’t think of a way to influence an Age of Em, if that happens, you are even less likely to influence ages that would follow it.

Two Types of Future Filters

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth. Like a big asteroid or nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.
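
This spacing claim can be checked with a toy Monte Carlo sketch. The window length, step count, and step difficulty below are invented for speed; only the qualitative pattern matters: conditioned on all hard steps finishing inside the window, the steps land roughly evenly across it.

```python
# Toy Monte Carlo of the "hard try-try steps are roughly equally spaced" claim.
# All parameters are invented; only the qualitative pattern matters.
import numpy as np

rng = np.random.default_rng(0)

T = 1.0          # window in which all steps must complete (e.g. a habitable era)
k = 3            # number of very hard sequential steps
mean = 5.0 * T   # expected time per step, much longer than the window

durations = rng.exponential(mean, size=(2_000_000, k))
lucky = durations[durations.sum(axis=1) <= T]  # rare histories where life "made it"
leftover = T - lucky.sum(axis=1)               # time remaining after the last step

print("lucky histories kept:", len(lucky))
print("mean time in each step:", lucky.mean(axis=0).round(3))
print("mean leftover time:", round(float(leftover.mean()), 3))
# All k+1 gaps come out near T/(k+1) = 0.25: however hard each step is, the
# successful histories spread their hard steps roughly evenly over the window.
```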

Thus even a disaster that kills most all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, Eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could in principle go many ways, but which lock in forever once they pick a particular way. But it still seems hard to imagine such steps in the last half billion years.

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such a disaster in part on our failing to empower a world government to prevent it.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as does advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.

Added: After a two month civility pause, I wrote a long detailed post on this topic.

More Than Death, Fear Decay

Most known “systems” decay, rot, age, and die. We usually focus on the death part, but the more fundamental problem is decay (a.k.a. rotting, aging). Death is almost inevitable, as immortality is extremely difficult to achieve. Systems that don’t decay can still die; we sometimes see systems where the chance of death stays constant over time. But for most complex systems, the chance of death rises with time, due to decay.
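
One way to make “decay” precise is with hazard rates. Here is a minimal sketch using a Weibull lifetime purely as an illustrative stand-in; the shape and scale numbers are invented:

```python
# Constant vs rising hazard: an exponential lifetime (Weibull shape = 1) is
# memoryless, with death equally likely at every age, while shape > 1 gives a
# death rate that climbs with age -- the signature of decay. Numbers invented.

def weibull_hazard(t, shape, scale):
    """Instantaneous death rate at age t for a Weibull(shape, scale) lifetime."""
    return (shape / scale) * (t / scale) ** (shape - 1)

SCALE = 50.0  # characteristic lifetime, in arbitrary "years"

for age in (1, 10, 25, 50, 75):
    no_decay = weibull_hazard(age, shape=1.0, scale=SCALE)  # constant: no aging
    decay = weibull_hazard(age, shape=3.0, scale=SCALE)     # rises with age
    print(f"age {age:>2}: no-decay hazard {no_decay:.3f}, decay hazard {decay:.5f}")
# The no-decay column stays at 0.020 forever; the decay column keeps climbing,
# so an older system becomes ever more likely to die in its next "year".
```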

Many simple physical systems, like chairs, decay because the materials of their parts decay. Such systems can often be rejuvenated by replacing those materials. More generally, simple modular systems can be rejuvenated by replacing the modular parts that decay. For example, it is possible to spend enough to maintain most cars and buildings indefinitely in a nearly original condition, though we rarely see this as worth the bother.

Complex adaptive systems (CAS), such as firms, have many parts in complex relations, relations that change in an attempt to adapt to changing conditions. When a CAS changes its design and structure to adapt, however, this rarely results in modular sub-designs that can be swapped out. Alas, the designs of most known CAS decay as they adapt. In biological organisms this is called “aging”, in software it is called “rot”, and in product design this is called the “innovator’s dilemma”. Human brains change from having “fluid” to “crystallized” intelligence, and machine learning systems trained in one domain usually find it harder to learn quite different domains. We also see aging in production plans, firms, empires, and legal systems. I don’t know of data on whether things like cities, nations, professions, disciplines, languages, sports, or art genres age. But it isn’t obvious that they don’t also decay.

It is not just that it is easier to create and train new CAS, relative to rejuvenating old ones. It seems more that we just don’t know how to prevent rot at any remotely reasonable cost. In software, designers often try to “refactor” their systems to slow the process of aging. And sometimes such designers report that they’ve completely halted aging. But these exceptions are mostly in systems that are small and simple, with stable environments, or with crazy amounts of redesign effort.

However, I think we can see at least one clear exception to this pattern of rotting CAS: some generalist species. If the continually changing environment of Earth caused all species to age at similar rates, then over the history of life on Earth we would see a consistent trend toward a weaker ability of life to adapt to changing conditions. Eventually life would lose its ability to adapt sufficiently, and life would die out. If some kinds of life could survive in a few very slowly changing garden environments, then eventually all life would descend from the stable species that waited unchanging in those few gardens. The longer it had been since a species had descended from a stable garden species, the faster that species would die out.

But that isn’t what we see. Instead, while species that specialize to particular environments do seem to go extinct more easily, generalist species seem to maintain their ability to adapt across eons, even after making a great many adaptations. Somehow, the designs of generalist species do not seem to rot, even though typical organisms within that species do rot. How do they do that?

It is possible that biological evolution has discovered some powerful design principles of which we humans are still ignorant. If so, then eventually we may learn how to cheaply make CAS that don’t rot. But in this case, why doesn’t evolution use those anti-rot design principles to create individual organisms that don’t decay or age? Evolution seems to judge it much more cost effective to make individual organisms that rot. A more likely hypothesis is that there is no cheap way to prevent rot; evolution has just continually paid a large cost to prevent rot. Perhaps early on, some species didn’t pay this cost, and won for a while. But eventually they died from rot, leaving only non-rotting species to inherit the Earth. It seems there must be some level in a system that doesn’t rot, if it is to last over the eons, and selection has ensured that the life we now see has such a level.

If valid, this perspective suggests a few implications for the future of life and civilization. First, we should seriously worry about which aspects of our modern civilization system are rotting. Human culture has lasted a million years, but many parts of our modern world are far younger. If the first easiest version of a system that we can find to do something is typically a rotting system, and if it takes a lot more work to find a non-rotting version, should we presume that most of the new systems we have are rotting versions? Farming-era empires consistently rotted; how sure can we be that our world-wide industry-era empire isn’t similarly rotting today? We may be accumulating a technical debt that will be expensive to repay. Law and regulation seem to be rotting; should we try to induce a big refactoring there? Should we try to create and preserve contrarian subcultures or systems that are less likely to crash with the dominant culture and system?

Second, we should realize that it may be harder than we thought to switch to a non-biological future. We humans are now quite tied to the biosphere, and would quickly die if biology were to die. But we have been slowly building systems that are less closely tied to biology. We have been digging up materials in mines, collecting energy directly from atoms and the Sun, and making things in factories. And we’ve started to imagine a future where the software in our brains is copied into factory-made hardware, i.e., ems, joined there by artificial software. At which point our descendants might no longer depend on biological systems. But replacing biological systems with our typically rotting artificial systems may end badly. And making artificial systems that don’t rot may be a lot more expensive and time-consuming than we’ve anticipated.

Some imagine that we will soon discover a simple powerful general learning algorithm, which will enable us to make a superintelligence, a super-smart hyper-consistent eternal mind with no internal conflicts and arbitrary abilities to indefinitely improve itself, make commitments, and preserve its values. This mind would then rule the universe forever more, at least until it met its alien equivalent. I expect that these visions have not sufficiently considered system rot, among other issues.

In my first book I guessed that during the age of em, individual ems would become fragile over time, and after a few subjective centuries they’d need to be replaced by copies of fresh scans of young humans. I also guessed that eventually it would become possible to substantially redesign brains, and that the arrival of this ability might herald the start of the next age after the age of em. If this requires figuring out how to make non-rotting versions of these new systems, the age of em might last even longer than one would otherwise guess.

Prediction Machines

One of my favorite books of the dotcom era was Information Rules, by Shapiro and Varian in 1998. At the time, tech boosters were saying that all the old business rules were obsolete, and anyone who disagreed “just doesn’t get it.” But Shapiro and Varian showed in detail how to understand the new internet economy in terms of standard economic concepts. They were mostly right, and Varian went on to become Google’s chief economist.

Today many tout a brave new AI-driven economic revolution, with some touting radical change. For example, a widely cited 2013 paper said:

47% of total US employment is in the high risk category … potentially automatable over … perhaps a decade or two.

Five years later, we haven’t yet seen changes remotely this big. And a new book is now a worthy successor to Information Rules:

In Prediction Machines, three eminent economists recast the rise of AI as a drop in the cost of prediction. With this single, masterful stroke, they lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.

As with Information Rules, these authors mostly focus on guessing the qualitative implications of such prediction machines. That is, they don’t say much about likely rates or magnitudes of change, but instead use basic economic analysis to guess likely directions of change. (Many example quotes below.) And I can heartily endorse almost all of these good solid guesses about change directions. A change in the cost of prediction is a fine way to frame recent tech advances, and if you want to figure out what they imply for your line of business, this is the book for you.

However, the book does at times go beyond estimating impact directions. It says “this time is different”, suggests “extraordinary changes over the next few years”, says an AI-induced recession might result from a burst of new tech, and suggests that the eventual impact of this tech will be similar to that of computers in general so far:

Everyone has had or will soon have an AI moment. We are accustomed to a media saturated with stories of new technologies that will change our lives. … Almost all of us are so used to the constant drumbeat of technology news that we numbly recite that the only thing immune to change is change itself. Until we have our AI moment. Then we realize that this technology is different. p.2

In various ways, prediction machines can “use language, form abstractions and concepts, solve the kinds of problems now [as of 1955] reserved for humans, and improve themselves.” We do not speculate on whether this process heralds the arrival of general artificial intelligence, “the Singularity”, or Skynet. However, as you will see, this narrower focus on prediction still suggests extraordinary changes over the next few years. Just as cheap arithmetic enabled by computers proved powerful in ushering in dramatic change in business and personal lives, similar transformations will occur due to cheap prediction. p.39

Once an AI is better than humans at a particular task, job losses will happen quickly. We can be confident that new jobs will arise within a few years and people will have something to do, but that will be little comfort for those looking for work and waiting for those new jobs to appear. An AI-induced recession is not out of the question. p.212

And they offer a motivating example that would require pretty advanced tech:

At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them. p.16

I can’t endorse any of these suggestions about magnitudes and rates of change. I estimate much smaller and slower change. But the book doesn’t argue for any of these claims, it more assumes them, and so I won’t bother to argue the topic here either. The book only mentions radical scenarios a few more times:

But is this time different? Hawking’s concern, shared by many, is that this time might be unusual because AI may squeeze out the last remaining advantages humans have over machines. How might an economist approach this question? … If you favor free trade between countries, then you … support developing AI, even if it replaces some jobs. Decades of research into the effect of trade show that other jobs will appear, and overall employment will not plummet. p.211

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve. p.222

Yes, research is underway to make prediction machines work in broader settings, but the breakthrough that will give rise to general artificial intelligence remains undiscovered. Some believe that AGI is so far out that we should not spend cycles worrying about it. … As with many AI-related issues, the future is highly uncertain. Is this the end of the world as we know it? Not yet, but it is the end of this book. Companies are deploying AIs right now. In applying the simple economics that underpin lower-cost prediction and higher-value complements to prediction, your business can make ROI-optimizing choices and strategic decisions with regard to AI. When we move beyond prediction machines to general artificial intelligence or even superintelligence, whatever that may be, then we will be at a different AI moment. That is something everyone agrees upon. p.223

As you can see, they don’t see radical scenarios as coming soon, nor see much urgency regarding them. A stance I’m happy to endorse. And I also endorse all those insightful qualitative change estimates, as illustrated by these samples: Continue reading "Prediction Machines" »

How Best Help Distant Future?

I greatly enjoyed Charles Mann’s recent book The Wizard and the Prophet. It contained the following stat, which I find to be pretty damning of academia:

Between 1970 and 1989, more than three hundred academic studies of the Green Revolution appeared. Four out of five were negative. p.437

Mann just did a related TED talk, which I haven’t seen, and posted this related article:

The basis for arguing for action on climate change is the belief that we have a moral responsibility to people in the future. But this is asking one group of people to make wrenching changes to help a completely different set of people to whom they have no tangible connection. Indeed, this other set of people doesn’t exist. There is no way to know what those hypothetical future people will want.

Picture Manhattan Island in the 17th century. Suppose its original inhabitants, the Lenape, could determine its fate, in perfect awareness of future outcomes. In this fanciful situation, the Lenape know that Manhattan could end up hosting some of the world’s great storehouses of culture. All will give pleasure and instruction to countless people. But the Lenape also know that creating this cultural mecca will involve destroying a diverse and fecund ecosystem. I suspect the Lenape would have kept their rich, beautiful homeland. If so, would they have wronged the present?

Economists tend to scoff at these conundrums, saying they’re just a smokescreen for “paternalistic” intellectuals and social engineers “imposing their own value judgments on the rest of the world.” (I am quoting the Harvard University economist Martin Weitzman.) Instead, one should observe what people actually do — and respect that. In their daily lives, people care most about the next few years and don’t take the distant future into much consideration. …

Usually economists use 5 percent as a discount rate — for every year of waiting, the price goes down 5 percent, compounded. … The implications for climate change are both striking and, to many people, absurd: at a 5 percent discount rate, economist Graciela Chichilnisky has calculated, “the present value of the earth’s aggregate output discounted 200 years from now is a few hundred thousand dollars.” … Chichilnisky, a major figure in the IPCC, has argued that this kind of thinking is not only ridiculous but immoral; it exalts a “dictatorship of the present” over the future.

Economists could retort that people say they value the future, but don’t act like it, even when the future is their own. And it is demonstrably true that many — perhaps most — men and women don’t set aside for retirement, buy sufficient insurance, or prepare their wills. If people won’t make long-term provisions for their own lives, why should we expect people to bother about climate change for strangers many decades from now? …

In his book, Scheffler discusses Children of Men … The premise of both book and film is that humanity has become infertile, and our species is stumbling toward extinction. … Our conviction that life is worth living is “more threatened by the prospect of humanity’s disappearance than by the prospect of our own deaths,” Scheffler writes. The idea is startling: the existence of hypothetical future generations matters more to people than their own existence. What this suggests is that, contrary to economists, the discount rate accounts for only part of our relationship to the future. People are concerned about future generations. But trying to transform this general wish into specific deeds and plans is confounding. We have a general wish for action but no experience working on this scale, in this time-frame. …

Overall, climate change asks us to reach for higher levels on the ladder of concern. If nothing else, the many misadventures of foreign aid have shown how difficult it is for even the best-intentioned people from one culture to know how to help other cultures. Now add in all the conundrums of working to benefit people in the future, and the hurdles grow higher. Thinking of all the necessary actions across the world, decade upon decade — it freezes thought. All of which indicates that although people are motivated to reach for the upper rungs, our efforts are more likely to succeed if we stay on the lower, more local rungs.

I side with economists here. The fact that we can relate emotionally to Children of Men hardly shows that people would actually react as it depicts. Fictional reactions often differ greatly from real ones. And I’m skeptical of Mann’s theory that we really do care greatly about helping the distant future, but are befuddled by the cognitive complexity of the task. Consider two paths to helping the distant future:

  1. Lobby via media and politics for collective strategies to prevent global warming now.
  2. Save resources personally now to be spent later to accommodate any problems then.

The saving path seems much less cognitively demanding than the lobby path, and in fact quite feasible cognitively. Resources will be useful later no matter what the actual future problems and goals turn out to be. Yes, the saving path faces agency costs, to control distant future folks tasked with spending your savings. But the lobby path also has agency costs, to control government as an agent.

Yes, the value of the saving path relative to the lobby path is reduced to the degree that prevention is cheaper than accommodation, or collective action more effective than personal action. But the value of the saving path increases enormously with time, as investments typically grow about 5% per year. And cognitive complexity costs of the lobby path also increase exponentially with time, as it becomes harder to foresee the problems and values of the distant future. (Ems wouldn’t be grateful for your global warming prevention, for example.)
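
To see the scale of this effect, here is a quick compounding sketch using the roughly 5%-per-year growth figure assumed above; the $1,000 principal is arbitrary:

```python
# Compound growth of resources saved now, at the ~5%/year rate assumed above.
rate = 0.05
principal = 1_000.0  # arbitrary amount set aside today, in dollars

for years in (25, 50, 100, 200):
    value = principal * (1 + rate) ** years
    print(f"after {years:>3} years: ${value:,.0f}")
# Roughly $3.4K after 25 years, $11K after 50, $132K after 100, and $17M after
# 200: waiting multiplies the resources available to help, no foresight required.
```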

Wait long enough to help and the relative advantage of the saving path should become overwhelming. So the fact that we see far more interest in the lobby path, relative to the savings path, really does suggest that people just don’t care that much about the distant future, and that global warming concern is a smokescreen for other policy agendas. No matter how many crocodile tears people shed regarding fictional depictions.

Added 5a: The posited smokescreen motive would be hidden, and perhaps unconscious.

Added 6p: I am told that in a half dozen US states it is cheap to create trusts and foundations that can accumulate assets for centuries, and then turn to helping with problems then, all without paying income or capital gains taxes on the accumulating assets.

Like the Ancients, We Have Gods. They’ll Get Greater.

Here’s a common story about gods. Our distant ancestors didn’t understand the world very well, and their minds contained powerful agent detectors. So they came to see agents all around them, such as in trees, clouds, mountains, and rivers. As these natural things vary enormously in size and power, our ancestors had to admit that such agents varied greatly in size and power. The big ones were thus “gods”, and to be feared. While our forager ancestors were fiercely egalitarian, and should thus naturally resent the existence of gods, gods were at least useful in limiting status ambitions of local humans; however big you were, you weren’t as big as gods. All-seeing powerful gods were also useful in enforcing norms; norm violators could expect to be punished by such gods.

However, once farming era war, density, and capital accumulation allowed powerful human rulers, these rulers co-opted gods to enforce their rule. Good gods turned bad. Rulers claimed the support of gods, or claimed to be gods themselves, allowing their decrees to take priority over social norms. However, now that we (mostly) know that there just isn’t a spirit world, and now that we can watch our rulers much more closely, we know that our rulers are mere humans without the support of gods. So we much less tolerate strong rulers, their claims of superiority, or their norm violations. Yay us.

There are some problems with this story, however. Until the Axial revolution of about 3500 years ago, most gods were local to a social group. For our forager ancestors, this made them VERY local, and thus typically small. Such gods cared much more that you show them loyalty than what you believed, and they weren’t very moralizing. Most gods had limited power; few were all-powerful, all-knowing, and immortal. People mostly had enough data to see that their rulers did not have vast personal powers. And finally, rather than reluctantly submitting to gods out of fear, we have long seen people quite eager to worship, praise, and idolize gods, and also their leaders, apparently greatly enjoying the experience.

Here’s a somewhat different story. Long before they became humans, our ancestors deeply craved both personal status, and also personal association with others who have high status. This is ancient animal behavior. Forager egalitarian norms suppressed these urges, via emphasizing the also ancient envy and resentment of the high status. Foragers came to distinguish dominance, the bad status that forces submission via power, from prestige, the good status that invites you to learn and profit by watching and working with its holders. As part of their larger pattern of hidden motives, foragers often pretended that they liked leaders for their prestige, even when they really also accepted and even liked their dominance.

Once foragers believed in spirits, they also wanted to associate with high status spirits. Spirits increased the supply of high status others to associate with, which people liked. But foragers also preferred to associate with local spirits, to show local loyalties. With farming, social groups became larger, and status ambitions could also rise. Egalitarian norms were suppressed. So there came a demand for larger gods, encompassing the larger groups.

In this story the fact that ancient gods were spirits who could sometimes violate ordinary physical rules was incidental, not central. The key driving force was a desire to associate with high status others. The ability to violate physical rules did confer status, but it wasn’t a different kind of status than that held by powerful humans. So very powerful humans who claimed to be gods weren’t wrong, in terms of the essential dynamic. People were eager to worship and praise both kinds of gods, for similar reasons.

Thus today even if we don’t believe in spirits, we can still have gods, if we have people who can credibly acquire very high status, via prestige or dominance. High enough to induce not just grudging admiration, but eager and emotionally-unreserved submission and worship. And we do in fact have such people. We have people who are the best in the world at the abilities that the ancients would recognize for status, such as physical strength and coordination, musical or storytelling ability, social savvy, and intelligence. And in addition, technology and social complexity offer many new ways to be impressive. We can buy impressive homes, clothes, and plastic surgery, and travel at impressive speeds via impressive vehicles. We can know amazing things about the universe, and about our social world, via science and surveillance.

So we today do in fact have gods, in effect if not in name. (Though actors who play gods on screen can be seen as ancient-style gods.) The resurgence of forager values in the industrial era makes us reluctant to admit it, but a casual review of celebrity culture makes it very clear, I’d say. Yes, we mostly admit that our celebrities don’t have supernatural powers, but that doesn’t much detract from the very high status that they have achieved, or our inclination to worship them.

While it isn’t obviously the most likely scenario, one likely and plausible future scenario that has been worked out in unusual detail is the em scenario, as discussed in my book Age of Em. Ems would acquire many more ways to be individually impressive, acquiring more of the features that made the mythical ancient gods so impressive. Ems could be immortal, occupy many powerful and diverse physical bodies, move around the world at the speed of light, think very very fast, have many copies, and perhaps even somewhat modify their brains to expand each copy’s mental capacity. Automation assistants could expand their abilities even more.

As most ems are copies of the few hundred most productive ems, there are enormous productivity differences among typical ems. By any reasonable measure, status would vary enormously. Some would be gods relative to others. Not just in a vague metaphorical sense, but in a deep gut-grabbing emotional sense. Humans, and ems, will deeply desire to associate with them, via praise, worship and more.

Our ancestors had gods, we have gods, and our descendants will likely have even greater, more compelling gods. The phenomenon of gods is quite far from dead.

Toward Micro-Likes

Long ago when electricity and phones were new, they were largely unregulated, and privately funded. But then as the tech (and especially the interfaces) stopped changing so fast, and showed big scale and network economies, regulation stepped in. Today social media still seems new. But as it hasn’t been changing as much lately, and it also shows large scale and network economies, many are talking now about heavier regulation. In this post, let me suggest that a lot more change is possible; we aren’t near the sort of stability that electricity and phones reached when they became heavily regulated.

Back in the early days of the web and internet people predicted many big radical changes. Yet few then mentioned social media, the application now most strongly associated with this new frontier. What did we miss? The usual story, which I find plausible, is that we missed just how much people love to get many frequent signals of their social connections: likes, retweets, etc. Social media gives us more frequent “attaboy” and “we see & like you” signals. People care more than we realized about the frequency, relative to the size, of such signals.

But if that’s the key lesson, social media should be able to move a lot further in this direction. For example, today Facebook has two billion monthly users and produces four million likes per minute, for an average of about three likes per day per monthly user. Twitter has 300 million monthly users, who send 500 million tweets per day, for less than two tweets per day per monthly user. (I can’t find stats on Twitter likes or retweets.) Which I’d say is actually a pretty low rate of positive feedback.
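
These per-user rates are simple division; here is a quick check using only the figures quoted in this paragraph:

```python
# Back-of-envelope check of the feedback rates quoted above.
fb_users = 2_000_000_000         # Facebook monthly users
fb_likes_per_minute = 4_000_000  # likes produced per minute
fb_likes_per_day = fb_likes_per_minute * 60 * 24

tw_users = 300_000_000           # Twitter monthly users
tw_tweets_per_day = 500_000_000  # tweets sent per day

print(f"FB likes per day per monthly user: {fb_likes_per_day / fb_users:.2f}")  # ~2.88
print(f"tweets per day per monthly user:   {tw_tweets_per_day / tw_users:.2f}") # ~1.67
```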

Imagine you had a wall-sized screen, full of social media items, and that while you browsed this wall the direction of your gaze was tracked continuously to see which items your gaze was on or near. From that info, one could give the authors or subjects of those items far more granular info on who is paying how much attention to them. Not only on how often and how much your stuff is watched, but also on the mood and mental state of those watchers. If some of those items were continuous video feeds from other people, then those others could be producing many more social media items to which others could attend.
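
As a rough sketch of what aggregating such granular attention signals might look like, here is a minimal example; every name, field, and sample value is hypothetical:

```python
# Hypothetical sketch: folding a stream of gaze samples into per-item attention
# stats. All identifiers and data here are invented for illustration.
from collections import defaultdict

# Each sample: (viewer, item gazed at, seconds of gaze, inferred mood)
gaze_samples = [
    ("alice", "post_1", 2.5, "amused"),
    ("alice", "post_2", 0.4, "neutral"),
    ("bob",   "post_1", 6.0, "engaged"),
]

attention = defaultdict(lambda: {"viewers": set(), "seconds": 0.0, "moods": []})
for viewer, item, secs, mood in gaze_samples:
    stats = attention[item]
    stats["viewers"].add(viewer)   # who has looked at this item
    stats["seconds"] += secs       # total gaze time it has attracted
    stats["moods"].append(mood)    # watcher mental states, for richer feedback

for item, stats in attention.items():
    print(item, "-", len(stats["viewers"]), "viewers,",
          stats["seconds"], "sec,", stats["moods"])
```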

Also, so far we’ve usually just naively counted likes, retweets, etc., as if everyone counted the same. But we could instead use non-uniform weights based on popularity or other measures. And given how much people like to participate in synchronized rituals, we could also create and publicize statistics on how synchronized various groups of people are in their social media actions. And offer new tools to help them synchronize more finely.
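
As one minimal sketch of such non-uniform weighting, here each like counts in proportion to the log of the liker’s follower count; that weighting rule and all the names are hypothetical choices, not a recommendation:

```python
# Hypothetical weighted-like tally: a like from a popular account counts for
# more than a like from an obscure one. The log weighting is an arbitrary choice.
import math

follower_counts = {"alice": 50, "bob": 5_000, "carol": 2_000_000}
likes_on_post = ["alice", "bob", "carol"]

naive_count = len(likes_on_post)
weighted = sum(math.log1p(follower_counts[u]) for u in likes_on_post)

print("naive like count:   ", naive_count)        # 3: everyone counts the same
print("weighted like total:", round(weighted, 1)) # ~27.0: carol's like dominates
```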

My point here isn’t to predict or recommend specific changes for future social media. I’m instead just trying to make the point that a lot of room for improvement remains. Such gains might be delayed or prevented by heavy regulation.
