Tag Archives: Future

World Government Risks Collective Suicide

If your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn’t turn to suicide. Your ability to do this is aided by the fact that your mind is internally divided; while in many months part of you wants to commit suicide, it is quite rare for a majority coalition of your mind to support such an action.

In the movie Lord of the Rings, Denethor, Steward of Gondor, is in a suicidal mood when enemies attack the city. If not for the heroics of Gandalf, that mood might have ended his city. In the movie Dr. Strangelove, the crazed General Ripper “believes the Soviets have been using fluoridation of the American water supplies to pollute the ‘precious bodily fluids’ of Americans” and orders planes to start a nuclear attack, which ends badly. In many mass suicides through history, powerful leaders have been able to make whole communities commit suicide.

In a nuclear MAD situation, a nation can last unbombed only as long as no one who can “push the button” falls into a suicidal mood. Or into one of a thousand other moods that in effect lead to misjudgments and refusals to listen to reason, and that eventually lead to suicide. This is a serious problem for any nuclear nation that wants to live long relative to the number of people who can push the button times the timescale on which moods change. When there are powers large enough that their suicide could take down civilization, then the risk of power suicide becomes a risk of civilization suicide. Even if the risk is low in any one year, over the long run this becomes a serious risk.
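
To make that scaling concrete, here is a minimal sketch in Python. All the numbers are illustrative assumptions, not estimates: the per-person chance of a catastrophic mood, the count of button-pushers, and the monthly mood timescale.

```python
# Sketch: chance a nuclear power stays unbombed, assuming (hypothetically)
# that k people can push the button, that moods change each period, and that
# each person independently enters a suicidal mood with probability p per period.
def survival_probability(p: float, k: int, periods: int) -> float:
    """Probability that no button-pusher hits a suicidal mood in any period."""
    per_period_ok = (1 - p) ** k      # all k stay sane this period
    return per_period_ok ** periods   # ... and in every period in a row

# Illustrative numbers only: 10 button-pushers, monthly mood changes,
# a one-in-a-million chance per person per month, over 1000 years.
print(survival_probability(p=1e-6, k=10, periods=12_000))  # ~0.887
```

Doubling either the number of button-pushers or the per-person mood risk roughly doubles the cumulative failure rate, which is the scaling described above.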

This is a big problem for world or universal government. We today coordinate on the scale of firms, cities, nations, and international organizations. However, the fact that we also fail to coordinate to deal with many large problems on these scales shows that we face severe limits in our coordination abilities. We also face many problems that could be aided by coordination via world government, and future civilizations will be similarly tempted by the coordination powers of central governments.

But, alas, central power risks central suicide, either done directly on purpose or as an indirect consequence of other broken thinking. In contrast, in a sufficiently decentralized world when one power commits suicide, its place and resources tend to be taken by other powers who have not committed suicide. Competition and selection is a robust long-term solution to suicide, in a way that centralized governance is not.

This is my tentative best guess for the largest future filter that we face, and that other alien civilizations have faced. The temptation to form central governments and other governance mechanisms is strong: to solve immediate coordination problems, to help powerful interests gain advantages via the capture of such central powers, and to slake the ambition thirst of those who would lead such powers. Over long periods this will seem to have been a wise choice, until suicide ends it all and no one is left to say “I told you so.”

Divide the trillions of future years over which we want to last by the increasingly short periods over which moods and sanity change, and you see a serious problem, made worse by the lack of a sufficiently long view to make us care enough to solve it. For example, if the suicide mood of a universal government changed once a second, then it needs about 10^20 non-suicide moods in a row to last a trillion years.
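
For the record, here is the arithmetic behind that figure, a quick check rather than anything deep:

```python
# Seconds in a trillion years, i.e. the number of once-per-second
# non-suicide moods needed in a row.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7
moods = 1e12 * SECONDS_PER_YEAR          # a trillion years of seconds
print(f"{moods:.1e}")                    # ~3.2e19, i.e. roughly 10^20
```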

Social Media Lessons

Women consistently express more interest than men in stories about weather, health and safety, natural disasters and tabloid news. Men are more interested than women in stories about international affairs, Washington news and sports. (more)

Tabloid newspapers … tend to be simply and sensationally written and to give more prominence than broadsheets to celebrities, sports, crime stories, and even hoaxes. They also take political positions on news stories: ridiculing politicians, demanding resignations, and predicting election results. (more)

Two decades ago, we knew nearly as much about computers, the internet, and the human and social sciences as we do today. In principle, this should have let us foresee broad trends in computer/internet applications to our social lives. Yet we seem to have been surprised by many aspects of today’s “social media”. We should take this as a chance to learn; what additional knowledge or insight would one have to add to our views from two decades ago to make recent social media developments not so surprising?

I asked this question Monday night on twitter and no one pointed me to existing essays on the topic; the topic seems neglected. So I’ve been pondering this for the last day. Here is what I’ve come up with.

Some people did use computers/internet for socializing twenty years ago, and those applications do have some similarities to applications today. But we also see noteworthy differences. Back then, a small passionate minority of mostly young nerdy status-aspiring men sat at desks in rare off hours to send each other text, via email and topic-organized discussion groups, as on Usenet. They tended to talk about grand big topics, like science and international politics, and were often combative and rude to each other. They avoided centralized systems, instead participating in many decentralized versions using separate identities; it was hard to see how popular any one person was across all these contexts.

In today’s social media, in contrast, most everyone is involved, text is more often displaced by audio, pictures, and video, and we typically use our phones, everywhere and at all times of day. We more often forward what others have said rather than saying things ourselves, the things we forward are more opinionated and less well vetted, and are more about politics, conflict, culture, and personalities. Our social media talk is also more in these directions, is more noticeably self-promotion, and is more organized around our personal connections in more centralized systems. We have more publicly visible measures of our personal popularity and attention, and we frequently get personal affirmations of our value and connection to specific others. As we talk directly more via text than voice, and date more via apps than asking associates in person, our social interactions are more documented and separable, and thus protect us more from certain kinds of social embarrassment.

Some of these changes should have been predictable from lower costs of computing and communication. Another way to understand these changes is that the pool of participants changed, from nerdy young men to everyone. But the best organizing principle I can offer is: social media today is more lowbrow than the highbrow versions once envisioned. While over the 1800s culture separated more into low versus high brow, over the last century this has reversed, with low displacing high, as in more informal clothes, pop music displacing classical, and movies displacing plays and opera. Social media is part of this trend, a trend that tech advocates, who sought higher social status for themselves and their tech, didn’t want to see.

TV news and tabloids have long been lower status than newspapers. Text has long been higher status than pictures, audio, and video. More carefully vetted news is higher status, and neutral news is higher status than opinionated rants. News about science and politics and the world is higher status than news about local culture and celebrities, which is higher status than personal gossip. Classic human norms against bragging and self-promotion reduce the status of those activities and of visible indicators of popularity and attention.

The mostly young male nerds who filled social media two decades ago and who tried to look forward envisioned highbrow versions made for people like themselves. Such people like to achieve status by sparring in debates on the topics that fill high status traditional media. As they don’t like to admit they do this for status, they didn’t imagine much self-promotion or detailed tracking of individual popularity and status. And as they resented loss of privacy and strong concentrations of corporate power, they imagined decentralized systems with effectively anonymous participants.

But in fact ordinary people don’t care as much about privacy and corporate concentration, they don’t as much mind self-promotion and status tracking, they are more interested in gossip and tabloid news than high status news, they care more about loyalty than neutrality, and they care more about gaining status via personal connections than via grand-topic debate sparring. They like wrestling-like bravado and conflict, are less interested in accurate vetting of news sources, like to see frequent personal affirmations of their value and connection to specific others, and fear being seen as lower status if such things do not continue at a sufficient rate.

This high to lowbrow account suggests a key question for the future of social media: how low can we go? That is, what new low status but commonly desired social activities and features can new social media offer? One candidate that occurs to me is: salacious gossip on friends and associates. I’m not exactly sure how it can be implemented, but most people would like to share salacious rumors about associates, perhaps documented via surveillance data, in a way that allows them to gain relevant social credit from it while still protecting them from being sued for libel/slander when rumors are false (which they will often be), and at least modestly protecting them from being verifiably discovered by their rumor’s target. That is, even if a target suspects them as the source, they usually aren’t sure and can’t prove it to others. I tentatively predict that eventually someone will make a lot of money by providing such a service.

Another solid if less dramatic prediction is that as social media spreads out across the world, it will move toward the features desired by typical world citizens, relative to features desired by current social media users.

Added 17 Nov: I wish I had seen this good Arnold Kling analysis before I wrote the above.

Long Views Are Coming

One useful way to think about the future is to ask what key future dates are coming, and then to think about in what order they may come, in what order we want them, and how we might influence that order. Such key dates include extinction, theory of everything found, innovation runs out, exponential growth slows down, and most bio humans unemployed. Many key dates are firsts: alien life or civilization found, world government founded, off-Earth self-sufficient colony, big nuclear war, immortal born, time machine made, cheap emulations, and robots that can cheaply replace most all human workers. In this post, I want to highlight another key date, one that is arguably as important as any of the above: the day when the dominant actors take a long view.

So far history can be seen as a fierce competition by various kinds of units (including organisms, genes, and cultures) to control the distant future. Yet while this has resulted in very subtle and sophisticated behavior, almost all this behavior is focused on the short term. We see this in machine learning systems; even when they are selected to achieve end-of-game outcomes, they much prefer to do this via current behaviors that react to current stimuli. It seems to just be much harder to successfully plan on longer timescales.

Animal predators and prey developed brains to plan over short sections of a chase or fight. Human foragers didn’t plan much longer than that, and it took a lot of cultural selection to get human farmers to plan on the scale of a year, e.g., to save grain for winter eating and spring seeds. Today human organizations can consistently manage modest plans on the scale of a few years, but we fail badly when we try much larger or longer plans.

Arguably, competition and evolution will continue to select for units capable of taking longer views. And so if competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate.
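
A toy model can illustrate how decisive such selection would be. This is only a sketch, assuming (hypothetically) that units differ only in how much of their output they reinvest for the future rather than consume now:

```python
# Two competing units that differ only in their reinvestment rate.
def shares_over_time(reinvest_rates, years):
    wealth = [1.0 for _ in reinvest_rates]
    for _ in range(years):
        wealth = [w * (1 + r) for w, r in zip(wealth, reinvest_rates)]
    total = sum(wealth)
    return [w / total for w in wealth]

# A short-view unit reinvesting 2%/yr vs. a long-view unit reinvesting 4%/yr:
print(shares_over_time([0.02, 0.04], years=300))
# After 300 years the long-view unit holds ~99.7% of all resources.
```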

And this seems a very good thing! Arguably, the biggest thing that goes wrong in the world today is that we fail to take a long view. Because we fail to much consider the long run in our choices, we put a vast great future at risk, such as by tolerating avoidable existential risks. This will end once the dominant units take a long view. At that point there may be fights on which direction the future should take, and coordination failures may lead to bad outcomes, but at least the future will not be neglected.

The future not being neglected seems such a wonderfully good outcome that I’m tempted to call the day when this starts, “Long View Day,” one of the most important future dates. And working to hasten that day could be one of the most important things we can do to help the future. So I hereby call to action those who (say they) care about the distant future to help in this task.

A great feature of this task is that it doesn’t require great coordination; it is a “race to the top”. That is, it is in the interest of each cultural unit (nation, language, ethnicity, religion, city, firm, family, etc.) to figure out how to take effective long term views. So you can help the world by allying with a particular unit and helping it learn to take an effective long term view. You don’t have to choose between “selfishly” helping your unit, or helping the world as a whole.

One way to try to promote longer term views is to promote longer human lifespans. It’s not that clear to me this works, however, as even immortals can prioritize the short run. And extending lifespans is very hard. But it is a fine goal in any case.

A bad way to encourage long views is to just encourage the adoption of plans that advocates now claim are effective ways to help in the long run. After all, it seems that one of the main obstacles so far to taking long views is the typical low quality of long-term plans offered. Instead, we must work to make long term planning processes more reliable.

My guess is that a key problem is worse incentives and accountability for those who make long term predictions, and who propose and implement longer term plans. If your five year plan goes wrong, that could wreck your career, but you might make a nice long comfy career out of a fifty year plan that will only later go wrong. So we need to devise and test new ways to create better incentives for long term predictions and plans.

You won’t be surprised to hear me say I think prediction markets have promise as a way to create better incentives and accountability. But we haven’t experimented that much with long-term prediction markets, and they have some potential special issues, so there’s a lot of work to do to explore this approach.
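
For concreteness, here is a minimal sketch of one candidate mechanism, a logarithmic market scoring rule (LMSR) market maker; the liquidity parameter and trade sizes below are illustrative only:

```python
import math

# Logarithmic market scoring rule: a subsidized automated market maker.
# b is the liquidity parameter; q[i] is the number of shares sold on outcome i.
def cost(q, b):
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, b, i):
    """Current market probability of outcome i."""
    total = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / total

def buy(q, b, i, shares):
    """What a trader pays to buy `shares` of outcome i, and the new state."""
    q_new = list(q)
    q_new[i] += shares
    return cost(q_new, b) - cost(q, b), q_new

q, b = [0.0, 0.0], 100.0
paid, q = buy(q, b, i=0, shares=50)
print(round(paid, 2), round(price(q, b, 0), 3))  # pays ~28.09; price ~0.622
```

One appeal for long-term questions is that the sponsor’s maximum subsidy is bounded (b·ln n for n outcomes), so a patron can fund incentives today for a question that resolves decades later.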

Once we find ways to make more reliable long term plans, we will still face the problem that organizations are typically under the control of humans, who seem to consistently act on short term views. In my Age of Em scenario, this could be solved by having slower ems control long term choices, as they would naturally have longer term views. Absent ems, we may want to experiment with different cultural contexts for familiar ordinary humans, to see which can induce such humans to prioritize the long term.

If we can’t find contexts that make ordinary humans take long term views, we may want to instead create organizations with longer term views. One approach would be to release enough of them from tight human controls, and subject them to selection pressures that reward better long term views. For example, evolutionary finance studies what investment organizations free to reinvest all their assets would look like if they were selected for their ability to grow assets well.

Some will object to the creation of powerful entities whose preferences disagree with those of familiar humans alive at the time. And I admit that gives me pause. But if taken strictly that attitude seems to require that the future always remain neglected, if ordinary humans discount the future. I fear that may be too high a price to pay.

Long Legacies And Fights In A Competitive Universe

My last post discussed how to influence the distant future, using a framework focused on a random uncaring universe. This is, for example, the usual framework of most who see themselves as future-oriented “effective altruists”. They see most people and institutions as not caring much about the distant future, and they themselves as unusual exceptions in three ways: 1) their unusual concern for the distant future, 2) their unusual degree of general utilitarian altruistic concern, and 3) their attention to careful reasoning on effectiveness.

If few care much or effectively about the distant future, then efforts to influence that distant future don’t much structure our world, and so one can assume that the world is structured pretty randomly compared to one’s desires and efforts to influence the distant future. For example, one need not be much concerned about the possibility that others have conflicting plans, or that they will actively try to undermine one’s plans. In that case the analysis style of my last post seems appropriate.

But it would be puzzling if such a framework were so appropriate. After all, the current world we see around us is the result of billions of years of fierce competition, a competition that can be seen as about controlling the future. In biological evolution, a fierce competition has selected species and organisms for their ability to make future organisms resemble them. More recently, within cultural evolution, cultural units (nations, languages, ethnicities, religions, regions, cities, firms, families, etc.) have been selected for their ability to make future cultural units resemble them. For example, empires have been selected for their ability to conquer neighboring regions, inducing local residents to resemble them more than they do conquered empires.

In a world of fierce competitors struggling to influence the future, it makes less sense for any one focal alliance of organism, genetic, and cultural units (“alliance” for short in the rest of this post) to assume a random uncaring universe. It instead makes more sense to ask who has been winning this contest lately, what strategies have been helping them, and what advantages this one alliance might have or could find soon to help in this competition. Competitors would search for any small edge to help them pull even a bit ahead of others, they’d look for ways to undermine rivals’ strategies, and they’d expect rivals to try to undermine their own strategies. As most alliances lose such competitions, one might be happy to find a strategy that allows one to merely stay even for a while. Yes, successful strategies sometimes have elements of altruism, but usually as ways to assert prestige or to achieve win-win coordination deals.

Furthermore, in a world of fiercely competing alliances, one might expect to have more success at future influence via joining and allying strongly with existing alliances, rather than by standing apart from them with largely independent efforts. In math there is often an equivalence between “maximize A given a constraint on B” and “maximize B given a constraint on A”, in the sense that both formulations give the same answers. In a related fashion, similar efforts to influence the future might be framed in either of two rather different ways:

  1. I’m fundamentally an altruist, trying to make the world better, though at times I choose to ally and compromise with particular available alliances.
  2. I’m fundamentally a loyal member/associate of my alliance, but I think that good ways to help it are to a) prevent the end of civilization, b) promote innovation and growth within my alliance, which indirectly helps the world grow, and c) have my alliance be seen as helping the world in a way which raises its status and reputation.
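
For the mathematically inclined, the equivalence alluded to above is a standard constrained-optimization duality; a sketch, under the usual smoothness and regularity assumptions:

```latex
% For suitably matched thresholds a and b, the two programs share solutions:
\max_x \; A(x) \quad \text{s.t.} \quad B(x) \ge b
\qquad \Longleftrightarrow \qquad
\max_x \; B(x) \quad \text{s.t.} \quad A(x) \ge a
% Both satisfy the same first-order condition
% \nabla A(x^*) = \lambda \nabla B(x^*), \lambda \ge 0,
% and so both trace out the same Pareto frontier between A and B.
```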

This second framing seems to have some big advantages. People who follow it may win the cooperation, support, and trust of many members of a large and powerful alliance. And such ties and supports may make it easier to become and stay motivated to continue such efforts. As I said in my last post, people seem much more motivated to join fights than to simply help the world overall. Our evolved inclinations to join alliances probably create this stronger motivation.

Of course if in fact most all substantial alliances today are actually severely neglecting the distant future, then yes it can make more sense to mostly ignore them when planning to influence the distant future, except for minor connections of convenience. But we need to ask: how strong is the evidence that in fact existing alliances greatly neglect the long run today? Yes, they typically fail to adopt policies that many advocates say would help in the long run, such as global warming mitigation. But others disagree on the value of such policies, and failures to act may also be due to failures to coordinate, rather than to a lack of concern about the long run.

Perhaps the strongest evidence of future neglect is that typical financial rates of return have long remained well above growth rates, strongly suggesting a direct discounting of future outcomes due to their distance in time. For example, these high rates of return are part of standard arguments that it will be cheaper to accommodate global warming later, rather than to prevent it today. Evolutionary finance gives us theories of what investing organizations would do when selected to take a long view, and it doesn’t match what we see very well. Wouldn’t an alliance with a long view take advantage of high rates of return to directly buy future influence on the cheap? Yes, individual humans today have to worry about limited lifespans and difficulties controlling future agents who spend their money. But these should be much less of an issue for larger cultural units. Why don’t today’s alliances save more?
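
A quick sketch of what such influence-buying would look like, with purely illustrative rates:

```python
# How r > g lets a patient saver buy future influence cheaply.
def relative_influence(r: float, g: float, years: int) -> float:
    """A fund's wealth as a multiple of its initial share of the economy."""
    return ((1 + r) / (1 + g)) ** years

# Illustrative: a fund earning 5%/yr while the economy grows 3%/yr.
print(round(relative_influence(r=0.05, g=0.03, years=200), 1))  # ~46.8
```

A patient alliance could thus multiply its relative influence nearly fifty-fold over two centuries, which makes the observed lack of such saving all the more puzzling.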

Important related evidence comes from data on our largest longest-term known projects. Eight percent of global production is now spent on projects that cost over one billion dollars each. These projects tend to take many years, have consistent cost and time over-runs and benefit under-runs, and usually are net cost-benefit losers. I first heard about this from Freeman Dyson, in the “Fast is Beautiful” chapter of Infinite in All Directions. In Dyson’s experience, big slow projects are consistent losers, while fast experimentation often makes for big wins. Consider also the many large slow and failed attempts to aid poor nations.

Other related evidence includes the fact that the time when a firm builds a new HQ tends to be a good time to sell its stock, that futurists typically do badly at predicting important events even a few decades into the future, and the “rags to riches to rags in three generations” pattern, whereby individuals who find ways to grow wealth don’t pass such habits on to their grandchildren.

A somewhat clear exception where alliances seem to pay short term costs to promote long run gains is in religious and ideological proselytizing. Cultural units do seem to go out of their way to indoctrinate the young, to preach to those who might convert, and to entrench prior converts into not leaving. Arguably, farming era alliances also attended to the long run when they promoted fertility and war.

So what theories do we have to explain this data? I can see three:

1) Genes Still Rule – We have good theory on why organisms that reproduce via sex discount the future. When your kids only share half of your genes, if you consider spending on yourself now versus on your kid one generation later, you discount future returns at roughly a factor of two per generation, which isn’t bad as an approximation to actual financial rates of return. So one simple theory is that even though cultural evolution happens much faster than genetic evolution, genes still remain in firm control of cultural evolution. Culture is a more effective way for genes to achieve their purposes, but genes still set time discounts, not culture.
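
To check that approximation, convert a factor of two per generation into an annual rate, assuming an illustrative 25-year generation:

```python
# Implied annual discount rate from "factor of two per generation".
generation_years = 25  # illustrative assumption
annual_rate = 2 ** (1 / generation_years) - 1
print(f"{annual_rate:.1%}")  # ~2.8%/yr, in the ballpark of real financial returns
```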

2) Bad Human Reasoning – While humans are impressive actors when they can use trial and error to hone behaviors, their ability to reason abstractly but reliably to construct useful long term plans is terrible. Because of agency failures, cognitive biases, incentives to show off, excess far views, overconfidence, or something else, alliances learned long ago not to trust to human long term plans, or to accumulations of resources that humans could steal. Alliances have traditionally invested in proselytizing, fertility, prestige, and war because those gains are harder for agents to mismanage or steal via theft and big bad plans.

3) Cultures Learn Slowly – Cultures haven’t yet found good general purpose mechanisms for making long term plans. In particular, they don’t trust organized groups of humans to make and execute long term plans for them, or to hold assets for them. Cultures have instead experimented with many more specific ways to promote long term outcomes, and have only found successful versions in some areas. So they seem to act with longer term views in a few areas, but mostly have not yet managed to find ways to escape the domination of genes.

I lean toward this third compromise theory. In my next post, I’ll discuss a dramatic prediction from all this, one that can greatly influence our long-term priorities. Can you guess what I will say?

Long Legacies And Fights In An Uncaring Universe

What can one do today to have a big predictable influence on the long-term future? In this post I’ll use a simple decision framework, wherein there is no game or competition, one is just trying to influence a random uncaring universe. I’ll summarize some points I’ve made before. In my next post I’ll switch to a game framework, where there is more competition to influence the future.

Most random actions fail badly at this goal. That is, most parameters are tied to some sort of physical, biological, or social equilibrium, where if you move a parameter away from its current setting, the world tends to push it back. Yes there are exceptions, where a push might “tip” the world to a new rather different equilibrium, but in spaces where most points are far from tipping points, such situations are rare.

There is, however, one robust way to have a big influence on the distant future: speed up or slow down innovation and growth. The extreme version of this is preventing or causing extinction; while quite hard to do, this has enormous impact. Setting that aside, as the world economy grows exponentially, any small change to its current level is magnified over time. For example, if one invents something new that lasts, then that future world is more able to make more inventions faster, etc. This magnification grows into the future until the point in time when growth rates must slow down, such as when the solar system fills up, or when innovations in physical devices run out. By speeding up growth, you can reduce the waste of all the negentropy that is and will continue to be destroyed until our descendants manage to wrest control of such processes.
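
One hedged way to formalize this magnification claim, assuming simple exponential growth up to a distant saturation time:

```latex
% Assume output grows as Y(t) = Y_0 e^{g t} until some far-off saturation.
% Raising today's level by a small fraction \epsilon is equivalent to
% shifting the entire growth path earlier in time by
\delta \;=\; \frac{\ln(1+\epsilon)}{g} \;\approx\; \frac{\epsilon}{g},
% so every future milestone, saturation included, arrives \delta sooner,
% gaining roughly \delta times the (very large) post-saturation value flow.
```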

Alas, making roughly the same future happen sooner versus later doesn’t engage most people emotionally; they are much more interested in joining a “fight” over what character the future will take at any given size. One interesting way to take sides while still leveraging growth is to fund a long-lived organization that invests and saves its assets, and then later spends those assets to influence some side in a fight. The fact that investment rates of return have long exceeded growth rates suggests that one could achieve disproportionate influence in this way. Oddly, few seem to try this strategy.

Another way to leverage growth to influence future fights is via fertility: have more kids who themselves have more kids, etc. While this is clearly a time-tested strategy, we are in an era with a puzzling disinterest in fertility, even among those who claim to seek long-term influence.

Another way to join long-term fights is to add your weight to an agglomeration process whereby larger systems slowly gain over smaller ones. For example if the nations, cities, languages, and art genres with more participants tend to win over time, you can ally with one of these to help to tip the balance. Of course this influence only lasts as long as do these things. For example, if you push for short vs long hair in the current fashion change, that effect may only last until the next hair fashion cycle.

Pushing for the creation of a particular world government seems an extreme example of this agglomeration effect. A world government might last a very long time, and retain features from those who influenced its source and early structure.

One way to have more influence on fights is to influence systems that are plastic now but will become more rigid later. This is the logic behind persuading children while they are still ignorant and gullible, before they become ignorant and stubbornly unchanging adults. Similarly one might want to influence a young but growing firm or empire. This is also the logic behind trying to be involved in setting patterns and standards during the early days of a new technology. I remember hearing people say this explicitly back when Xanadu was trying to influence the future web. People who influenced the early structure of AM radio and FAX machines had a disproportionate influence, though such influence greatly declines when such systems themselves later decline.

The farming and industrial revolutions were periods of unusually high amounts of change, and we may encounter another such revolution in a century or so. If so, it might be worth saving and collecting resources in preparation for the extra influence available during this next great revolution.

Great Filter, 20 Years On

Twenty years ago today, I introduced the phrase “The Great Filter” in an essay on my personal website. Today Google says 300,000 web pages use this phrase, and 4.3% of those mention my name. This essay has 45 academic citations, and my related math paper has 17 cites.

These citations are a bit over 1% of my total citations, but this phrase accounts for 5% of my press coverage. This press is mostly dumb luck. I happened to coin a phrase on a topic of growing and wide interest, yet others more prestigious than I didn’t (as they often do) bother to replace it with another phrase that would trace back to them.

I have mixed feelings about writing the paper. Back then I was defying the usual academic rule to focus narrowly. I was right that it is possible to contribute to many more different areas than most academics do. But what I didn’t fully realize is that to academic economists non-econ publications don’t exist, and that publication is only the first step to academic influence. If you aren’t around in an area to keep publishing, giving talks, going to meetings, doing referee reports, etc., academics tend to correctly decide that you are politically powerless and thus you and your work can safely be ignored.

So I’m mostly ignored by the academics who’ve continued in this area – don’t get grants, students, or invitations to give talks, to comment on paper drafts, or to referee papers, grants, books, etc. The only time I’ve ever been invited to talk on the subject was a TEDx talk a few years ago. (And I’ve given over 350 talks in my career.) But the worst scenario of being ignored is that it is as if your paper never existed, and so you shouldn’t have bothered writing it. Thankfully I have avoided that outcome, as some of my insights have been taken to heart, both academically and socially. People now accept that finding independent alien life simpler than us would be bad news, that the very hard filter steps should be roughly equally spaced in our history, and that the great filter gives a reason to worry about humanity’s future prospects.

Spaceship Earth Explores Culture Space

Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. (more)

Many love science fiction stories of brave crews risking their lives to explore strange new spaces, stories much like the older adventure stories about European explorers risking their lives centuries ago to explore new places on Earth. (Yes, often to conquer and enslave the locals.) Many lament that we don’t have as many real such explorer stories today, and they say that we should support more human space exploration now in order to create such real heroic exploration stories, even though human space exploration is crazy expensive now, and offers few scientific, economic, or humanity-survival gains anytime soon. They say the good stories will be worth all that cost.

Since Henry George first invoked it in 1879, many have used the metaphor of Spaceship Earth to call attention to our common vulnerability and limited resources:

Spaceship Earth … is a world view encouraging everyone on Earth to act as a harmonious crew working toward the greater good. … “we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions” … “We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil.” (more)

In this post, I want to suggest that Spaceship Earth is in fact a story of a brave crew risking much to explore a strange new territory. But the space we explore is more cultural than physical.

During the industrial era, the world economy has doubled roughly every fifteen years. Each such doubling of output has moved us into new uncharted cultural territory. This growth has put new pressures on our environment, and has resulted in large and rapid changes to our culture and social organization.

This growth results mostly from innovation, and most innovations are small and well tested against local conditions, giving us little reason to doubt their local value. But all these small changes add up to big overall moves that are often entangled with externalities, coordination failures, and other reasons to doubt their net value.

So humanity continues to venture out into new untried and risky cultural spaces, via changes to cultural conditions with which we don’t have much experience, and which thus risk disaster and destruction. The good crew of Spaceship Earth should carefully weigh these risks when considering where and how fast to venture.

Consider seven examples:

  1. While humans seem to be adapting reasonably well to global warming, we risk big lumpy disruptive changes to Atlantic currents and Antarctic ice. Ecosystems also seem to be adapting okay, but we are risking big collapses to them as well.
  2. While ancient societies gave plenty of status and rewards to fertility, today high fertility behaviors are mostly seen as low status. This change is entwined with complex changes in gender norms and roles, but one result is that human fertility is falling toward below replacement in much of the world, and may fall much further. Over centuries this might produce a drastic decrease in world population, and productivity-threatening decreases in the scale of world production.
  3. While the world has become much more peaceful over the last century, this has been accompanied by big declines in cultural support for military action and tolerance for military losses. Is the world now more vulnerable to conquest by a new military power with more local cultural support and tolerance for losses?
  4. Farmer era self-control and self-discipline has weakened over time, in part via weaker religion. This has weakened cultural support for work and cultural suspicion of self-indulgence in sex, drugs, and media. So we now see less work and more drug addiction. How far will we slide?
  5. Via new media, we are exploring brave new worlds of how to make friends, form identities, achieve status, and learn about the world. As many have noted, these new ways risk many harms to happiness and social capital.
  6. Innovation was once greatly aided by tinkering, i.e., the ability to take apart and change familiar devices. Such tinkering is much less feasible in modern devices. Increasing regulation and risk aversion are also interfering with innovation. Are we as a result risking cultural support for innovation?
  7. Competition between firms has powered rapid growth, but winning bets on intangible capital is allowing leading firms to increasingly dominate industries. Does this undermine the competition that we’ve relied on so far to power growth?

The most common framing today for such issues is one of culture war. You ask yourself which side feels right to you, commiserate with your moral allies, then puff yourself up with righteous indignation against those who see things differently, and go to war with them. But we might do better to frame these as reasonable debates on how much to risk as we explore culture space.

In a common scene from exploration stories, a crew must decide whether to take a big risk, or choose among several risks. Some in the crew see a risk as worth the potential reward, while others want to search longer for better options, or retreat to try again another day. They may disagree on the tradeoff, but they all agree that both the risks and the rewards are real. It is just a matter of tradeoff details.

We might similarly frame key “value” debates as reasonable differing judgements on what chances to take as spaceship Earth explores culture space. Those who love new changes could admit that we are taking some chances in adopting them so quickly, with so little data to go on, while those who are suspicious of recent changes could admit that many seem to like their early effects. Rather than focus on directly evaluating changes, we might focus more on setting up tracking systems to watch for potential problems, and arranging for repositories of old culture practices that might help us to reverse changes if things go badly. And we might all see ourselves as part of a grand heroic adventure story, wherein a mostly harmonious crew explores a great strange cosmos of possible cultures.

If The Future Is Big

One way to predict the future is to find patterns in the past, and extend them into the future. And across the very long term history of everything, the one most robust pattern I see is: growth. Biology, and then humanity, has consistently grown in ability, capacity, and influence. Yes, there have been rare periods of widespread decline, but overall in the long run there has been far more growth than decline. 

We have good reasons to expect growth. Most growth is due to innovation, and once learned, many innovations are hard to unlearn. Yes, there have been some big widespread declines in history, such as the medieval Black Death and the decline of the Roman and Chinese empires at about the same time. But the historians who study the biggest such declines see them as surprisingly large, not surprisingly small. Knowing the details of those events, they would have been quite surprised to see such declines be ten times larger than those seen. Yes, it is possible in principle that we’ve been lucky, and that most planets or species that start out like ours went totally extinct. But if smaller declines are more common than bigger ones, the lack of big but not total declines in our history suggests that the chances of extinction-level declines were low.
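
One crude way to model this argument is with a power-law tail on decline sizes; the functional form and exponents here are illustrative assumptions, not estimates:

```python
# If P(decline exceeds x times the largest seen) = x ** -alpha, then the
# absence of declines 10x the largest observed suggests still-bigger ones
# (extinction-level, say 100x) are rarer still.
def tail_prob(multiple: float, alpha: float) -> float:
    return multiple ** -alpha

for alpha in (0.5, 1.0, 2.0):            # illustrative tail exponents
    print(alpha, tail_prob(100, alpha))  # 0.1, 0.01, 0.0001
```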

Yes, we should worry about the possibility of a big future decline soon. Perhaps due to global warming, resource exhaustion, falling fertility, or institutional rot. But this is mainly because the consequences would be so dire, not because such declines are likely. Even declines comparable in magnitude to the largest seen in history do not seem to me remotely sufficient to prevent the revival of long term growth afterward, as they do not prevent continued innovation. Thus while long-term growth is far from inevitable, it seems the most likely scenario to consider.

If growth is our most robust expectation for the future, what does that growth suggest or imply? The rest of this post summarizes many such plausible implications. There are far more of them than many realize.

Before I list the implications, consider an analogy. Imagine that you lived in a small mountain village, but that a huge city lay in the valley below. While it might be hard to see or travel to that city, the existence of that city might still change your mountain village life in many important ways. A big future can be like that big city to the village that is our current world. Now for those implications:

Future Influence Is Hard

Imagine that one thousand years ago you had a rough idea of the most likely overall future trajectory of civilization. For example, that an industrial revolution was likely in the next few millennia. Even with that unusual knowledge, you would find it quite hard to take concrete actions back then to substantially change the course of future civilization. You might be able to mildly improve the chances for your family, or perhaps your nation. And even then most of your levers of influence would focus on improving events in the next few years or decades, not millennia in the future.

One thousand years ago wasn’t unusual in this regard. At most any place-time in history it would have been quite hard to substantially influence the future of civilization, and most of your influence levers would focus on events in the next few decades.

Today, political activists often try to motivate voters by claiming that the current election is the most important one in a generation. They say this far more often than once per generation. But they’ve got nothing on futurists, who often say individuals today can have substantial influence over the entire future of the universe. From a recent Singularity Weblog podcast where Socrates interviews Max Tegmark:

Tegmark: I don’t think there’s anything inevitable about the human future. We are in a very unstable situation where it’s quite clear that it could go in several different directions. The greatest risk of all we face with AI and the future of technology is complacency, which comes from people saying things are inevitable. What’s the one greatest technique of psychological warfare? It’s to convince people “it’s inevitable; you’re screwed.” … I want to do exactly the opposite with my book, I want to make people feel empowered, and realize that this is a unique moment after 13.8 billion years of history, when we, people who are alive on this planet now, can actually make a spectacular difference for the future of life, not just on this planet, but throughout much of the cosmos. And not just for the next election cycle, but for billions of years. And the greatest risk is that people start believing that something is inevitable, and just don’t put in their best effort. There’s no better way to fail than to convince yourself that it doesn’t matter what you do.

Socrates: I actually also had a debate with Robin Hanson on my show, because in his book the Age of Em he started by saying basically this is how it’s going to be, more or less. And I told him, I told him I totally disagree with you, because it could be a lot worse or it could be a lot better. And it all depends on what we are going to do right now. But you are kind of saying this is how things are going to be. And he’s like yeah, because you extrapolate. …

Tegmark: That’s another great example. I mean Robin Hanson is a very creative guy and it’s a very thought-provoking book, I even wrote a blurb for it. But we can’t just say that’s how it’s going to be, because he even says himself that the Age of Em will only last for two years from the outside perspective. And our universe is going to be around for billions of years more. So surely we should put effort into making sure the rest becomes as great as possible too, shouldn’t we?

Socrates: Yes, agreed. (44:25-47:10)

Either individuals have always been able to have a big influence on the future universe, contrary to my claims above, or today is quite unusual. In which case we need concrete arguments for why today is so different.

Yes, it is possible to underestimate our influence, but surely it is also possible to overestimate it. I see no nefarious psychological warfare agency working to induce underestimation, but instead see great overestimation due to value signaling.

Most people don’t think much about the long term future, but when they do, far more of them see the future as hard to foresee than hard to influence. Most groups who discuss the long term future focus on which kinds of overall outcomes would most achieve their personal values; they pay far less attention to how concretely one might induce such outcomes. This serves the function of letting people use future talk as a way to affirm their values, but it overestimates influence.

My predictions in Age of Em are conditioned on the key assumption that ems are the first machines able to replace most all human labor. I don’t say influence is impossible, but instead say individual influence is most likely quite minor, and so one should focus on choosing small variations on the most likely scenarios one can identify.

We are also quite unlikely to have long term influence that isn’t mediated by intervening events. If you can’t think of a way to influence an Age of Em, should that happen, you are even less likely to influence the ages that would follow it.

Two Types of Future Filters

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth, like a big asteroid or nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.
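
A back-of-the-envelope version of that inference, crudely assuming a constant-rate process:

```python
# If no life-ending event has occurred in ~4 billion years, the implied rate
# is at most about one per 4 billion years, so over the next million years:
elapsed_years, window_years = 4e9, 1e6
print(window_years / elapsed_years)  # ~0.00025, i.e. ~0.025%
```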

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.
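
A small Monte Carlo can illustrate the equal-spacing result. It uses the standard limiting fact that each very hard try-try step’s success time, conditioned on success within the window, is nearly uniform over it, so conditional success times behave like sorted uniform draws:

```python
import random

# Conditional on n very-hard steps all finishing within a window of length T,
# expected gaps between steps (and after the last one) are each ~T/(n+1).
def mean_gaps(n_steps: int, T: float, trials: int = 100_000):
    sums = [0.0] * (n_steps + 1)
    for _ in range(trials):
        times = sorted(random.uniform(0, T) for _ in range(n_steps))
        prev = 0.0
        for i, t in enumerate(times):
            sums[i] += t - prev
            prev = t
        sums[n_steps] += T - prev   # leftover window after the last step
    return [round(s / trials, 2) for s in sums]

print(mean_gaps(n_steps=5, T=6.0))  # each gap averages ~1.0 = T/(n+1)
```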

Thus even a disaster that kills most all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, Eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could in principle go many ways, but which lock in forever once they pick a particular way. But it still seems hard to imagine such steps in the last half billion years.

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such disasters in part on our failing to empower a world government to prevent them.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as does advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.
