Tag Archives: Future

Future Influence Is Hard

Imagine that one thousand years ago you had a rough idea of the most likely overall future trajectory of civilization. For example, that an industrial revolution was likely in the next few millennia. Even with that unusual knowledge, you would find it quite hard to take concrete actions back then to substantially change the course of future civilization. You might be able to mildly improve the chances for your family, or perhaps your nation. And even then most of your levers of influence would focus on events in the next few years or decades, not millennia in the future.

One thousand years ago wasn’t unusual in this regard. At most any place-time in history it would have been quite hard to substantially influence the future of civilization, and most of your influence levers would focus on events in the next few decades.

Today, political activists often try to motivate voters by claiming that the current election is the most important one in a generation. They say this far more often than once per generation. But they’ve got nothing on futurists, who often say individuals today can have substantial influence over the entire future of the universe. From a recent Singularity Weblog podcast where Socrates interviews Max Tegmark:

Tegmark: I don’t think there’s anything inevitable about the human future. We are in a very unstable situation where it’s quite clear that it could go in several different directions. The greatest risk of all we face with AI and the future of technology is complacency, which comes from people saying things are inevitable. What’s the one greatest technique of psychological warfare? It’s to convince people “it’s inevitable; you’re screwed.” … I want to do exactly the opposite with my book, I want to make people feel empowered, and realize that this is a unique moment after 13.8 billion years of history, when we, people who are alive on this planet now, can actually make a spectacular difference for the future of life, not just on this planet, but throughout much of the cosmos. And not just for the next election cycle, but for billions of years. And the greatest risk is that people start believing that something is inevitable, and just don’t put in their best effort. There’s no better way to fail than to convince yourself that it doesn’t matter what you do.

Socrates: I actually also had a debate with Robin Hanson on my show because in his book The Age of Em he started by saying basically this is how it’s going to be, more or less. And I told him, I told him I totally disagree with you because it could be a lot worse or it could be a lot better. And it all depends on what we are going to do right now. But you are kind of saying this is how things are going to be. And he’s like yeah because you extrapolate. …

Tegmark: That’s another great example. I mean Robin Hanson is a very creative guy and it’s a very thought-provoking book, I even wrote a blurb for it. But we can’t just say that’s how it’s going to be, because he even says himself that the Age of Em will only last for two years from the outside perspective. And our universe is going to be around for billions of years more. So surely we should put effort into making sure the rest becomes as great as possible too, shouldn’t we?

Socrates: Yes, agreed. (44:25-47:10)

Either individuals have always been able to have a big influence on the future universe, contrary to my claims above, or today is quite unusual. In which case we need concrete arguments for why today is so different.

Yes, it is possible to underestimate our influence, but surely it is also possible to overestimate it. I see no nefarious psychological warfare agency working to induce underestimation, but instead see great overestimation due to value signaling.

Most people don’t think much about the long term future, but when they do, far more of them see the future as hard to foresee than hard to influence. Most groups who discuss the long term future focus on which kinds of overall outcomes would most achieve their personal values; they pay far less attention to how concretely one might induce such outcomes. This lets people use future talk as a way to affirm their values, but it overestimates influence.

My predictions in Age of Em are conditioned on the key assumption that ems will be the first machines able to replace most all human labor. I don’t say influence is impossible, but instead say individual influence is most likely quite minor, and so one should focus on choosing small variations on the most likely scenarios one can identify.

We are also quite unlikely to have long term influence that isn’t mediated by intervening events. If you can’t think of a way to influence an Age of Em, should it happen, you are even less likely to influence ages that would follow it.

Two Types of Future Filters

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth. Like a big asteroid or a nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.

Thus even a disaster that kills most all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could go many ways, but which lock in forever once a particular way is picked. But it still seems hard to imagine such steps in the last half billion years.

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such a disaster in part on our failing to empower a world government to prevent it.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as does advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.

More Than Death, Fear Decay

Most known “systems” decay, rot, age, and die. We usually focus on the death part, but the more fundamental problem is decay (a.k.a. rotting, aging). Death is almost inevitable, as immortality is extremely difficult to achieve. Systems that don’t decay can still die; we sometimes see systems where the chance of death stays constant over time. But for most complex systems, the chance of death rises with time, due to decay.

Many simple physical systems, like chairs, decay because the materials of their parts decay. Such systems can often be rejuvenated by replacing those materials. More generally, simple modular systems can be rejuvenated by replacing the modular parts that decay. For example, it is possible to spend enough to maintain most cars and buildings indefinitely in a nearly original condition, though we rarely see this as worth the bother.

Complex adaptive systems (CAS), such as firms, have many parts in complex relations, relations that change in an attempt to adapt to changing conditions. When a CAS changes its design and structure to adapt, however, this rarely results in modular sub-designs that can be swapped out. Alas, the designs of most known CAS decay as they adapt. In biological organisms this is called “aging”, in software it is called “rot”, and in product design this is called the “innovator’s dilemma”. Human brains change from having “fluid” to “crystallized” intelligence, and machine learning systems trained in one domain usually find it harder to learn quite different domains. We also see aging in production plans, firms, empires, and legal systems. I don’t know of data on whether things like cities, nations, professions, disciplines, languages, sports, or art genres age. But it isn’t obvious that they don’t also decay.

It is not just that it is easier to create and train new CAS, relative to rejuvenating old ones. It seems more that we just don’t know how to prevent rot at any remotely reasonable cost. In software, designers often try to “refactor” their systems to slow the process of aging. And sometimes such designers report that they’ve completely halted aging. But these exceptions are mostly in systems that are small and simple, with stable environments, or with crazy amounts of redesign effort.

However, I think we can see at least one clear exception to this pattern of rotting CAS: some generalist species. If the continually changing environment of Earth caused all species to age at similar rates, then over the history of life on Earth we would see a consistent trend toward a weaker ability of life to adapt to changing conditions. Eventually life would lose its ability to sufficiently adapt, and life would die out. If some kinds of life could survive in a few very slowly changing garden environments, then eventually all life would descend from the stable species that waited unchanging in those few gardens. The longer it had been since a species had descended from a stable garden species, the faster that species would die out.

But that isn’t what we see. Instead, while species that specialize to particular environments do seem to go extinct more easily, generalist species seem to maintain their ability to adapt across eons, even after making a great many adaptations. Somehow, the designs of generalist species do not seem to rot, even though typical organisms within that species do rot. How do they do that?

It is possible that biological evolution has discovered some powerful design principles of which we humans are still ignorant. If so, then eventually we may learn how to cheaply make CAS that don’t rot. But in this case, why doesn’t evolution use those anti-rot design principles to create individual organisms that don’t decay or age? Evolution seems to judge it much more cost effective to make individual organisms that rot. A more likely hypothesis is that there is no cheap way to prevent rot; evolution has just continually paid a large cost to prevent rot. Perhaps early on, some species didn’t pay this cost, and won for a while. But eventually they died from rot, leaving only non-rotting species to inherit the Earth. It seems there must be some level in a system that doesn’t rot, if it is to last over the eons, and selection has ensured that the life we now see has such a level.

If valid, this perspective suggests a few implications for the future of life and civilization. First, we should seriously worry about which aspects of our modern civilization system are rotting. Human culture has lasted a million years, but many parts of our modern world are far younger. If the first easiest version of a system that we can find to do something is typically a rotting system, and if it takes a lot more work to find a non-rotting version, should we presume that most of the new systems we have are rotting versions? Farming-era empires consistently rotted; how sure can we be that our world-wide industry-era empire isn’t similarly rotting today? We may be accumulating a technical debt that will be expensive to repay. Law and regulation seem to be rotting; should we try to induce a big refactoring there? Should we try to create and preserve contrarian subcultures or systems that are less likely to crash along with the dominant culture and system?

Second, we should realize that it may be harder than we thought to switch to a non-biological future. We humans are now quite tied to the biosphere, and would quickly die if biology were to die. But we have been slowly building systems that are less closely tied to biology. We have been digging up materials in mines, collecting energy directly from atoms and the Sun, and making things in factories. And we’ve started to imagine a future where the software in our brains is copied into factory-made hardware, i.e., ems, joined there by artificial software. At which point our descendants might no longer depend on biological systems. But replacing biological systems with our typically rotting artificial systems may end badly. And making artificial systems that don’t rot may be a lot more expensive and time-consuming than we’ve anticipated.

Some imagine that we will soon discover a simple powerful general learning algorithm, which will enable us to make a superintelligence, a super-smart hyper-consistent eternal mind with no internal conflicts and arbitrary abilities to indefinitely improve itself, make commitments, and preserve its values. This mind would then rule the universe forever more, at least until it met its alien equivalent. I expect that these visions have not sufficiently considered system rot, among other issues.

In my first book I guessed that during the age of em, individual ems would become fragile over time, and after a few subjective centuries they’d need to be replaced by copies of fresh scans of young humans. I also guessed that eventually it would become possible to substantially redesign brains, and that the arrival of this ability might herald the start of the next age after the age of em. If this requires figuring out how to make non-rotting versions of these new systems, the age of em might last even longer than one would otherwise guess.

Prediction Machines

One of my favorite books of the dotcom era was Information Rules, by Shapiro and Varian in 1998. At the time, tech boosters were saying that all the old business rules were obsolete, and anyone who disagreed “just doesn’t get it.” But Shapiro and Varian showed in detail how to understand the new internet economy in terms of standard economic concepts. They were mostly right, and Varian went on to become Google’s chief economist.

Today many tout a brave new AI-driven economic revolution, with some touting radical change. For example, a widely cited 2013 paper said:

47% of total US employment is in the high risk category … potentially automatable over … perhaps a decade or two.

Five years later, we haven’t yet seen changes remotely this big. And a new book is now a worthy successor to Information Rules:

In Prediction Machines, three eminent economists recast the rise of AI as a drop in the cost of prediction. With this single, masterful stroke, they lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.

As with Information Rules, these authors mostly focus on guessing the qualitative implications of such prediction machines. That is, they don’t say much about likely rates or magnitudes of change, but instead use basic economic analysis to guess likely directions of change. (Many example quotes below.) And I can heartily endorse almost all of these good solid guesses about change directions. A change in the cost of prediction is a fine way to frame recent tech advances, and if you want to figure out what they imply for your line of business, this is the book for you.

However, the book does at times go beyond estimating impact directions. It says “this time is different”, suggests “extraordinary changes over the next few years”, says an AI-induced recession might result from a burst of new tech, and says the eventual impact of this tech will be similar to that of computers in general so far:

Everyone has had or will soon have an AI moment. We are accustomed to a media saturated with stories of new technologies that will change our lives. … Almost all of us are so used to the constant drumbeat of technology news that we numbly recite that the only thing immune to change is change itself. Until we have our AI moment. Then we realize that this technology is different. p.2

In various ways, prediction machines can “use language, form abstractions and concepts, solve the kinds of problem now [as of 1955] reserved for humans, and improve themselves.” We do not speculate on whether this process heralds the arrival of general artificial intelligence, “the Singularity”, or Skynet. However, as you will see, this narrower focus on prediction still suggests extraordinary changes over the next few years. Just as cheap arithmetic enabled by computers proved powerful in ushering in dramatic change in business and personal lives, similar transformations will occur due to cheap prediction. p.39

Once an AI is better than humans at a particular task, job losses will happen quickly. We can be confident that new jobs will arise within a few years and people will have something to do, but that will be little comfort for those looking for work and waiting for those new jobs to appear. An AI-induced recession is not out of the question. p.212

And they offer a motivating example that would require pretty advanced tech:

At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them. p.16

I can’t endorse any of these suggestions about magnitudes and rates of change. I estimate much smaller and slower change. But the book doesn’t argue for any of these claims, it more assumes them, and so I won’t bother to argue the topic here either. The book only mentions radical scenarios a few more times:

But is this time different? Hawking’s concern, shared by many, is that this time might be unusual because AI may squeeze out the last remaining advantages humans have over machines. How might an economist approach this question? … If you favor free trade between countries, then you … support developing AI, even if it replaces some jobs. Decades of research into the effect of trade show that other jobs will appear, and overall employment will not plummet. p.211

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve. p.222

Yes, research is underway to make prediction machines work in broader settings, but the breakthrough that will give rise to general artificial intelligence remains undiscovered. Some believe that AGI is so far out that we should not spend cycles worrying about it. … As with many AI-related issues, the future is highly uncertain. Is this the end of the world as we know it? Not yet, but it is the end of this book. Companies are deploying AIs right now. In applying the simple economics that underpin lower-cost prediction and higher-value complements to prediction, your business can make ROI-optimizing choices and strategic decisions with regard to AI. When we move beyond prediction machines to general artificial intelligence or even superintelligence, whatever that may be, then we will be at a different AI moment. That is something everyone agrees upon. p.223

As you can see, they don’t see radical scenarios as coming soon, nor see much urgency regarding them. A stance I’m happy to endorse. And I also endorse all those insightful qualitative change estimates, as illustrated by these samples: Continue reading "Prediction Machines" »

How Best Help Distant Future?

I greatly enjoyed Charles Mann’s recent book The Wizard and the Prophet. It contained the following stat, which I find to be pretty damning of academia:

Between 1970 and 1989, more than three hundred academic studies of the Green Revolution appeared. Four out of five were negative. p.437

Mann just did a related TED talk, which I haven’t seen, and posted this related article:

The basis for arguing for action on climate change is the belief that we have a moral responsibility to people in the future. But this is asking one group of people to make wrenching changes to help a completely different set of people to whom they have no tangible connection. Indeed, this other set of people doesn’t exist. There is no way to know what those hypothetical future people will want.

Picture Manhattan Island in the 17th century. Suppose its original inhabitants, the Lenape, could determine its fate, in perfect awareness of future outcomes. In this fanciful situation, the Lenape know that Manhattan could end up hosting some of the world’s great storehouses of culture. All will give pleasure and instruction to countless people. But the Lenape also know that creating this cultural mecca will involve destroying a diverse and fecund ecosystem. I suspect the Lenape would have kept their rich, beautiful homeland. If so, would they have wronged the present?

Economists tend to scoff at these conundrums, saying they’re just a smokescreen for “paternalistic” intellectuals and social engineers “imposing their own value judgments on the rest of the world.” (I am quoting the Harvard University economist Martin Weitzman.) Instead, one should observe what people actually do — and respect that. In their daily lives, people care most about the next few years and don’t take the distant future into much consideration. …

Usually economists use 5 percent as a discount rate — for every year of waiting, the price goes down 5 percent, compounded. … The implications for climate change are both striking and, to many people, absurd: at a 5 percent discount rate, economist Graciela Chichilnisky has calculated, “the present value of the earth’s aggregate output discounted 200 years from now is a few hundred thousand dollars.” … Chichilnisky, a major figure in the IPCC, has argued that this kind of thinking is not only ridiculous but immoral; it exalts a “dictatorship of the present” over the future.

Economists could retort that people say they value the future, but don’t act like it, even when the future is their own. And it is demonstrably true that many — perhaps most — men and women don’t set aside for retirement, buy sufficient insurance, or prepare their wills. If people won’t make long-term provisions for their own lives, why should we expect people to bother about climate change for strangers many decades from now? …

In his book, Scheffler discusses Children of Men … The premise of both book and film is that humanity has become infertile, and our species is stumbling toward extinction. … Our conviction that life is worth living is “more threatened by the prospect of humanity’s disappearance than by the prospect of our own deaths,” Scheffler writes. The idea is startling: the existence of hypothetical future generations matters more to people than their own existence. What this suggests is that, contrary to economists, the discount rate accounts for only part of our relationship to the future. People are concerned about future generations. But trying to transform this general wish into specific deeds and plans is confounding. We have a general wish for action but no experience working on this scale, in this time-frame. …

Overall, climate change asks us to reach for higher levels on the ladder of concern. If nothing else, the many misadventures of foreign aid have shown how difficult it is for even the best-intentioned people from one culture to know how to help other cultures. Now add in all the conundrums of working to benefit people in the future, and the hurdles grow higher. Thinking of all the necessary actions across the world, decade upon decade — it freezes thought. All of which indicates that although people are motivated to reach for the upper rungs, our efforts are more likely to succeed if we stay on the lower, more local rungs.

I side with economists here. The fact that we can relate emotionally to Children of Men hardly shows that people would actually react as it depicts. Fictional reactions often differ greatly from real ones. And I’m skeptical of Mann’s theory that we really do care greatly about helping the distant future, but are befuddled by the cognitive complexity of the task. Consider two paths to helping the distant future:

  1. Lobby via media and politics for collective strategies to prevent global warming now.
  2. Save resources personally now to be spent later to accommodate any problems then.

The saving path seems much less cognitively demanding than the lobby path, and in fact quite feasible cognitively. Resources will be useful later no matter what the actual future problems and goals turn out to be. Yes, the saving path faces agency costs, to control distant future folks tasked with spending your savings. But the lobby path also has agency costs, to control government as an agent.

Yes, the value of the saving path relative to the lobby path is reduced to the degree that prevention is cheaper than accommodation, or collective action more effective than personal action. But the value of the saving path increases enormously with time, as investments typically grow about 5% per year. And cognitive complexity costs of the lobby path also increase exponentially with time, as it becomes harder to foresee the problems and values of the distant future. (Ems wouldn’t be grateful for your global warming prevention, for example.)
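To make that compounding point concrete, here is a minimal sketch in Python; the steady 5% real return is an illustrative assumption, not a forecast:

```python
# Illustrative only: how much a saved resource grows if invested at a
# steady 5% real annual return, versus being spent immediately.
for years in (50, 100, 200):
    growth = 1.05 ** years
    print(f"$1 saved today buys about ${growth:,.0f} of help in {years} years")
# roughly $11 after 50 years, about $130 after 100, and about $17,000 after 200
```

So a dollar devoted to accommodation two centuries from now can do vastly more work than a dollar spent on prevention today, unless prevention is correspondingly cheaper.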

Wait long enough to help and the relative advantage of the saving path should become overwhelming. So the fact that we see far more interest in the lobby path, relative to the savings path, really does suggest that people just don’t care that much about the distant future, and that global warming concern is a smokescreen for other policy agendas. No matter how many crocodile tears people shed regarding fictional depictions.

Added 5a: The posited smokescreen motive would be hidden, and perhaps unconscious.

Added 6p: I am told that in a half dozen US states it is cheap to create trusts and foundations that can accumulate assets for centuries, and then turn to helping with problems then, all without paying income or capital gains taxes on the accumulating assets.

Like the Ancients, We Have Gods. They’ll Get Greater.

Here’s a common story about gods. Our distant ancestors didn’t understand the world very well, and their minds contained powerful agent detectors. So they came to see agents all around them, such as in trees, clouds, mountains, and rivers. As these natural things vary enormously in size and power, our ancestors had to admit that such agents varied greatly in size and power. The big ones were thus “gods”, and to be feared. While our forager ancestors were fiercely egalitarian, and should thus naturally resent the existence of gods, gods were at least useful in limiting status ambitions of local humans; however big you were, you weren’t as big as gods. All-seeing powerful gods were also useful in enforcing norms; norm violators could expect to be punished by such gods.

However, once farming era war, density, and capital accumulation allowed powerful human rulers, these rulers co-opted gods to enforce their rule. Good gods turned bad. Rulers claimed the support of gods, or claimed to be gods themselves, allowing their decrees to take priority over social norms. However, now that we (mostly) know that there just isn’t a spirit world, and now that we can watch our rulers much more closely, we know that our rulers are mere humans without the support of gods. So we much less tolerate strong rulers, their claims of superiority, or their norm violations. Yay us.

There are some problems with this story, however. Until the Axial revolution of about 3500 years ago, most gods were local to a social group. For our forager ancestors, this made them VERY local, and thus typically small. Such gods cared much more that you show them loyalty than what you believed, and they weren’t very moralizing. Most gods had limited power; few were all-powerful, all-knowing, and immortal. People mostly had enough data to see that their rulers did not have vast personal powers. And finally, rather than reluctantly submitting to gods out of fear, we have long seen people quite eager to worship, praise, and idolize gods, and also their leaders, apparently greatly enjoying the experience.

Here’s a somewhat different story. Long before they became humans, our ancestors deeply craved both personal status, and also personal association with others who have high status. This is ancient animal behavior. Forager egalitarian norms suppressed these urges, via emphasizing the also ancient envy and resentment of the high status. Foragers came to distinguish dominance, the bad status that forces submission via power, from prestige, the good status that invites you to learn and profit by watching and working with its holders. As part of their larger pattern of hidden motives, foragers often pretended that they liked leaders for their prestige, even when they really also accepted and even liked their dominance.

Once foragers believed in spirits, they also wanted to associate with high status spirits. Spirits increased the supply of high status others to associate with, which people liked. But foragers also preferred to associate with local spirits, to show local loyalties. With farming, social groups became larger, and status ambitions could also rise. Egalitarian norms were suppressed. So there came a demand for larger gods, encompassing the larger groups.

In this story the fact that ancient gods were spirits who could sometimes violate ordinary physical rules was incidental, not central. The key driving force was a desire to associate with high status others. The ability to violate physical rules did confer status, but it wasn’t a different kind of status than that held by powerful humans. So very powerful humans who claimed to be gods weren’t wrong, in terms of the essential dynamic. People were eager to worship and praise both kinds of gods, for similar reasons.

Thus today even if we don’t believe in spirits, we can still have gods, if we have people who can credibly acquire very high status, via prestige or dominance. High enough to induce not just grudging admiration, but eager and emotionally-unreserved submission and worship. And we do in fact have such people. We have people who are the best in the world at the abilities that the ancients would recognize for status, such as physical strength and coordination, musical or story telling ability, social savvy, and intelligence. And in addition, technology and social complexity offer many new ways to be impressive. We can buy impressive homes, clothes, and plastic surgery, and travel at impressive speeds via impressive vehicles. We can know amazing things about the universe, and about our social world, via science and surveillance.

So we today do in fact have gods, in effect if not in name. (Though actors who play gods on screen can be seen as ancient-style gods.) The resurgence of forager values in the industrial era makes us reluctant to admit it, but a casual review of celebrity culture makes it very clear, I’d say. Yes, we mostly admit that our celebrities don’t have supernatural powers, but that doesn’t much detract from the very high status that they have achieved, or our inclination to worship them.

While it isn’t obviously the most likely scenario, one likely and plausible future scenario that has been worked out in unusual detail is the em scenario, as discussed in my book Age of Em. Ems would acquire many more ways to be individually impressive, acquiring more of the features that made the mythical ancient gods so impressive. Ems could be immortal, occupy many powerful and diverse physical bodies, move around the world at the speed of light, think very very fast, have many copies, and perhaps even somewhat modify their brains to expand each copy’s mental capacity. Automation assistants could expand their abilities even more.

As most ems are copies of the few hundred most productive ems, there are enormous productivity differences among typical ems. By any reasonable measure, status would vary enormously. Some would be gods relative to others. Not just in a vague metaphorical sense, but in a deep gut-grabbing emotional sense. Humans, and ems, will deeply desire to associate with them, via praise, worship and more.

Our ancestors had gods, we have gods, and our descendants will likely have even greater, more compelling gods. The phenomenon of gods is quite far from dead.

Toward Micro-Likes

Long ago when electricity and phones were new, they were largely unregulated, and privately funded. But then as the tech (and especially the interfaces) stopped changing so fast, and showed big scale and network economies, regulation stepped in. Today social media still seems new. But as it hasn’t been changing as much lately, and it also shows large scale and network economies, many are talking now about heavier regulation. In this post, let me suggest that a lot more change is possible; we aren’t near the sort of stability that electricity and phones reached when they became heavily regulated.

Back in the early days of the web and internet people predicted many big radical changes. Yet few then mentioned social media, the application now most strongly associated with this new frontier. What did we miss? The usual story, which I find plausible, is that we missed just how much people love to get many frequent signals of their social connections: likes, retweets, etc. Social media gives us more frequent “attaboy” and “we see & like you” signals. People care more than we realized about the frequency, relative to the size, of such signals.

But if that’s the key lesson, social media should be able to move a lot further in this direction. For example, today Facebook has two billion monthly users and produces four million likes per minute, for an average of about three likes per day per monthly user. Twitter has 300 million monthly users, who send 500 million tweets per day, for less than two tweets per day per monthly user. (I can’t find stats on Twitter likes or retweets.) Which I’d say is actually a pretty low rate of positive feedback.
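As a quick sanity check on those per-user rates, here is the arithmetic from the figures above in a small Python sketch:

```python
# Back-of-envelope check of the feedback rates quoted above.
fb_monthly_users = 2_000_000_000
fb_likes_per_minute = 4_000_000
fb_likes_per_user_per_day = fb_likes_per_minute * 60 * 24 / fb_monthly_users
print(round(fb_likes_per_user_per_day, 1))   # ~2.9 likes per day per monthly user

tw_monthly_users = 300_000_000
tw_tweets_per_day = 500_000_000
print(round(tw_tweets_per_day / tw_monthly_users, 1))  # ~1.7 tweets per day per user
```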

Imagine you had a wall-sized screen, full of social media items, and that while you browsed this wall the direction of your gaze was tracked continuously to see which items your gaze was on or near. From that info, one could give the authors or subjects of those items far more granular info on who is paying how much attention to them. Not only on how often and how much your stuff is watched, but also on the mood and mental state of those watchers. If some of those items were continuous video feeds from other people, then those others could be producing many more social media items to which others could attend.

Also, so far we’ve usually just naively counted likes, retweets, etc., as if everyone counted the same. But we could instead use non-uniform weights based on popularity or other measures. And given how much people like to participate in synchronized rituals, we could also create and publicize statistics on what groups of people are how synchronized in their social media actions. And offer new tools to help them synchronize more finely.
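For instance, here is a hypothetical sketch of non-uniform weighting; the names and the particular weighting scheme are my own illustrative assumptions, not a proposal from any existing platform:

```python
import math
from typing import Dict, List

def weighted_like_score(likers: List[str], followers: Dict[str, int]) -> float:
    """Illustrative non-uniform weighting: each like counts in proportion to
    the log-scaled follower count of the person who gave it."""
    return sum(math.log10(1 + followers.get(user, 0)) for user in likers)

followers = {"alice": 50, "bob": 20_000, "carol": 3}
print(weighted_like_score(["alice", "bob", "carol"], followers))
# bob's like dominates; three likes no longer count the same
```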

My point here isn’t to predict or recommend specific changes for future social media. I’m instead just trying to make the point that a lot of room for improvement remains. Such gains might be delayed or prevented by heavy regulation.

Growth Is Change. So Is Death.

The very readable book The Wizard and the Prophet tells the story of environmental prophet William Vogt investigating the apocalypse-level deaths of guano-making birds near Peru. When he discovered the cause in the El Nino weather cycle, his policy recommendations were to do nothing to mitigate this natural cause; he instead railed against many much smaller human influences, demanding their reversal. A few years later his classic 1948 screed Road To Survival, which contained pretty much all the standard environmental advice and concepts used today, continued to warn against any but small human-caused changes to the environment, while remaining largely indifferent to even huge natural changes.

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after a yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within their horizon. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

But of course few are very good at resolving their near versus far incoherences. And so the positions people take end up depending a lot on how they first framed the key issues, as in terms of short or long term changes.

On Value Drift

The outcomes within any space-time region can be seen as resulting from 1) preferences of various actors able to influence the universe in that region, 2) absolute and relative power and influence of those actors, and 3) constraints imposed by the universe. Changes in outcomes across regions result from changes in these factors.

While you might mostly approve of changes resulting from changing constraints, you might worry more about changes due to changing values and influence. That is, you likely prefer to see more influence by values closer to yours. Unfortunately, the consistent historical trend has been for values to drift over time, increasing the distance between random future and current values. As this trend looks like a random walk, we see no obvious limit to how far values can drift. So if the value you place on the values of others falls rapidly enough with the distance between values, you should expect long term future values to be very wrong.
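To see why a random walk implies no obvious limit on drift, here is a toy simulation of my own; the step sizes and units are arbitrary illustrations, not estimates:

```python
# Toy model: if each generation's value change is an independent random step,
# the typical distance from today's values keeps growing, roughly with the
# square root of the number of generations, with no obvious bound.
import random
import statistics

def typical_drift(generations: int, trials: int = 2000) -> float:
    distances = []
    for _ in range(trials):
        position = 0.0
        for _ in range(generations):
            position += random.gauss(0, 1)   # one generation's value change
        distances.append(abs(position))
    return statistics.mean(distances)

for g in (10, 100, 1000):
    print(g, round(typical_drift(g), 1))   # roughly 2.5, 8, 25
```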

What influences value change?
Inertia – The more existing values are tied to important entrenched systems, the less they change.
Growth – On average, over time civilization collects more total influence over most everything.
Competition – If some values consistently win key competitive contests, those values become more common.
Influence Drift – Many processes that change the world produce random drift in agent influence.
Internal Drift – Some creatures, e.g., humans, have values that drift internally in complex ways.
Culture Drift – Some creatures, e.g., humans, have values that change together in complex ways.
Context – Many of the above processes depend on other factors, such as technology, wealth, a stable sun, etc.

For many of the above processes, rates of change are roughly proportional to overall social rates of change. As these rates of change have increased over time, we should expect faster future change. Thus you should expect values to drift faster in the future than they did in the past, leading faster to wrong values. Also, people are living longer now than they did in the past. So while past people didn’t live long enough to see changes big enough to greatly bother them, future people may live to see much more change.

Most increases in the rates of change have been concentrated in a few sudden large jumps (associated with the culture, farmer, and industry transitions). As a result, you should expect that rates of change may soon increase greatly. Value drift may continue at past rates until it suddenly goes much faster.

Perhaps you discount the future rapidly, or perhaps the value you place on other values falls slowly with value distance. In these cases value drift may not disturb you much. Otherwise, the situation described above may seem pretty dire. Even if previous generations had to accept the near inevitability of value drift, you might not accept it now. You may be willing to reach for difficult and dangerous changes that could remake the whole situation. Such as perhaps a world government. Personally I see that move as too hard and dangerous for now, but I could understand if you disagree.

The people today who seem most concerned about value drift also seem to be especially concerned about humans or ems being replaced by other forms of artificial intelligence. Many such people are also concerned about a “foom” scenario of a large and sudden influence drift: one initially small computer system suddenly becomes able to grow far faster than the rest of the world put together, allowing it to quickly take over the world.

To me, foom seems unlikely: it posits an innovation that is extremely lumpy compared to historical experience, and in addition posits an unusually high difficulty of copying or complementing this innovation. Historically, innovation value has been distributed with a long thin tail: most realized value comes from many small innovations, but we sometimes see lumpier innovations. (Alpha Zero seems only weak evidence on the distribution of AI lumpiness.) The past history of growth rate increases suggests that within a few centuries we may see something, perhaps a very lumpy innovation, that causes a growth rate jump comparable in size to the largest jumps we’ve ever seen, such as at the origins of life, culture, farming, and industry. However, as over history the ease of copying and complementing such innovations has been increasing, it seems unlikely that copying and complementing will suddenly get much harder.

While foom seems unlikely, it does seem likely that within a few centuries we will develop machines that can outcompete biological humans for most all jobs. (Such machines might also outcompete ems for jobs, though that outcome is much less clear.) The ability to make such machines seems by itself sufficient to cause a growth rate increase comparable to the other largest historical jumps. Thus the next big jump in growth rates need not be associated with a very lumpy innovation. And in the most natural such scenarios, copying and complementing remain relatively easy.

However, while I expect machines that outcompete humans for jobs, I don’t see how that greatly increases the problem of value drift. Human cultural plasticity already ensures that humans are capable of expressing a very wide range of values. I see no obvious limits there. Genetic engineering will allow more changes to humans. Ems inherit human plasticity, and may add even more via direct brain modifications.

In principle, non-em-based artificial intelligence is capable of expressing the entire space of possible values. But in practice, in the shorter run, such AIs will take on social roles near humans, and roles that humans once occupied. This should force AIs to express pretty human-like values. As Steven Pinker says:

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.

If Pinker is right, the main AI risk mediated by AI values comes from AI value drift that happens after humans (or ems) no longer exercise such detailed frequent oversight.

It may be possible to create competitive AIs with protected values, i.e., so that parts where values are coded are small, modular, redundantly stored, and insulated from changes to the rest of the system. If so, such AIs may suffer much less from internal drift and cultural drift. Even so, the values of AIs with protected values should still drift due to influence drift and competition.

Thus I don’t see why people concerned with value drift should be especially focused on AI. Yes, AI may accompany faster change, and faster change can make value drift worse for people with intermediate discount rates. (Though it seems to me that altruistic discount rates should scale with actual rates of change, not with arbitrary external clocks.)

Yes, AI offers more prospects for protected values, and perhaps also for creating a world/universe government capable of preventing influence drift and competition. But in these cases if you are concerned about value drift, your real concerns are about rates of change and world government, not AI per se. Even the foom scenario just temporarily increases the rate of influence drift.

Your real problem is that you want long term stability in a universe that more naturally changes. Someday we may be able to coordinate to overrule the universe on this. But I doubt we are close enough to even consider that today. To quote a famous prayer:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

For now value drift seems one of those possibly lamentable facts of life that we cannot change.

Small Change Good, Big Change Bad?

Recently I posted on how many seek spiritual insight via cutting the tendency of their minds to wander, yet some like Scott Alexander fear ems with a reduced tendency to mind wandering because they’d have less moral value. On twitter Scott clarified that he doesn’t mind modest cuts in mind wandering; what he fears is extreme cuts. And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

On nature preserves, some fear eventually losing all of wild nature, but when arguing for any particular development others say we need new things and we still have plenty of nature. On military spending, some say the world is peaceful and we have many things we’d rather spend money on, while others say that societies who do not remain militarily vigilant are eventually conquered. On increasing inequality some say that high enough inequality must eventually result in inadequate human capital investments and destructive revolutions, while others say there’s little prospect of revolution now and inequality has historically only fallen much in big disasters such as famine, war, and state collapse. On value drift, some say it seems right to let each new generation choose its values, while others say a random walk in values across generations must eventually drift very far from current values.

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.

Second, in our simplest models of the world, good local choices also produce good long term outcomes. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

Third, our ability to foresee the future rapidly declines with time. The more other things that may happen between today and some future date, the harder it is to foresee what may happen at that future date. We should be increasingly careful about the inferences we draw about longer terms.

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.
