Future Influence Is Hard

Imagine that one thousand years ago you had a rough idea of the most likely overall future trajectory of civilization. For example, that an industrial revolution was likely in the next few millennia. Even with that unusual knowledge, you would find it quite hard to take concrete actions back then to substantially change the course of future civilization. You might be able to mildly improve the chances for your family, or perhaps your nation. And even then most of your levers of influence would focus on events in the next few years or decades, not millennia in the future.

One thousand years ago wasn’t unusual in this regard. At most any place-time in history it would have been quite hard to substantially influence the future of civilization, and most of your influence levers would focus on events in the next few decades.

Today, political activists often try to motivate voters by claiming that the current election is the most important one in a generation. They say this far more often than once per generation. But they’ve got nothing on futurists, who often say individuals today can have substantial influence over the entire future of the universe. From a recent Singularity Weblog podcast  where Socrates interviews Max Tegmark:

Tegmark: I don’t think there’s anything inevitable about the human future. We are in a very unstable situation where it’s quite clear that it could go in several different directions. The greatest risk of all we face with AI and the future of technology is complacency, which comes from people saying things are inevitable. What’s the one greatest technique of psychological warfare? It’s to convince people “it’s inevitable; you’re screwed.” … I want to do exactly the opposite with my book, I want to make people feel empowered, and realize that this is a unique moment after 13.8 billion years of history, when we, people who are alive on this planet now, can actually make a spectacular difference for the future of life, not just on this planet, but throughout much of the cosmos. And not just for the next election cycle, but for billions of years. And the greatest risk is that people start believing that something is inevitable, and just don’t put in their best effort. There’s no better way to fail than to convince yourself that it doesn’t matter what you do.

Socrates: I actually also had a debate with Robin Hanson on my show because in his book The Age of Em he started by saying basically this is how it’s going to be, more or less. And I told him, I told him I totally disagree with you because it could be a lot worse or it could be a lot better. And it all depends on what we are going to do right now. But you are kind of saying this is how things are going to be. And he’s like yeah because you extrapolate. …

Tegmark: That’s another great example. I mean Robin Hanson is a very creative guy and it’s a very thought-provoking book, I even wrote a blurb for it. But we can’t just say that’s how it’s going to be, because he even says himself that the Age of Em will only last for two years from the outside perspective. And our universe is going to be around for billions of years more. So surely we should put effort into making sure the rest becomes as great as possible too, shouldn’t we?

Socrates: Yes, agreed. (44:25-47:10)

Either individuals have always been able to have a big influence on the future universe, contrary to my claims above, or today is quite unusual, in which case we need concrete arguments for why today is so different.

Yes, it is possible to underestimate our influence, but surely it is also possible to overestimate that.  I see no nefarious psychological warfare agency working to induce underestimation, but instead see great overestimation due to value signaling.

Most people don’t think much about the long term future, but when they do far more of them see the future as hard to foresee than hard to influence. Most groups who discuss the long term future focus on which kinds of overall outcomes would most achieve their personal values; they pay far less attention to how concretely one might induce such outcomes. This lets people use future talk as a way to affirm their values, but it leads them to overestimate their influence.

My predictions in Age of Em are given the key assumption of ems as the first machines able to replace most all human labor. I don’t say influence is impossible, but instead say individual influence is most likely quite minor, and so one should focus on choosing small variations on the most likely scenarios one can identify.

We are also quite unlikely to have long term influence that isn’t mediated by intervening events. If you can’t think of a way to influence an Age of Em, if that happens, you are even less likely to influence ages that would follow it.


Be A Dad

A key turning point in my life was when my wife declared that her biological clock said she wanted kids now. I hadn’t been thinking of kids, and the prospect didn’t inspire much passion in me; my life had focused on other things. But I wanted to please my wife, and I didn’t much object, so we had kids. I now see that as one of the best choices I’ve made in my life. I thank my wife for pushing me to it.

Stats suggest that while parenting doesn’t make people happier, it does give them more meaning. And most thoughtful traditions say to focus more on meaning than happiness. Meaning is how you evaluate your whole life, while happiness is how you feel about now. And I agree: happiness is overrated.

Parenting does take time. (Though, as Bryan Caplan emphasized in a book, less than most think.) And many people I know plan to have an enormous positive influence on the universe, far more than is plausible via a few children. But I think they are mostly kidding themselves. They fear their future selves being less ambitious and altruistic, but it’s just as plausible that they will instead become more realistic.

Also, many people with grand plans struggle to motivate themselves to follow their plans. They neglect the motivational power of meaning. Dads are paid more, other things equal, and I doubt that’s a bias; dads are better motivated, and that matters. Your life is long, most big world problems will still be there in a decade or two, and following the usual human trajectory you should expect to have the most wisdom and influence around age 40 or 50. Having kids helps you gain both.

And in addition, you’ll do a big great thing for your kids; you’ll let them exist. It isn’t that hard to ensure a reasonably happy and meaningful childhood. That’s a far surer gain than your grand urgent plans to remake the universe.

Having kids is actually the best-proven way to have a long term influence. So much so that biological evolution has focused almost entirely on it. By comparison, human cultural mechanisms to influence the future seem tentative, unreliable, and unproven, except when closely tied to having and raising kids. Let your portfolio of future influence attempts include both low-risk and high-risk approaches.

Added 2p: Of course our biases help us make our meanings, in parenting as elsewhere:

Belief in myths idealizing parenthood helps parents cope with the dissonance aroused by the high financial cost of raising children. (more; HT Eric Barker)


Men Are Animals

I spent much of the (so far) middle of my life pursuing big/deep questions. That’s what I focused on in each subject I learned, that’s what I liked most about the first topic (physics) I specialized in, and that’s what drew me to each new subject as I switched.

It was good that I was able to stop jumping to new subjects, so that I could actually accomplish things. However, though I long avoided studying biology (too much disorganized detail!), I recently found an excuse to focus there, and I’ve been enjoying my old pleasure via the deep questions of biology. For example, I’ve long heard talk on the puzzle of why sex exists, and have heard many plausible arguments on why sex (mostly) won over asexual reproduction. But until a few days ago I hadn’t noticed a harder puzzle: why do most animals, but not most plants, have males?

Most plants are hermaphrodites; each organism has both male and female genitalia. Plants thus gain the many advantages of recombining genes from multiple parents, while also ensuring that each organism contributes fully to reproducing the species. Most animals, in contrast, reproduce via pairing one male and one female, with females investing more than males in children. In such species, males and females differ in many ways that can be understood as resulting from these differing investments in children.

Many of these differences seem to be costly for the species. For example, not only do males spend most of their resources competing with each other for access to females instead of helping with children, their competition often directly harms females and children. In fact, species where males differ more from females go extinct more quickly:

When comparing species, it emerged that those in which males were very different from females had a poorer prognosis for continued existence. The authors’ models predict a tenfold increase in extinction risk per unit time when species in which males are larger than females, with large differences in shape between the sexes, are compared with species in which the males are smaller than the females, with small differences in shape between the sexes. (more)

And yet males exist, at least in most animal species. Why? More monogamous species, like humans, where fathers invest more in kids, are less of a puzzle, but they remain a puzzle as long as males invest less. As plants show that an all-hermaphrodite equilibrium can robustly last long for complex species, there must be some big positive advantage to having males in animal, but not plant, species.

After reading a dozen or so papers, I can report: we just have no idea what that advantage is. One person suggests males are “an ‘experimental’ part of the species that allows the species to expand their ecological niche, and to have alternative configurations.” But this idea doesn’t seem to have been developed very far, and why wouldn’t this work just as well for plants?

The robust existence of animal males strongly suggests that we men have an important but-as-yet-unknown mission. We offer a gain that more than pays for our many costs, at least in most animal species. And yet our costs seem much clearer than our gains. We men might feel a bit better about our place in the world if we could better understand our positive contributions. And yet very few people study this deep question, even as vast numbers remain very engaged discussing human gender politics. That seems a shame to me.

Added 9:30p: Plants do compete for and select mates. It isn’t obvious that mobility allows far more such competition.

Added 4a: You might have seen evolutionary competition as overly destructive, but existing because more cooperation requires more coordination, which is hard. But the existence of males shows that, at least for animals, evolution saw “red in tooth and claw” competition between hermaphrodites as insufficient. So evolution created and maintains an even stronger kind of competition, between males who need invest less in children and can thus invest even more in competition.


Age of Em Paperback

Today is the official U.S. release date for the paperback version of my first book The Age of Em: Work, Love, and Life when Robots Rule the Earth. (U.K. version came out a month ago.) Here is the new preface:

I picked this book topic so it could draw me in, and I would finish. And that worked: I developed an obsession that lasted for years. But once I delivered the “final” version to my publisher on its assigned date, I found that my obsession continued. So I collected a long file of notes on possible additions. And when the time came that a paperback edition was possible, I grabbed my chance. As with the hardback edition, I had many ideas for changes that might make my dense semi-encyclopedia easier for readers to enjoy. But my core obsession again won out: to show that detailed analysis of future scenarios is possible, by showing just how many reasonable conclusions one can draw about this scenario.

Also, as this book did better than I had a right to expect, I wondered: will this be my best book ever? If so, why not make it the best it can be? The result is the book you now hold. It has over 42% more citations, and 18% more words, but it is only a bit easier to read. And now I must wonder: can my obsession stop now, pretty please?

Many are disappointed that I do not more directly declare if I love or hate the em world. But I fear that such a declaration gives an excuse to dismiss all this; critics could say I bias my analysis in order to get my desired value conclusions. I’ve given over 100 talks on this book, and never once has my audience failed to engage value issues. I remain confident that such issues will not be neglected, even if I remain quiet.

These are the only new sections in the paperback: Anthropomorphize, Motivation, Slavery, Foom, After Ems. (I previewed two of them here & here.)  I’ll make these two claims for my book:

  1. There’s at least a 5% chance that my analysis will usefully inform the real future, i.e., that something like brain emulations are actually the first kind of human-level machine intelligence, and my analysis is mostly right on what happens then. If it is worth having twenty books on the future, it is worth having a book with a good analysis of a 5% scenario.
  2. I know of no other analysis of a substantially-different-from-today future scenario that is remotely as thorough as Age of Em. I like to quip, “Age of Em is like science fiction, except there is no plot, no characters, and it all makes sense.” If you often enjoy science fiction but are frustrated that it rarely makes sense on closer examination, then you want more books like Age of Em. The success or not of Age of Em may influence how many future authors try to write such books.

Sloppy Interior Vs. Careful Border Travel

Imagine that you are floating weightless in space, and holding on to one corner of a large cube-shaped structure. This cube has only corners and struts between adjacent corners; the interior and faces are empty. Now imagine that you want to travel to the opposite corner of this cube. The safe thing to do would be to pull yourself along a strut to an adjacent corner, always keeping at least one hand on a strut, and then repeat that process two more times. If you are in a hurry you might be tempted to just launch yourself through the middle of the cube. But if you don’t get the direction right, you risk sailing past the opposite corner on into open space.

Now let’s make the problem harder. You are still weightless holding on to a cube of struts, but now you live in 1000 dimensional space, in a fog, and subject to random winds. Each corner connects to 1000 struts. Now it would take 1000 single-strut moves to reach the opposite corner, while the direct distance across is only 32 times the length of one strut. You have only a limited ability to tell if you are near a corner or a strut, and now there are over 10^300 corners, which look a lot alike. In this case you should be a lot more reluctant to leave sight of your nearest strut, or to risk forgetting your current orientation. Slow and steady wins this race.
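The arithmetic behind these numbers is easy to check (a quick sketch; the strut length is normalized to 1):

```python
import math

n, strut = 1000, 1.0              # dimensions, edge length

corners = 2 ** n                  # each of the n coordinates is 0 or 1
diagonal = math.sqrt(n) * strut   # straight-line distance between opposite corners
edge_path = n * strut             # strut-by-strut path: one move per dimension

print(len(str(corners)) - 1)      # → 301: corners ≈ 10^301, i.e. over 10^300
print(round(diagonal, 1))         # → 31.6: about 32 strut lengths across
```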

If you were part of a group of dozens of people tethered together, it might make more sense to jump across the middle, at least in the case of the ordinary three dimensional cube. If any one of you grabs a corner or strut, they could pull the rest of you in to there. However, this strategy looks a lot more risky in a thousand dimensions with fog and wind, where there are so many more ways to go wrong. Even more so in a million dimensions.

Let me offer these problems as metaphors for the choice between careful and sloppy thinking. In general, you start with what you know now, and seek to learn more, in part to help you make key decisions. You have some degree of confidence in every relevant claim, and these can combine to specify a vector in a high dimensional cube of possible beliefs. Your key choice: how to move within this belief cube.

In a “sloppy interior” approach, you throw together weak tentative beliefs on everything relevant, using any basis available, and then try to crudely adjust them via considerations of consistency, evidence, elegance, rhetoric, and social conformity. You think intuitively, on your feet, and respond to social pressures. That is, a big group of you throw yourselves toward the middle of the cube, and pull on the tethers when you think that could help others get to a strut or corner you see. Sometimes a big group splits into two main groups who have a tug-o-war contest along one main tether axis, because that’s what humans do.

In a “careful border” approach, you try to move methodically along, or at least within sight of, struts. You make sure to carefully identify enough struts at your current corner to check your orientation and learn which strut to take next. Sometimes you “cut a corner”, jumping more than one corner at a time, but only via carefully chosen and controlled moves. It is great when you can move with a large group who work together, as individuals can specialize in particular strut directions, etc. But as there are more different paths to reach the same destination on the border, groups there more naturally split up. If your group seems inclined toward overly risky jumps, you can split off and move more methodically along the struts. Conversely, you might try to cut a corner to jump ahead when others nearby seem excessively careful.

Today public conversations tend more to take a sloppy interior approach, while expert conversations tend more to take a careful border approach. Academics often claim to believe nothing unless it has been demonstrated to the rigorous standards of their discipline, and they are fine with splitting into differing non-interacting groups that take different paths. Outsiders often see academics as moving excessively slowly; surely more corners could be cut with little risk. Public conversations, in contrast, are centered in much larger groups of socially-focused discussants who use more emotional, elegant, and less precise and expert language and reasoning tools.

Yes, this metaphor isn’t exactly right; for example, there is a sense in which we start more naturally from the middle of a belief space. But I think it gets some important things right. It can feel more emotionally “relevant” to jump to where everyone else is talking, pick a position like others do there, use the kind of arguments and language they use, and then pull on your side of the nearest tug-o-war rope. That way you are “making a difference.” People who instead step slowly and carefully, making foundations they have sufficient confidence to build on, may seem to others as “lost” and “out of touch”, too “chicken” to engage the important issues.

And yes, in the short term sloppy interior fights have the most influence on politics, culture, and mob rule enforcement. But if you want to play the long game, careful border work is where most of the action is. In the long run, most of what we know results from many small careful moves of relatively high confidence. Yes, academics are often overly careful, as most are more eager to seem impressive than useful. And there are many kinds of non-academic experts. Even so, real progress is mostly in collecting relevant things one can say with high enough confidence, and slowly connecting them together into reliable structures that can reach high, not only into political relevance, but eventually into the stars of significance.


How Does Evolution Escape Local Maxima?

I’ve spent most of my intellectual life as a theorist, but alas it has been a while since I’ve taken the time to learn a new powerful math-based theory. But in the last few days I’ve enjoyed studying Andreas Wagner’s theories of evolutionary innovation and robustness. While Wagner has some well-publicized and reviewed books, such as Arrival of the Fittest (2014) and Robustness and Evolvability in Living Systems (2005), the best description of his key results seems to be found in a largely ignored 2011 book, The Origins of Evolutionary Innovations, which is based on many academic journal articles.

In one standard conception, evolution does hill-climbing within a network of genotypes (e.g., DNA sequence), rising according to a “fitness” value associated with the phenotype (e.g., tooth length) that results from each genotype. In this conception, a big problem is local maxima: hill-climbing stops once all the neighbors of a genotype have a lower fitness value. There isn’t a way to get to a higher peak if one first must travel through a lower valley to reach it. Maybe random noise could let the process slip through a narrow shallow valley, but what about valleys that are wide and deep? (This is a familiar problem in computer-based optimization search.)
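The local-maximum problem is easy to see in a toy one-dimensional landscape (a made-up fitness function, not any model from Wagner’s book):

```python
import random

def fitness(x):
    # toy landscape: local peak of 3 at x=2, a valley, then a global peak of 8 at x=9
    return 3 - abs(x - 2) if x < 6 else 8 - abs(x - 9)

def hill_climb(x, steps=200):
    for _ in range(steps):
        neighbor = x + random.choice([-1, 1])
        if fitness(neighbor) > fitness(x):  # accept only strictly uphill moves
            x = neighbor
    return x

random.seed(0)
print(hill_climb(0))  # → 2: stuck on the local peak, never crosses the valley to x=9
```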

Wagner’s core model looks at the relation between genotypes and phenotypes for metabolism in an organism like E. coli. In this context, Wagner defines a genotype as the set of chemical reactions which the enzymes of an organism can catalyze, and he defines a phenotype as the set of carbon-source molecules from which an organism could create all the other molecules it needs, assuming that this source was its only place to get carbon (but allowing many sources of other needed molecules). Wagner defines the neighbors of a genotype as those that differ by just one reaction.

There are of course far more types of reactions between molecules than there are types of molecules. So using Wagner’s definitions, the set of genotypes is vastly larger than the set of phenotypes. Thus a great many genotypes result in exactly the same phenotype, and in fact each genotype has many neighboring genotypes with that same exact phenotype. And if we lump all the connected genotypes that have the same phenotype together into a unit (a unit Wagner calls a “genotype network”), and then look at the network of one-neighbor connections between such units, we will find that this network is highly connected.
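A back-of-the-envelope count shows why the genotype set dwarfs the phenotype set under these definitions (the reaction and carbon-source counts below are made-up round numbers, not E. coli’s actual figures):

```python
reactions = 5000         # candidate reactions an enzyme set might catalyze
carbon_sources = 100     # candidate sole-carbon-source molecules

genotypes = 2 ** reactions        # a genotype is any subset of reactions
phenotypes = 2 ** carbon_sources  # a phenotype is any subset of viable sources

# pigeonhole: on average this many genotypes must share each phenotype
redundancy = genotypes // phenotypes  # exactly 2 ** 4900
print(len(str(redundancy)))           # number of digits in this huge redundancy
```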

That is, if one presumes that evolution (using a large population of variants) finds it easy to make “neutral” moves between genotypes with exactly the same phenotype, and hence the same fitness, then large networks connecting genotypes with the same phenotype imply that it only takes a few non-neutral moves between neighbors to get to most other phenotypes. There are no wide deep valleys to cross. Evolution can search large spaces of big possible changes, and doesn’t have a problem finding innovations with big differences.

Wagner argues that there are also far more genotypes than phenotypes for two other cases: the evolution of DNA sequences that set the regulatory interactions among regulatory proteins, and for the sequences of ribonucleotides or amino acids that determine the structure and chemical activity of molecules.

In addition, Wagner also shows the same applies to a computer logic gate toy problem. In this problem, there are four input lines, four output lines, and sixteen binary logic gates between them. The genotype specifies the type of each gate and the set of wires connecting all these things, while the phenotype is the mapping from inputs to outputs. Again, there are far more genotypes than phenotypes. However, the observant reader will notice that all mappings between four inputs and four outputs can be produced using only four internal gates; sixteen gates is a factor of four more than needed. But in the case of four gates the set of genotypes is not big enough compared to the set of phenotypes to allow easy evolution. For easy innovation, sixteen gates is enough, but four gates is not.
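One can see the genotype-vs-phenotype gap by brute-force enumeration of a scaled-down version of this toy problem (a sketch with 2 inputs, 1 output, and a chain of NAND gates, not Wagner’s exact 4-input, 4-output, 16-gate setup):

```python
from itertools import product

def count(n_gates):
    # genotype: how each NAND gate wires its two inputs to earlier signals
    # signals: x0, x1, then gate outputs in order; circuit output = last gate
    genotypes, phenotypes = 0, set()
    choices = [range(2 + k) for k in range(n_gates) for _ in (0, 1)]
    for wiring in product(*choices):
        genotypes += 1
        table = []
        for x0, x1 in product([0, 1], repeat=2):
            sig = [x0, x1]
            for k in range(n_gates):
                a, b = wiring[2 * k], wiring[2 * k + 1]
                sig.append(1 - (sig[a] & sig[b]))  # NAND of two earlier signals
            table.append(sig[-1])
        phenotypes.add(tuple(table))  # phenotype = the circuit's truth table
    return genotypes, len(phenotypes)

for n in (2, 3, 4):
    g, p = count(n)
    print(n, g, p)  # genotypes grow multiplicatively; phenotypes are capped at 16
```

Each added gate multiplies the genotype count, while the phenotype count can never exceed the 16 possible two-input boolean functions, so the redundancy that enables neutral moves grows quickly with circuit size.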

If we used a larger space of genotypes within which the number of logic gates could vary, and if the fitness function had a penalty for using more logical gates, then we’d have a problem. No matter where the genotype started, evolution might quickly cut the number of gates down to the minimum needed to implement its current input-output mapping, and then after that too few neutral changes would be possible to make evolution easy. The same problem seems possible in Wagner’s core model of metabolism; if the fitness function has a penalty for the number of enzymes used, evolution might throw away enzymes not needed to produce the current phenotype, after which too few neutral changes might be possible to allow easy evolution.

Wagner seems to suggest a solution: larger, more complex systems are needed for robustness to varying environments:

Based on our current knowledge, the metabolic reaction networks of E. coli and yeast comprise more than 900 chemical reactions. However in a glucose minimal environment, more than 60 percent of these reactions are silent. … Overall, in E. coli, the fraction of reactions that would not reduce bio-mass growth when eliminated exceeds 70 percent. This is … a general property of viable networks that have similar complexity. … As a metabolic generalist, the E. coli metabolic network can synthesize its biomass from more than 80 alternative carbon sources. … All these observations indicate that the large metabolic networks of free-living organisms are much more complex than necessary to sustain life in any one environment. Their complexity arises from their viability in multiple environments. A consequence is that these networks appear highly robust to reaction removal in any one environment, where every metabolic network has multiple natural neighbors. This neutrality, however, is conditional on the environment. (pp.153-154)

I’m not sure this solves the problem, however. In the logic gate toy problem, even if phenotype fitness is given by a weighted average over environments, we’ll still have the same temptation to increase fitness by dropping gates not needed to implement the current best bit mapping. In the case of enzymes for metabolism, fitness given by a weighted average of environments may also promote an insufficient complexity of enzymes. It seems we need a model that can represent the value of holding gate or enzyme complexity in reserve against the possibility of future changes.

I worry that this more realistic model, whatever it may be, may contain a much larger set of phenotypes, so that the set of genotypes is no longer much larger, and so no longer guarantees many neutral changes to genotypes. Perhaps a “near neutrality” will apply, so that many genotype neighbors have only small fitness differences. But it may require a much more complex analysis to show that outcome; mere counting may not be enough. I still find it hard to believe that for realistic organisms, the set of possible phenotypes is much less than the set of genotypes. Though perhaps I could believe that many pairs of genotypes produce the same distribution over phenotypes, as environments vary.

Added 10am: Another way to say this: somehow the parameter that sets how much complexity to keep around has to change a lot slower than do most other parameters encoded in the genome. In this way it could notice the long term evolvability benefits of complexity.


Two Types of Future Filters

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth. Like a big asteroid or nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.
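The claim that the remaining window should roughly match the typical spacing between past hard steps can be illustrated with a toy Monte Carlo (made-up units: a window of length 1, three “hard” try-try steps with unconditional mean time 3 each, keeping only runs where all steps fit in the window):

```python
import random

random.seed(0)
W, k, mean = 1.0, 3, 3.0  # window length, number of hard steps, mean step time

leftovers = []
while len(leftovers) < 500:
    t, ok = 0.0, True
    for _ in range(k):
        t += random.expovariate(1.0 / mean)  # waiting time for one hard step
        if t > W:
            ok = False
            break
    if ok:                        # condition on all k steps fitting in the window
        leftovers.append(W - t)   # time remaining after the last step

avg = sum(leftovers) / len(leftovers)
print(round(avg, 2))  # close to W/(k+1) = 0.25: leftover acts like one more gap
```

Conditioned on success, the leftover window behaves like one more inter-step gap, which is the logic behind expecting the ~1.5 billion year remaining window to match the typical spacing of past filter steps.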

Thus even a disaster that kills most all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, Eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could in principle go many ways, but which lock in forever once they pick a particular way. But it still seems hard to imagine such steps in the last half billion years.

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such a disaster in part on our failing to empower a world government to prevent them.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as does advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.


More Than Death, Fear Decay

Most known “systems” decay, rot, age, and die. We usually focus on the death part, but the more fundamental problem is decay (a.k.a. rotting, aging). Death is almost inevitable, as immortality is extremely difficult to achieve. Systems that don’t decay can still die; we sometimes see systems where the chance of death stays constant over time. But for most complex systems, the chance of death rises with time, due to decay.

Many simple physical systems, like chairs, decay because the materials of their parts decay. Such systems can often be rejuvenated by replacing those materials. More generally, simple modular systems can be rejuvenated by replacing the modular parts that decay. For example, it is possible to spend enough to maintain most cars and buildings indefinitely in a nearly original condition, though we rarely see this as worth the bother.

Complex adaptive systems (CAS), such as firms, have many parts in complex relations, relations that change in an attempt to adapt to changing conditions. When a CAS changes its design and structure to adapt, however, this rarely results in modular sub-designs that can be swapped out. Alas, the designs of most known CAS decay as they adapt. In biological organisms this is called “aging”, in software it is called “rot”, and in product design it is called the “innovator’s dilemma”. Human brains change from having “fluid” to “crystallized” intelligence, and machine learning systems trained in one domain usually find it harder to learn quite different domains. We also see aging in production plans, firms, empires, and legal systems. I don’t know of data on whether things like cities, nations, professions, disciplines, languages, sports, or art genres age. But it isn’t obvious that they don’t also decay.

It is not just that it is easier to create and train new CAS, relative to rejuvenating old ones. It seems more that we just don’t know how to prevent rot at any remotely reasonable cost. In software, designers often try to “refactor” their systems to slow the process of aging. And sometimes such designers report that they’ve completely halted aging. But these exceptions are mostly in systems that are small and simple, with stable environments, or with crazy amounts of redesign effort.

However, I think we can see at least one clear exception to this pattern of rotting CAS: some generalist species. If the continually changing environment of Earth caused all species to age at similar rates, then over the history of life on Earth we would see a consistent trend toward a weaker ability of life to adapt to changing conditions. Eventually life would lose its ability to sufficiently adapt, and life would die out. If some kinds of life could survive in a few very slowly changing garden environments, then eventually all life would descend from the stable species that waited unchanging in those few gardens. The longer it had been since a species had descended from a stable garden species, the faster that species would die out.

But that isn’t what we see. Instead, while species that specialize to particular environments do seem to go extinct more easily, generalist species seem to maintain their ability to adapt across eons, even after making a great many adaptations. Somehow, the designs of generalist species do not seem to rot, even though typical organisms within that species do rot. How do they do that?

It is possible that biological evolution has discovered some powerful design principles of which we humans are still ignorant. If so, then eventually we may learn how to cheaply make CAS that don’t rot. But in this case, why doesn’t evolution use those anti-rot design principles to create individual organisms that don’t decay or age? Evolution seems to judge it much more cost effective to make individual organisms that rot. A more likely hypothesis is that there is no cheap way to prevent rot; evolution has just continually paid a large cost to prevent rot. Perhaps early on, some species didn’t pay this cost, and won for a while. But eventually they died from rot, leaving only non-rotting species to inherit the Earth. It seems there must be some level in a system that doesn’t rot, if it is to last over the eons, and selection has ensured that the life we now see has such a level.

If valid, this perspective suggests a few implications for the future of life and civilization. First, we should seriously worry about which aspects of our modern civilization system are rotting. Human culture has lasted a million years, but many parts of our modern world are far younger. If the first, easiest version of a system that we can find to do something is typically a rotting system, and if it takes a lot more work to find a non-rotting version, should we presume that most of the new systems we have are rotting versions? Farming-era empires consistently rotted; how sure can we be that our world-wide industry-era empire isn’t similarly rotting today? We may be accumulating a technical debt that will be expensive to repay. Law and regulation seem to be rotting; should we try to induce a big refactoring there? Should we try to create and preserve contrarian subcultures or systems that are less likely to crash along with the dominant culture and system?

Second, we should realize that it may be harder than we thought to switch to a non-biological future. We humans are now quite tied to the biosphere, and would quickly die if biology were to die. But we have been slowly building systems that are less closely tied to biology. We have been digging up materials in mines, collecting energy directly from atoms and the Sun, and making things in factories. And we’ve started to imagine a future where the software in our brains is copied into factory-made hardware, i.e., ems, joined there by artificial software. At which point our descendants might no longer depend on biological systems. But replacing biological systems with our typically rotting artificial systems may end badly. And making artificial systems that don’t rot may be a lot more expensive and time-consuming than we’ve anticipated.

Some imagine that we will soon discover a simple powerful general learning algorithm, which will enable us to make a superintelligence, a super-smart hyper-consistent eternal mind with no internal conflicts and arbitrary abilities to indefinitely improve itself, make commitments, and preserve its values. This mind would then rule the universe forevermore, at least until it met its alien equivalent. I expect that these visions have not sufficiently considered system rot, among other issues.

In my first book I guessed that during the age of em, individual ems would become fragile over time, and after a few subjective centuries they’d need to be replaced by copies of fresh scans of young humans. I also guessed that eventually it would become possible to substantially redesign brains, and that the arrival of this ability might herald the start of the next age after the age of em. If this requires figuring out how to make non-rotting versions of these new systems, the age of em might last even longer than one would otherwise guess.


Why Not Thought Crime?

I was alarmed to find a quotation supporting child rapists falsely attributed to me & going viral on Twitter. … messages shaming me for supporting child rapists. … I tweeted a clarification about the falsehood to no avail. (more)

Galileo’s Middle Finger is one American’s eye-opening story of life in the trenches of scientific controversy. … Dreger began to realize how some fellow progressive activists were employing lies and personal attacks to silence scientists whose data revealed uncomfortable truths about humans. In researching one such case, Dreger suddenly became the target of just these kinds of attacks. (more)

In 1837 Abraham Lincoln wrote about lynching and “the increasing disregard for law which pervades the country—the growing disposition to substitute the wild and furious passions in lieu of the sober judgment of courts, and the worse than savage mobs for the executive ministers of justice”. (more)

For a million years, humans lived under mob rule. We gossiped about rule violations, and then implemented any verdicts as a mob. Mob rule worked well enough in forager bands of population 20-50, but less well in farming era village areas of population 300-3000, and it works even worse today. Instead of a single unified conversation around a campfire, where everyone could be heard, larger mob conversations fragment into many separated smaller conversations. As the accused doesn’t have time or access to defend themselves in these many discussions, most in the mob only hear other voices. So mob rule comes down to whether most others are inclined to speak well or ill of the accused. And, alas, for an accused that many don’t like, mob members are often more eager to display personal outrage at anyone who might do what was accused, than they are to determine if the accused was actually guilty.

And so we developed law. When someone was accused of a violation, a legal authority authorized an open debate between the accused and a focal accuser. While such debates had many flaws, they had the great virtue of giving substantial and roughly equal time to an accuser and the accused. Where a mob might accept false accusations and false claims of innocence because they are not willing to listen to long detailed explanations, law listens more, and thus can eliminate many mistaken conclusions. Today, when an official prosecutor is assigned the task of convicting as many criminals as possible, the fact that this prosecutor declines to prosecute a particular accusation is often reasonably taken as exoneration.

However, we still use mob rule today for people accused of things that are widely socially disapproved, but not illegal. While the mob’s verdict is not enforced directly via law, punishments can still be severe, such as loss of jobs and friends, and even illicit violence. Which raises the obvious question: why not make mob-disapproved behaviors illegal, so that law can overcome the problem of error-prone fragmented mob conversations? If the official legal punishment were set to be comparable to what would have been the mob punishment, isn’t it a net win to use a more accurate process of determining guilt?

You might think that mobs shouldn’t be censuring so many things, but unless you are willing to more actively discourage such mobs, the real choice may be between mob and legal adjudication. Legal adjudication of an accusation does seem to cut the eagerness for mob rule on it, even if this doesn’t always eliminate mob activity. You might say that law has costs, and so should be reserved for big enough harms. But obviously mobs think these acts are big enough to bother to organize to censure them. The cost of making mobs seems at least comparable to the cost of using law. You might note that accusations are often hard to prove, but we make many things illegal that are hard to prove. If law can’t prove an accusation well enough to declare guilt, why trust an even more error-prone mob process to determine guilt? If you think that the errors of mobs declaring guilt are tolerable even when the law refuses to declare guilt, then you think law demands overly strong proofs. If so, we should change legal standards of proof to fix that.

It makes more sense to use mobs when society is honestly split into groups that differ on which acts should be approved or disapproved. For example, if one big group thinks people should be praised for promoting economic growth, while another similar sized group thinks people should be censured for promoting economic growth, then we may not want our legal system to take a side in this dispute. But mob rule today often censures people for things of which almost everyone disapproves. Like strong racism or sexism, or promoting rape. If over 99 percent of citizens disapprove of some behavior, maybe it is time to introduce official legal sanctions against that behavior.

At least twice in my life I’ve been subject to substantial mob rule censure. Fifteen years ago my DARPA-funded project was publicly accused by two senators of encouraging people to bet on the deaths of allies; the next morning the Secretary of Defense announced before Congress that my project was cancelled. In the last month, I was accused of promoting rape, and widely censured for that, receiving many hostile messages and threats, and having people and groups cut off public association with me.

In both cases I’m confident that law-like debate would have exonerated me. My DARPA-funded project, Policy Analysis Market, was going to have bets on geopolitical instability in the Mideast, not terror attacks. (Over 500 media articles mentioned the project in the following years, and articles that knew more liked it more.) And recently I asked why there is so little overlap between those who seek more income and sex redistribution. I didn’t advocate either one, and “redistribute” just means “change the distribution” (look it up in any dictionary); there are as many ways to change the distribution of sex without rape as there are to change the distribution of income without using guillotines as in the French revolution. (Eight years ago I also compared another bad thing to rape, to say how bad that other thing might be, not to say rape is good.)

I would personally have been better off had these things been thought crimes, as I could have then more effectively defended myself against false accusations. And I’ve learned of many other cases of mob rule punishing people based on false accusations. So I am led to wonder: why not thought crime? It might not be the best of all possible worlds, but couldn’t it be better than the mob rule we now use?

Added 7a: When mobs have mattered, the choice has often been between sufficiently suppressing them or creating laws that substitute for what they would have done. See some history.


Revival Prizes

Cryonics is the process of having your body frozen when current medicine gives up on you, and calls you “dead”, in the hope of being revived later using much better future medicine. Even though cryonics has been available for many decades, and often receives free international publicity, only ~3000 people have signed up as customers, and only ~400 people have been frozen. I’m one of those customers. While many customers hope to have their current physical body fixed and restored to youthful health, I’m mainly hoping to be revived as an em, which seems to me a vastly easier (if still very hard) task.

Imagine you plan to become a cryonics patient, and hope for an eventual successful revival. Along this path many important decisions will need to be made: level of financial investment into the whole process, timing and method of preservation, method and place of storage, strategies of financial asset investment, and final timing and method of revival and reintegration into society. Through most of this process you will not be available to make key decisions, though after success you might be able to give an evaluation of the choices that were made on your behalf. So you will need to delegate many of these choices to agents who make these choices for you. How can you set up your relation to such agents to give them the best possible incentives to make good choices?

Several US states allow you to deposit money into a “trust”, which then can grow indefinitely by reinvestment without paying taxes on investment gains, even after you are officially dead. The usual legal process is to assign an “administrator” to manage the trust. Usually, you write down your preferences in words, and then pay this agent a constant percentage of your current assets to follow your instructions. In theory they do what you wanted out of fear of being sued. Unfortunately, it’s hard to prove a violation, and few would have the incentive to bother. This gives your agent the incentive to minimize all spending except reinvestment of the assets, or to divert spending or investments to parties who pay them a kickback. Either way, not a great system.

Here’s an improvement. Pay the agent only some fraction of the money left over in the fund after you are successfully revived. A prize for revival. Then they never get anything until you get what you wanted. Of course this requires some legal way to determine that you have in fact been revived. Instead of, for example, being replaced with some crude simulation of you. This approach seems better than the previous one, but there’s still the problem that this prize incentive makes them want to wait too long. Why risk any chance of failure, and why pay a high cost for revival, if you can just wait longer to raise the chance of success and lower the cost? So this agent will get it done eventually, but may wait too long. And they might not revive you the way you wanted.

One simple fix is that, once you are revived, you rate the whole process on a 0 to 100 scale, and your agent only gets that percentage of the max possible prize. (Maybe also guarantee that they get some min fraction.) The rest of the prize can’t go to you, or your incentives are bad. So the rest of the prize would have to go to some specified charity, perhaps a pool of assets to help all other cryonics customers still not yet revived. Your agent will then try to make choices so that you will rate them highly after you are revived. You can expect them to choose a revival process where they give themselves advantages in convincing you that they did a good job. Perhaps even mind control. So steel yourself to be skeptical. They might also discreetly threaten to “accidentally” lose you if you don’t pay them the full prize. So beware of that.
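To make the payout rule above concrete, here is a minimal sketch of how the rating-based split might work. All the names and the 10% floor are my illustrative assumptions, not part of any actual trust contract:

```python
def agent_payout(rating: int, max_prize: float,
                 min_fraction: float = 0.1) -> tuple[float, float]:
    """Split the max possible prize between the agent and a charity.

    rating: the revived customer's 0-100 evaluation of the process.
    min_fraction: an assumed guaranteed floor, so the agent always
      gets something even after a harsh rating.
    Returns (agent_share, charity_share); the remainder goes to a
    specified charity, never back to the customer.
    """
    fraction = max(rating / 100, min_fraction)
    agent_share = fraction * max_prize
    return agent_share, max_prize - agent_share
```

For example, a rating of 80 on a $1000 max prize would give the agent $800 and the charity $200, while a rating of 0 would still leave the agent the assumed 10% floor.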

You might be able to do just a bit better by committing to a schedule by which the maximum prize your agent could win declines as a fraction of the total assets remaining after revival. Such a decline would encourage the agent to not wait too long to revive you. But if you don’t know the relevant rates of future change, how can you robustly define such a prize fraction decline? One robust measure available is the number of people who have been successfully revived so far. Your schedule of decline might not even start until at least one person has been revived, and then decline as some function of the number revived so far. Perhaps the function could be a simple power law. So you could specify how eager you are to be one of the first people revived.
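A power-law schedule like the one suggested above might be sketched as follows; the initial fraction and exponent here are arbitrary placeholder values you would choose to reflect your own eagerness to be revived early:

```python
def max_prize_fraction(n_revived: int,
                       initial_fraction: float = 0.5,
                       alpha: float = 0.3) -> float:
    """Max fraction of remaining trust assets the agent can win.

    The fraction does not decline until at least one person has been
    revived; after that it falls as a power law in the count of
    successful revivals so far. initial_fraction and alpha are
    assumed parameters, set by the customer up front.
    """
    if n_revived < 1:
        return initial_fraction
    return initial_fraction * n_revived ** (-alpha)
```

A larger alpha makes the prize fall faster with each revival, pushing the agent to revive you sooner; alpha near zero approximates a flat prize with no time pressure.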

So here’s my final proposal. You choose how much money to deposit in a trust, you write down your preferences as best you know them now, and you pick an agent who agrees to manage your trust, and make key storage and revival decisions. You agree to pay them some percent of current assets per year (preferably zero), and some max fraction of final remaining assets after revival to pay them as a prize. This max fraction follows some simple declining function of the number of people revived so far at that time. Perhaps a power law. And you have the discretion when revived to pay them less than this max value, with the remainder going to a specified charity. You initially choose the key parameters of this system to reflect your personal preferences, as best you can.

This is of course far from perfect. Problems remain, such as of kickbacks, theft, fake revival, and mind control. So there could be a place for a larger encompassing organization to watch out for and avoid such problems. And to publish stats on revivals and attempts so far. This larger organization could approve the basic range of reasonable options from which agents could choose at any one time, and have extra powers to monitor and overrule rogue agents. But it should mostly defer to the judgements of individual agents.

I can imagine a futarchy-based variation, where the “agent” is a pool of speculators who bet on shares of the final prize, conditional on making particular choices. This would cut the problem of random variation in the quality and even sanity of individual agents. But I can’t claim that futarchy is well enough tested now to make this a reasonable option if you are making these choices right now. However, I’d love to help a group do such testing, to see if it can become a viable option sooner.

Added 10:30a: It could also make sense to make your declining prize fraction function depend on the ratio of successful revivals so far to attempts that fail so badly as to make future revival seem impossible.
