Tag Archives: complexity

More Than Death, Fear Decay

Most known “systems” decay, rot, age, and die. We usually focus on the death part, but the more fundamental problem is decay (a.k.a. rotting, aging). Death is almost inevitable, as immortality is extremely difficult to achieve. Systems that don’t decay can still die; we sometimes see systems where the chance of death stays constant over time. But for most complex systems, the chance of death rises with time, due to decay.

Many simple physical systems, like chairs, decay because the materials of their parts decay. Such systems can often be rejuvenated by replacing those materials. More generally, simple modular systems can be rejuvenated by replacing the modular parts that decay. For example, it is possible to spend enough to maintain most cars and buildings indefinitely in a nearly original condition, though we rarely see this as worth the bother.

Complex adaptive systems (CAS), such as firms, have many parts in complex relations, relations that change in an attempt to adapt to changing conditions. When a CAS changes its design and structure to adapt, however, this rarely results in modular sub-designs that can be swapped out. Alas, the designs of most known CAS decay as they adapt. In biological organisms this is called “aging”, in software it is called “rot”, and in product design it is called the “innovator’s dilemma”. Human brains change from having “fluid” to “crystallized” intelligence, and machine learning systems trained in one domain usually find it harder to learn quite different domains. We also see aging in production plans, firms, empires, and legal systems. I don’t know of data on whether things like cities, nations, professions, disciplines, languages, sports, or art genres age. But it isn’t obvious that they don’t also decay.

It is not just that it is easier to create and train new CAS than to rejuvenate old ones. It seems more that we just don’t know how to prevent rot at any remotely reasonable cost. In software, designers often try to “refactor” their systems to slow the process of aging. And sometimes such designers report that they’ve completely halted aging. But these exceptions are mostly in systems that are small and simple, with stable environments, or with crazy amounts of redesign effort.

However, I think we can see at least one clear exception to this pattern of rotting CAS: some generalist species. If the continually changing environment of Earth caused all species to age at similar rates, then over the history of life on Earth we would see a consistent trend toward a weaker ability of life to adapt to changing conditions. Eventually life would lose its ability to sufficiently adapt, and life would die out. If some kinds of life could survive in a few very slowly changing garden environments, then eventually all life would descend from the stable species that waited unchanging in those few gardens. The longer it had been since a species had descended from a stable garden species, the faster that species would die out.

But that isn’t what we see. Instead, while species that specialize to particular environments do seem to go extinct more easily, generalist species seem to maintain their ability to adapt across eons, even after making a great many adaptations. Somehow, the designs of generalist species do not seem to rot, even though typical organisms within that species do rot. How do they do that?

It is possible that biological evolution has discovered some powerful design principles of which we humans are still ignorant. If so, then eventually we may learn how to cheaply make CAS that don’t rot. But in this case, why doesn’t evolution use those anti-rot design principles to create individual organisms that don’t decay or age? Evolution seems to judge it much more cost effective to make individual organisms that rot. A more likely hypothesis is that there is no cheap way to prevent rot; evolution has just continually paid a large cost to prevent rot. Perhaps early on, some species didn’t pay this cost, and won for a while. But eventually they died from rot, leaving only non-rotting species to inherit the Earth. It seems there must be some level in a system that doesn’t rot, if it is to last over the eons, and selection has ensured that the life we now see has such a level.

If valid, this perspective suggests a few implications for the future of life and civilization. First, we should seriously worry about which aspects of our modern civilization system are rotting. Human culture has lasted a million years, but many parts of our modern world are far younger. If the first and easiest version of a system that we can find to do something is typically a rotting version, and if it takes a lot more work to find a non-rotting version, should we presume that most of the new systems we have are rotting versions? Farming-era empires consistently rotted; how sure can we be that our world-wide industry-era empire isn’t similarly rotting today? We may be accumulating a technical debt that will be expensive to repay. Law and regulation seem to be rotting; should we try to induce a big refactoring there? Should we try to create and preserve contrarian subcultures or systems that are less likely to crash along with the dominant culture and system?

Second, we should realize that it may be harder than we thought to switch to a non-biological future. We humans are now quite tied to the biosphere, and would quickly die if biology were to die. But we have been slowly building systems that are less closely tied to biology. We have been digging up materials in mines, collecting energy directly from atoms and the Sun, and making things in factories. And we’ve started to imagine a future where the software in our brains is copied into factory-made hardware, i.e., ems, joined there by artificial software. At that point our descendants might no longer depend on biological systems. But replacing biological systems with our typically rotting artificial systems may end badly. And making artificial systems that don’t rot may be a lot more expensive and time-consuming than we’ve anticipated.

Some imagine that we will soon discover a simple powerful general learning algorithm, which will enable us to make a superintelligence, a super-smart hyper-consistent eternal mind with no internal conflicts and arbitrary abilities to indefinitely improve itself, make commitments, and preserve its values. This mind would then rule the universe forever more, at least until it met its alien equivalent. I expect that these visions have not sufficiently considered system rot, among other issues.

In my first book I guessed that during the age of em, individual ems would become fragile over time, and after a few subjective centuries they’d need to be replaced by copies of fresh scans of young humans. I also guessed that eventually it would become possible to substantially redesign brains, and that the arrival of this ability might herald the start of the next age after the age of em. If this requires figuring out how to make non-rotting versions of these new systems, the age of em might last even longer than one would otherwise guess.


Organic Prestige Doesn’t Scale

Some parts of our world, such as academia, rely heavily on prestige to allocate resources and effort; individuals have a lot of freedom to choose topics, and are mainly rewarded for seeming impressive to others. I’ve talked before about how some hope for a “Star Trek” future where most everything is done that way, and I’m now reading Walkaway, outlining a similar hope. I was skeptical:

In academia, many important and useful research problems are ignored because they are not good places to show off the usual kinds of impressiveness. Trying to manage a huge economy based only on prestige would vastly magnify that inefficiency. Someone is going to clean shit because that is their best route to prestige?! (more)

Here I want to elaborate on this critique, with the help of a simple model. But first let me start with an example. Imagine a simple farming community. People there spend a lot of time farming, but they must also cook and sew. In their free time they play soccer and sing folk songs. As a result of doing all these things, they tend to “organically” form opinions about others based on seeing the results of their efforts at such things. So people in this community try hard to do well at farming, cooking, sewing, soccer, and folk songs.

If one person put a lot of effort into proving math theorems, they wouldn’t get much social credit for it. Others don’t naturally see outcomes from that activity, and not having done much math they don’t know how to judge if this math is any good. This situation discourages doing unusual things, even if no other social conformity pressures are relevant.

Now let’s put that in a simple model. Let there be a community containing people j, and topic areas i where such people can create accomplishments a_ij. Each person j seeks a high personal prestige p_j = Σ_i v_i a_ij, where v_i is the visibility of area i. They also face a budget constraint on accomplishment, Σ_i (a_ij)² ≤ b_j. This assumes diminishing returns to effort in each area.
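As a quick check (my own derivation; the post just asserts the result), maximizing p_j subject to this budget constraint via a Lagrange multiplier gives the proportionality claim in the next paragraph:

\[
\max_{a_{ij}} \; \sum_i v_i a_{ij} \quad \text{s.t.} \quad \sum_i a_{ij}^2 \le b_j
\;\;\Rightarrow\;\; v_i = 2 \lambda_j a_{ij}
\;\;\Rightarrow\;\; a_{ij} = v_i \sqrt{\frac{b_j}{\sum_k v_k^2}} \;\propto\; v_i .
\]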

In this situation, each person’s best strategy is to choose a_ij proportional to v_i. Assume that people tend to see the areas where they are accomplishing more, so that visibility v_i is proportional to the average of a_ij across individuals. We now end up with many possible equilibria having different visibility distributions. In each equilibrium, for all individuals j and areas i,k, we have the same area ratios a_ij / a_kj = v_i / v_k.
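To make the “many equilibria” point concrete, here is a toy simulation (my own sketch; the function names and numbers are invented, not from the post). Since each person’s best response is proportional to current visibility, whatever relative visibility a community starts with gets reproduced, so each random starting point settles into its own equilibrium:

```python
import numpy as np

def best_response(v, budgets):
    """Each person j sets a_ij proportional to v_i, scaled to exhaust budget b_j."""
    scale = np.sqrt(budgets / np.sum(v ** 2))   # one scale factor per person
    return np.outer(v, scale)                   # accomplishments, shape (areas, people)

def update_visibility(a):
    """Visibility of area i is proportional to the average accomplishment in it."""
    v = a.mean(axis=1)
    return v / v.sum()                          # normalize so visibilities sum to 1

rng = np.random.default_rng(0)
budgets = np.ones(50)                           # 50 people with equal budgets
for trial in range(3):
    v = rng.random(5)                           # 5 areas, random starting visibility
    v /= v.sum()
    for _ in range(100):                        # iterate best responses and visibility
        v = update_visibility(best_response(v, budgets))
    print(np.round(v, 3))                       # each trial keeps its own distinct mix
```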

Giving individuals different abilities (such as via a budget constraint Σ_i (a_ij)² / x_ij ≤ b_j) could make individuals choose somewhat different accomplishments, but the same overall result obtains. Spillovers between activities in visibility or effort can have similar effects. Making some activities be naturally more visible might push toward those activities, but there could still remain many possible equilibria.

This wide range of equilibria isn’t very reassuring about the efficiency of this sort of prestige. But perhaps in a small foraging or farming community, group selection might over the long run push toward an efficient equilibrium where the most visible activities are also the most useful ones. However, larger societies need a strong division of labor, and with such a division it just isn’t feasible for everyone to evaluate everyone else’s specific accomplishments. This can be solved either by creating a command and status hierarchy that assigns people to tasks and promotes by merit, or by an open market with prestige going to those who make the most money. People often complain that doing prestige in these ways is “inauthentic”, and they prefer the “organic” feel of personally evaluating others’ accomplishments. But while the organic approach may feel better, it just doesn’t scale.

In academia today, patrons defer to insiders so much regarding evaluations that disciplines become largely autonomous. So economists evaluate other economists based mostly on their work in economics. If someone does work both in economics and in another area, they are judged mostly just on their work in economics. This penalizes careers that span multiple disciplines. It also raises doubts about whether different disciplines get the right relative support – who exactly can be trusted to make such a choice well?

Interestingly, academic disciplines are already organized “inorganically” internally. Rather than each economist evaluating each other economist personally, they trust journal editors and referees, and then judge people based on their publications. Yes they must coordinate to slowly update shared estimates of which publications count how much, but that seems doable informally.

In principle all of academia could be unified in this way – universities could just hire the candidates with the best overall publication (or citation) record, regardless of in which disciplines they did what work. But academia hasn’t coordinated to do this, nor does it seem much interested in trying. As usual, those who have won by existing evaluation criteria are reluctant to change criteria, after which they would look worse compared to new winners.

This fragmented prestige problem hurts me especially, as my interests don’t fit neatly into existing groups (academic and otherwise). People in each area tend to see me as having done some interesting things in their area, but too little to count me as high status; they mostly aren’t interested in my contributions to other areas. I look good if you count my overall citations, for example, but not if you only count my citations or publications in each specific area.


Best Combos Are Robust

I’ve been thinking a lot lately about what a future world of ems would be like, and in doing so I’ve been naturally drawn to a simple common intuitive way to deal with complexity: form best estimates on each variable one at a time, and then adjust each best estimate to take into account the others, until one has a reasonably coherent baseline combination: a set of variable values that each seem reasonable given the others.
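For the process-minded, here is a minimal sketch of this adjust-until-coherent loop (entirely my own toy; the variable names, update rules, and numbers are invented for illustration):

```python
def coherent_baseline(estimates, update_fns, max_rounds=100, tol=1e-6):
    """Revise each variable's best estimate given current estimates of the others,
    repeating until the combination stops changing (a coherent baseline combo)."""
    for _ in range(max_rounds):
        biggest_change = 0.0
        for name, update in update_fns.items():
            new = update(estimates)               # best guess for `name` given the rest
            biggest_change = max(biggest_change, abs(new - estimates[name]))
            estimates[name] = new
        if biggest_change < tol:
            break
    return estimates

# Toy example: three guesses that must cohere.
estimates = {"population": 1000.0, "workers": 300.0, "output": 500.0}
update_fns = {
    "workers":    lambda e: 0.4 * e["population"],    # assume ~40% of people work
    "output":     lambda e: 2.0 * e["workers"],       # assume ~2 units per worker
    "population": lambda e: e["population"],          # anchored by outside evidence
}
print(coherent_baseline(estimates, update_fns))
# -> {'population': 1000.0, 'workers': 400.0, 'output': 800.0}
```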

I’ve gotten a lot of informal complaints that this approach is badly overconfident, unscientific, and just plain ignorant. Don’t I know that any particular forecasted combo is very unlikely to be realized? Well yes, I do know this. But I don’t think critics realize how robust and widely used this best combo approach is.

For example, this is the main approach historians use when studying ancient societies. A historian estimating Roman Empire copper trade will typically rely on the best estimates by other experts on Roman population, mine locations, trade routes, travel times, crime rates, lifespans, climate, wages, copper use in jewelry, etc. While such estimates are sometimes based on relatively direct clues about those parameters, historians usually rely more on consistency with other parameter estimates. While they usually acknowledge their uncertainty, and sometimes identify coherent sets of alternative values for small sets of variables, historians mostly build best estimates on other historians’ best estimates.

As another example, the scheduling of very complex projects, as in construction, is usually done via reference to “baseline schedules,” which specify a best estimate start time, duration, and resource use for each part. While uncertainties are often given for each part, and sophisticated algorithms can take complex uncertainty dependencies into account in constructing this schedule (more here), most attention still focuses on that single best combination schedule.
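Here is a minimal sketch of what a baseline schedule boils down to (my own illustration; the task names, durations, and dependencies are invented): each part gets a single best-estimate start time, computed from its predecessors’ best estimates.

```python
# Hypothetical tasks: name -> (duration_days, prerequisites)
tasks = {"foundation": (10, []),
         "framing":    (15, ["foundation"]),
         "roof":       (7,  ["framing"]),
         "plumbing":   (9,  ["framing"]),
         "finish":     (12, ["roof", "plumbing"])}

def baseline_schedule(tasks):
    """Earliest-start baseline: each task starts when its last prerequisite ends."""
    start = {}
    def earliest(name):
        if name not in start:
            prereqs = tasks[name][1]
            start[name] = max((earliest(p) + tasks[p][0] for p in prereqs), default=0)
        return start[name]
    for name in tasks:
        earliest(name)
    return start

print(baseline_schedule(tasks))
# -> {'foundation': 0, 'framing': 10, 'roof': 25, 'plumbing': 25, 'finish': 34}
```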

As a third example, even when people go to all the trouble to set up a full formal joint probability distribution over a complex space, as in a complex Bayesian network, and so would seem to have the least need to crudely avoid complexity by focusing on just one joint state, they still quite commonly want to compute the “most probable explanation”, i.e., that single most likely joint state.
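As a toy version of that last point, here is a brute-force “most probable explanation” computation on the standard rain/sprinkler example network (my own numbers; real systems use much smarter algorithms, but the object being computed is the same single most likely joint state):

```python
import itertools

# Tiny Bayesian network: Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True:  {True: 0.01, False: 0.99},      # P(Sprinkler | Rain)
               False: {True: 0.40, False: 0.60}}
p_wet = {(True, True):   {True: 0.99, False: 0.01},   # P(WetGrass | Rain, Sprinkler)
         (True, False):  {True: 0.80, False: 0.20},
         (False, True):  {True: 0.90, False: 0.10},
         (False, False): {True: 0.00, False: 1.00}}

def joint(rain, sprinkler, wet):
    """Probability of one full joint state of the network."""
    return p_rain[rain] * p_sprinkler[rain][sprinkler] * p_wet[(rain, sprinkler)][wet]

# Most probable explanation: the single highest-probability joint state,
# here conditioned on the evidence WetGrass=True.
states = list(itertools.product([True, False], repeat=2))
rain, sprinkler = max(states, key=lambda rs: joint(rs[0], rs[1], True))
print("MPE given WetGrass=True:", {"Rain": rain, "Sprinkler": sprinkler})
```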

We also robustly use best tentative combinations when solving puzzles like Sudoku, crosswords, or jigsaws. In fact, it is hard to think of realistic complex decision or inference problems full of interdependencies where we don’t rely heavily on a few current best guess baseline combinations. Since I’m not willing to believe that we are so badly mistaken in all these areas as to heavily rely on a terribly mistaken method, I have to believe it is a reasonable and robust method. I don’t see why I should hesitate to apply it to future forecasting.


Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting to do. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not for very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and taking the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.
