Tag Archives: complexity

Organic Prestige Doesn’t Scale

Some parts of our world, such as academia, rely heavily on prestige to allocate resources and effort; individuals have a lot of freedom to choose topics, and are mainly rewarded for seeming impressive to others. I’ve talked before about how some hope for a “Star Trek” future where most everything is done that way, and I’m now reading Walkaway, outlining a similar hope. I was skeptical:

In academia, many important and useful research problems are ignored because they are not good places to show off the usual kinds of impressiveness. Trying to manage a huge economy based only on prestige would vastly magnify that inefficiency. Someone is going to clean shit because that is their best route to prestige?! (more)

Here I want to elaborate on this critique, with the help of a simple model. But first let me start with an example. Imagine a simple farming community. People there spend a lot of time farming, but they must also cook and sew. In their free time they play soccer and sing folk songs. As a result of doing all these things, they tend to “organically” form opinions about others based on seeing the results of their efforts at such things. So people in this community try hard to do well at farming, cooking, sewing, soccer, and folk songs.

If one person put a lot of effort into proving math theorems, they wouldn’t get much social credit for it. Others don’t naturally see outcomes from that activity, and not having done much math they don’t know how to judge if this math is any good. This situation discourages doing unusual things, even if no other social conformity pressures are relevant.

Now let’s express that in a simple model. Let there be a community containing people j, and topic areas i in which such people can create accomplishments aij. Each person j seeks high personal prestige pj = Σi vi aij, where vi is the visibility of area i. They also face a budget constraint on accomplishment, Σi aij² ≤ bj. This assumes diminishing returns to effort in each area.

In this situation, each person’s best strategy is to choose aij proportional to vi. Assume that people tend to see the areas where they are accomplishing more, so that visibility vi is proportional to the average of aij over individuals j. We then end up with many possible equilibria, having different visibility distributions. In each equilibrium, for all individuals j and areas i,k we have the same area ratios aij / akj = vi / vk.
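This model is simple enough to check numerically. Here is a minimal sketch in Python (the code, variable names, and parameter values are my own illustration, not from the post): each person’s best response puts effort proportional to visibility, and an arbitrary starting visibility distribution then confirms itself, so many equilibria are possible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_areas = 50, 5
b = rng.uniform(1.0, 4.0, n_people)  # effort budgets b_j

def best_response(v, b):
    # maximize sum_i v_i * a_ij  subject to  sum_i a_ij^2 <= b_j;
    # the Lagrange condition gives a_ij = v_i * sqrt(b_j) / ||v||
    return np.outer(np.sqrt(b), v / np.linalg.norm(v))

def visibility(a):
    v = a.mean(axis=0)  # visibility proportional to average accomplishment
    return v / v.sum()

v0 = rng.uniform(0.1, 1.0, n_areas)
v0 /= v0.sum()  # an arbitrary starting visibility distribution
v = v0.copy()
for _ in range(100):
    v = visibility(best_response(v, b))

a = best_response(v, b)
# the arbitrary starting distribution is self-confirming: an equilibrium
assert np.allclose(v, v0)
# and area ratios a_ij / a_kj = v_i / v_k are the same for every person j
assert np.allclose(a[:, 0] / a[:, 1], v[0] / v[1])
```

Since any normalized starting distribution survives the update unchanged, the iteration never selects among equilibria, which is the indeterminacy the model is meant to exhibit.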

Giving individuals different abilities (such as via a budget constraint Σi aij² / xij ≤ bj) could make individuals choose somewhat different accomplishments, but the same overall result obtains. Spillovers between activities in visibility or effort can have similar effects. Making some activities naturally more visible might push toward those activities, but there could still remain many possible equilibria.

This wide range of equilibria isn’t very reassuring about the efficiency of this sort of prestige. But perhaps in a small foraging or farming community, group selection might over the long run push toward an efficient equilibrium where the high visibility activities are also the most useful activities. However, larger societies need a strong division of labor, and with such a division it just isn’t feasible for everyone to evaluate everyone else’s specific accomplishments. This can be solved either by creating a command and status hierarchy that assigns people to tasks and promotes by merit, or by an open market with prestige going to those who make the most money. People often complain that allocating prestige in these ways is “inauthentic”, and they prefer the “organic” feel of personally evaluating others’ accomplishments. But while the organic approach may feel better, it just doesn’t scale.

In academia today, patrons defer to insiders so much regarding evaluations that disciplines become largely autonomous. So economists evaluate other economists based mostly on their work in economics. If someone does work both in economics and in another area, they are judged mostly just on their work in economics. This penalizes careers that span multiple disciplines. It also raises doubts about whether different disciplines get the right relative support – who exactly can be trusted to make such a choice well?

Interestingly, academic disciplines are already organized “inorganically” internally. Rather than each economist evaluating every other economist personally, they trust journal editors and referees, and then judge people based on their publications. Yes, they must coordinate to slowly update shared estimates of which publications count how much, but that seems doable informally.

In principle all of academia could be unified in this way – universities could just hire the candidates with the best overall publication (or citation) record, regardless of in which disciplines they did what work. But academia hasn’t coordinated to do this, nor does it seem much interested in trying. As usual, those who have won by existing evaluation criteria are reluctant to change criteria, after which they would look worse compared to new winners.

This fragmented prestige problem hurts me especially, as my interests don’t fit neatly into existing groups (academic and otherwise). People in each area tend to see me as having done some interesting things in their area, but too little to count me as high status; they mostly aren’t interested in my contributions to other areas. I look good if you count my overall citations, for example, but not if you count only my citations or publications in each specific area.


Best Combos Are Robust

I’ve been thinking a lot lately about what a future world of ems would be like, and in doing so I’ve been naturally drawn to a simple common intuitive way to deal with complexity: form best estimates on each variable one at a time, and then adjust each best estimate to take into account the others, until one has a reasonably coherent baseline combination: a set of variable values that each seem reasonable given the others.

I’ve gotten a lot of informal complaints that this approach is badly overconfident, unscientific, and just plain ignorant. Don’t I know that any particular forecasted combo is very unlikely to be realized? Well yes I do know this. But I don’t think critics realize how robust and widely used is this best combo approach.

For example, this is the main approach historians use when studying ancient societies. A historian estimating Roman Empire copper trade will typically rely on the best estimates by other experts on Roman population, mine locations, trade routes, travel time, crime rates, lifespans, climate, wages, copper use in jewelry, etc. While such estimates are sometimes based on relatively direct clues about those parameters, historians usually rely more on consistency with other parameter estimates. While they usually acknowledge their uncertainty, and sometimes identify coherent sets of alternative values for small sets of variables, historians mostly build best estimates on other historians’ best estimates.

As another example, the scheduling of very complex projects, as in construction, is usually done via reference to “baseline schedules,” which specify a best estimate start time, duration, and resource use for each part. While uncertainties are often given for each part, and sophisticated algorithms can take complex uncertainty dependencies into account in constructing this schedule (more here), most attention still focuses on that single best combination schedule.
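As a toy illustration of such a baseline schedule (the task names, durations, and dependencies below are invented for this sketch, not drawn from any real project), a single forward pass over task dependencies yields a best-estimate start time for each part:

```python
# toy baseline schedule: each task gets one best-estimate start time
tasks = {  # task: (duration, prerequisites) — all values made up
    "foundation": (5, []),
    "framing":    (10, ["foundation"]),
    "plumbing":   (4, ["framing"]),
    "wiring":     (3, ["framing"]),
    "drywall":    (6, ["plumbing", "wiring"]),
}

start = {}
for name in tasks:  # dict insertion order here is already topological
    dur, prereqs = tasks[name]
    # earliest start = latest finish among prerequisites (0 if none)
    start[name] = max((start[p] + tasks[p][0] for p in prereqs), default=0)

makespan = max(start[t] + tasks[t][0] for t in tasks)  # project finish time
assert start["drywall"] == 19 and makespan == 25
```

Real schedulers attach uncertainty ranges to each duration, but as the post notes, attention still centers on this single best combination.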

As a third example, even when people go to all the trouble to set up a full formal joint probability distribution over a complex space, as in a complex Bayesian network, and so would seem to have the least need to crudely avoid complexity by focusing on just one joint state, they still quite commonly want to compute the “most probable explanation”, i.e., that single most likely joint state.
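For concreteness, here is a tiny sketch of computing a most probable explanation by brute-force enumeration, using a toy rain/sprinkler/wet-grass network with made-up probabilities (a real system would use a Bayesian-network library and dynamic programming rather than enumeration):

```python
from itertools import product

# conditional probability tables, all numbers invented for illustration
P_R = {1: 0.2, 0: 0.8}                               # P(rain)
P_S = {1: {1: 0.01, 0: 0.99}, 0: {1: 0.4, 0: 0.6}}   # P(sprinkler | rain)
P_W = {(1, 1): {1: 0.99, 0: 0.01}, (1, 0): {1: 0.8, 0: 0.2},
       (0, 1): {1: 0.9, 0: 0.1},   (0, 0): {1: 0.0, 0: 1.0}}  # P(wet | rain, sprinkler)

def joint(r, s, w):
    # chain rule over the network: P(r) * P(s|r) * P(w|r,s)
    return P_R[r] * P_S[r][s] * P_W[(r, s)][w]

# the "most probable explanation": the single most likely joint state
mpe = max(product([0, 1], repeat=3), key=lambda state: joint(*state))
assert mpe == (0, 0, 0)  # no rain, no sprinkler, dry grass
```

Even with the full joint distribution in hand, the natural summary people reach for is this one best combination of values.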

We also robustly use best tentative combinations when solving puzzles like Sudoku, crosswords, or jigsaws. In fact, it is hard to think of realistic complex decision or inference problems full of interdependencies where we don’t rely heavily on a few current best guess baseline combinations. Since I’m not willing to believe that we are so badly mistaken in all these areas as to heavily rely on a terribly mistaken method, I have to believe it is a reasonable and robust method. I don’t see why I should hesitate to apply it to future forecasting.


Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting to do. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not for very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and taking the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.
