23 Comments
Peter Gerdes:

This is one of the reasons that a belief in FOOM (fast AI takeoff) is incompatible with the standard arguments that the AI alignment problem is hard or even impossible. Digital minds need to worry about drift as well.

If the problem is hard, then we should expect it to be hard for the first superhuman AI we create as well. The arguments that we face an alignment problem depend on establishing that this AI will have goals it wants to achieve (the concern being that they differ from ours). But such an AI will then refrain from creating improved versions of itself, because it won't be able to ensure they are aligned with it.

Peter Gerdes:

Obviously, it could be that the alignment problem is within the capability of some AI to solve but is very difficult for us, or that the AI will somehow have an advantage over us in ensuring its successor behaves as it desires.

However, you can't defend that kind of position with general arguments claiming to show AI alignment is hard -- you need arguments that specifically show that we should think it's hard for us but will be tractable for the first generation of super human AI we create.

Michael J. McGuffin:

"digital minds [...] could succumb to poverty, disease, and war" Aren't these easily prevented in a virtual world, especially disease?

Robin Hanson:

Heard of computer viruses?

warty dog:

Viruses, mutations, and rot are in principle preventable for software, so it's weird to me that you seem confident they will happen in the silicon world.

Tim Tyler:

Part of the issue is whether future organisms will be prepared to pay the cost. Senescence, for example, is preventable in principle, but typically through extra expenditure on "maintenance", at the expense of reproduction. Will future organisms be prepared to pay? If they do, will they be out-competed by others who do not? I don't have all the answers here, but hopefully this illuminates some of the issues.

warty dog:

typo: have may have

Robin Hanson:

fixed.

ron katz:

seems like gibberish...but keep on trucking

Andy G:

“their wages would quickly fall to subsistence levels. At which point they could succumb to poverty, disease, and war as easily as did our ancestors of several centuries ago.”

For the sake of discussion, I'm willing to assume the first point ("subsistence" wages). Why in the world, in such a massively wealthier economy, is it a reasonable assumption that poverty, war, and disease follow, even for a biological "mind", let alone a digital one?

[assuming, of course, more or less today’s regulation and social safety net; sure in a society with literally zero safety net what you are proposing would of course be quite plausible.]

You seem to have fallen into the “10th Commandment” trap, asserting that income inequality is an inherent bad. Please explain why it would be anything but a wonderful thing - at least re: poverty, war, disease axes - for almost everyone to have a “subsistence” income 100x above today’s [U.S.] poverty line.

Because most people in that society wouldn't be able to buy great seats for a Taylor Swift live performance (or whatever the status symbols of the day are)?

I genuinely don’t get it.

Jack:

This is all very speculative of course. But I wonder if digital minds might have more leeway to form their own cultures that go against the grain of mainline culture. For one, they will be walled off from flesh-and-blood humans, somewhat like the various cultures of the world were before easy travel existed. It seems that the smaller the surface area between two cultures, the more that differences are tolerated. Also, some fraction of physical people likely won't consider digital minds "real", and thus not worthy of the same degree of moral outrage when norms are violated.

Joseph Bronski:

Drift acting on memotypes should be lower than ever because the population is higher than ever. Why do you seem so sure about 'culture' causing all of the woes of modernity? I looked at the evidence and concluded the most probable cause of civilization decline is mutational load and dysgenics.

Robin Hanson:

Most cultural drift isn't statistical fluctuations, as with DNA drift. https://www.overcomingbias.com/p/beware-moral-fashion

Tim Tyler:

Why use the term "drift", at all, then? Aren't a lot of people going to assume the population genetics term is being applied to memes - and then misunderstand what you are saying?

Robin Hanson:

What is a better term?

Tim Tyler:

Here is your own analysis:

"It seems “maladaptive culture” might be a better name for the problem."

- https://www.overcomingbias.com/p/i-fear-maladaptive-culture

I thought that was an improvement over the earlier and later posts mentioning "drift". Perhaps it could be criticized as being more long-winded. I would probably use "bad memes" instead. Anyway, most workers in the field don't seem to have this "drift"-related terminology problem. I still think you could do better than use the "drift" term - and then have to explain that you mean an unusual, non-random version.

Joseph Bronski:

If you use an infinitesimal model of memetics then drift pops out as sampling error just like in population genetics. How are you defining and measuring it?
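The sampling-error sense of drift invoked here can be sketched with a toy Wright-Fisher-style simulation (a hypothetical illustration, not code from any of the linked posts; all names are mine): under pure random copying, the variance drift introduces into a variant's frequency shrinks as the population grows, which is the narrow statistical point at issue in this exchange.

```python
import random

def wright_fisher(p0, pop_size, generations, rng):
    """Frequency of one cultural variant under pure drift: each
    generation, every individual copies a random member of the
    previous generation, so all change is sampling error."""
    p = p0
    for _ in range(generations):
        # Binomial sampling: how many copy the variant this generation.
        adopters = sum(rng.random() < p for _ in range(pop_size))
        p = adopters / pop_size
    return p

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
for n in (50, 2000):
    finals = [wright_fisher(0.5, n, 30, rng) for _ in range(100)]
    # The larger population shows far less scatter around 0.5.
    print(f"pop={n:>5}  variance of final frequency = {variance(finals):.4f}")
```

Note this models only random fluctuation; it says nothing about the non-random, selection-like cultural change discussed elsewhere in the thread.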

Tim Tyler:

So: drift is a pretty fundamental part of cultural evolution - just as with the evolution of DNA genes.

If in the future we decide that we don't like some of the side effects of drift then maybe we can use memetic engineering, self-directed evolution and evaluation under simulation in an attempt to minimize some of the more undesirable side effects.

Robin Hanson:

That's not how the problem works. Culture tells us what we like, and we believe it, even if it is maladaptive.

Tim Tyler:

So: you claim to at least be able to see a problem in this area without being brainwashed by culture into thinking the changes are all wonderful. If you can see a problem in this area, then you can probably entertain the hypothesis that others will be able to see it too.

Maybe those people won't be in charge - or won't get their way for other reasons. That certainly seems possible.

Andy G:

Ok, but *possible* doesn't mean "likely", let alone "probable".

Bassoe:

You want economist Robin Hanson's The Age of Em: Work, Love and Life when Robots Rule the Earth for a look at this from a class reductionist perspective. Namely, that if automated substitutes are more capable than humans, or at least capable of comparable work at a lower price due to having lesser needs, capitalism will act in their favor, not ours.

Premise: assume it becomes possible to create virtual emulations functionally equivalent to human minds, the titular "Em". These may be pure artificial intelligences or uploaded human consciousnesses; it doesn't really matter for the sake of argument.

Furthermore, assume Ems would be more capable employees than standard humans, if only because they could be run at computational rather than biological speeds and skilled individual Ems could be copied to save on training, while requiring fewer resources. An Em would rent storage and processing to run itself in a server farm and "own" subscriptions to virtual goods, working constantly to keep ahead of its debt.

Meanwhile, human employees, facing zero-sum competition with more capable and cheaper Em alternatives, can't live off what an Em would consider livable wages, because their actual food and housing have higher production costs than copy/paste. Therefore, they have no choice but to likewise upload their minds and become Ems, or starve as economically redundant.

Needless to say, being an economist, Robin Hanson likes the nightmarish dystopia he’s envisioning, though I can't imagine any normal human being would.

Scott:

If the cost of living falls faster than wages, the problem doesn't occur.
