Moral values are the deep-seated preferences of people, taken collectively.

Godwin's Law doesn't apply to Soviet Russia.

"there's been no work I'm aware of actually estimating the gains from creating an industrial singleton."

Actually, didn't Robin write about the advantages of monopolies and oligarchies a while back (which is not the same as saying those would be better overall than a competitive system)? I think he said something along the lines of such big players spending more on R&D (though that's on paper, and there are tax incentives to inflate the on-paper R&D budget). Also, technocracy proposes industrial singletons for different sectors of the economy.

The early Soviet Union was a weird place, with all kinds of economic experiments. Mostly, though, when something improved it was a matter of doing less badly than the notoriously corrupt and underdeveloped Russian Empire that preceded it.

Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and nations who did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation.

Your interpretation, then, of the rise of state socialism is that it was based on an exaggeration of returns to scale.

[Who sold this line to "many"? Economists? Politicians?]

Here's an observation that supports your conclusion. Among hard socialists and communists, it is accepted as an article of faith that an international command economy would produce enormous efficiencies. Yet for all the efforts made to prove or disprove Marx's theory of the tendency of the rate of profit to decline, there's been no work I'm aware of that actually estimates the gains from creating an industrial singleton. I've never even heard anyone mention the need to actually study the question.

On the other side of the argument, there are some remarkable successes by the early Soviet Union--also largely ignored these days, even by hard socialists--that might provide the best available evidence on the efficiencies of ultra-large-scale production.

"And even if we were up to the task I’m not at all sure gains would be worth the quite substantial costs"

I strongly disagree here - if we are talking about trying to prevent a future filled with nothing but "worst case demon spawn"/paperclip AI/"hardscrapple frontier"-type entities. In fact I question your sanity for wondering whether preventing such a future is "worth the costs".

Now obviously there is a continuum of drifted values. Maybe people in the future will change values a little bit, e.g. a world in which prostitution was legal and something most people did on a Sunday for a bit of extra cash/kudos, or a world in which people were actually all living in human-indistinguishable robotic bodies because that's the easiest way to extend lifespans.

But I think that the people who are pro-FAI are not ruling out the small drifts/changes. They're trying to rule out the big ones, the ones that are morally indefensible.

Also one should mention the idea of an FAI whose job is to split the universe up into separate parts and enforce peace and nonaggression between them; contemporary humans get one part, moderate transhumans get another, extreme posthumans get yet another.

The positions of those like EY, concerned about fundamental values in the future, are truly religious worries. What do these folks think "moral values" are? Explicit moral values are only devices allowing us to function consistently in the face of the exigencies of decision fatigue. ( http://tinyurl.com/7dcbt7y ) To wish to impose our values on the future is like insisting that the future use any other obsolete tool.

But RH, you are partly to blame! You've defended em society based on a version of utilitarian values. You, too, think moral beliefs are in some sense true or false. There's the mistake.

Wei Dai, I think he's looking at value drift as potentially bad because of the intergenerational conflict it causes. So, the issue for RH (unlike you) is social dislocation, not "bad values."

I don't have strong expectations on this, so I have avoided expressing opinions on it.

The main difference is that you expect greatly accelerated rates of change soon, which result in greatly accelerated rates of value drift.

Do you also expect music and consciousness to disappear, but just at a slower rate? You seemed to imply otherwise when you wrote, "Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn."

If you do not expect consciousness and music to disappear (as long as we have descendents at all), then that seems to me to be at least a contender for the "main difference" between your position and that of the futurists that you're criticizing.

If you do expect consciousness and music to disappear eventually, then do you

(A) not consider this to be the "worst case" scenario? or

(B) agree that it is very bad, but think that efforts to prevent it with present regulation would somehow make things even worse?

Just in case it flies under the radar, I'd like to point out Robin's subtle comp sci pun: *daemon* spawn.

I still don't understand what your disagreement is with me. Let me try another tack. You wrote that strong coordination would be needed to control value drift (and seem to imply that it would be a good idea if only it were feasible and could be done at low cost). But why would we need that coordination in the first place? Isn't it because excess incentives to be first, and externalized risks, make people individually want to build AIs as soon as possible, without waiting for technologies that would allow faithful transmission of values from humans to AIs and to subsequent generations? If you disagree with this, how would you explain why an unregulated market would not provide the optimal amount of value drift?

Could you be referring to a 1998 comment by Nick Bostrom (http://mindstalk.net/polyma... )? He seemed to say that humans shouldn't be allowed to expand faster than the singleton, but predicted that wouldn't cause delay.

"The poster-children of regulatory overreach are 20th century totalitarian nations"

Godwin's Law.

Sometime I'd like to look at literature mentioning the Collingridge dilemma and see what that says about our ability to limit preference change by controlling future technology.

http://criticaluncertaintie...

All enduring regulating institutions (and thus modern values) seem to include regulations, such as the Golden Rule and the mandate to innovate, which serve to protect intergenerational value drift (http://grinfree.com/rules-a... ). I think you are correct that it would be challenging to construct a single AI today that implements these parts of our regulatory systems, but the problem is less with regulation than with the current state of AI. I appreciate your efforts to keep this issue on the radar.

Wanting to take apart all humans for paperclip parts isn't just a bigger value change than the total of everything that has changed over human history so far, it is a change along a dimension that hasn't previously changed.
