Discussion about this post

Riothamus:

I may be mistaken, but I think the drift here means ‘relative to what we believed them to be’.

There isn’t much to distinguish something that is directly random from something whose randomness is bounded but whose bound is unknown.

David Krueger:

"In the context of foom, the usual AI concern is a total loss of control of the one super AI, whose goals quickly drift to a random point in the space of possible goals."

That seems very wrong to me. The concern is not about goals drifting; it is about them being relentlessly pursued. What am I missing?

"From this view, those tempted to spend resources now on studying AI control should consider two reasonable alternatives. The first alternative is to just save more now to grow resources to be used later, when we understand more. The second alternative is to work to innovate with our general control institutions, to make them more robust, and thus better able to handle larger coordination scales, and whatever other problems the future may hold. (E.g., futarchy.)"

These seem like reasonable options to consider in any case. I tend to think that solving global coordination is essential. I think most AI-safety people are not focused on it because it seems intractable to them; so even if the chances of technical success seem low, technical work still strikes them as the better bet.

31 more comments...
