It seems to me that it is up to [Eliezer] to show us how his analysis, using his abstractions, convinces him that, more likely than it might otherwise seem, hand-coded AI will come soon and in the form of a single suddenly super-powerful AI.
As this didn’t prod a response, I guess it is up to me to summarize Eliezer’s argument as best I can, so I can then respond. Here goes:
A machine intelligence can directly rewrite its entire source code and redesign its entire physical hardware. While human brains can in principle modify themselves arbitrarily, in practice our limited understanding of ourselves means we mainly change ourselves only by thinking new thoughts. All else equal, this means that machine brains have an advantage in improving themselves.
A mind without arbitrary capacity limits, one that focuses on improving itself, can probably do so indefinitely. The growth rate of its "intelligence" may be slow while it is dumb, but it gets faster as it gets smarter. This growth rate also depends on how many parts of itself it can usefully change. So, all else equal, the growth rate of a machine intelligence must be greater than that of a human brain.
No matter what its initial disadvantage, a system with a faster growth rate eventually wins. So if the growth-rate advantage is large enough, then yes, a single computer could well go in a few days from less-than-human intelligence to being so smart it could take over the world. QED.
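To make that last step concrete, here is a toy version with a simple exponential model and made-up numbers of my own choosing (nothing Eliezer has specified). Let the two capabilities grow as

$$A(t) = A_0 e^{r_A t}, \qquad B(t) = B_0 e^{r_B t}, \qquad \text{with } r_A > r_B.$$

Even if $A_0 \ll B_0$, $A$ overtakes $B$ at the crossover time $t^* = \ln(B_0/A_0)/(r_A - r_B)$. With, say, $A_0 = 1$, $B_0 = 1000$, $r_A = 1/\text{day}$ and $r_B = 0.01/\text{day}$, that gives $t^* \approx 7$ days. The real dispute, as I read it, is over how large the rate gap $r_A - r_B$ could plausibly be, not over this arithmetic.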
So Eliezer, is this close enough to be worth my response? If not, could you suggest something closer?
"So you think that Eliezer avoids answering certain questions, because a little answer is a dangerous thing?
I don't think that explains his behavior well. And if it did, it would mean that he views us all as children, incapable of understanding the finer points, let alone actually contributing something."
OB is broadcast to everyone?
Spambot wrote:
Blunt statements of shocking conclusions are not that productive when they turn people off from considering the reasoning and general logic.

So you think that Eliezer avoids answering certain questions, because a little answer is a dangerous thing?
I don't think that explains his behavior well. And if it did, it would mean that he views us all as children, incapable of understanding the finer points, let alone actually contributing something.
Tim Tyler wrote:
I am not sure about that. Eliezer's proposed goal is a complicated extrapolation that neither I nor anyone else understands. Since the whole concept is pretty vague and amorphous, it seems rather difficult to say what it would actually do. Maybe it would kill people. However, you seemed to be claiming that it would be very likely to kill people.

I think that the possibility that it may kill people should be acknowledged. I think that in middling-speed takeoff scenarios it is more likely to kill people, and that Eliezer and others have prematurely assigned all such scenarios a probability of zero.
My motive in pointing this out is not to say that it may kill people and this would be bad. My motive is more along the lines of prodding people out of thinking "if we can make friendly AI then we will be saved".
My larger objective would be to point out that "we will be saved" is ill-defined, and that "saving humanity" will likely end up meaning something that entails the physical death of most humans, or keeping humans in a zoo with technological development forbidden, or something else that we morally ought not to do.
The presentation of CEV is schizophrenic because of this. On one hand, it's supposed to save human values by extrapolating from them. On the other hand, Eliezer thinks that values are arbitrary; and whenever he talks about CEV, he talks about saving the future for meat humans, as if the purpose of guiding the AI's development were not its effects on the AI, but its benefits for meat-humans. I don't know if this is confusion on his part, or a deliberate schizophrenia cultivated to avoid scaring off donors. Repeated questioning by me has failed to produce any statement from him to the effect that he imagines the humans he wants to save ever being anything other than what they are today.
Now, maybe you have a better understanding of Eliezer's proposal than I do. However, the way the whole thing is formulated suggests you would have to be a superintelligent agent to actually make much sense of it. That makes it difficult for critics such as yourself to get much of a fix on the target.

Eliezer is not a superintelligent agent. So your statement necessarily implies that CEV is nonsense.
I would have a much better understanding of Eliezer's proposal if he were willing to spend one one-hundredth as much time answering simple questions about it as he does writing about it.
But I also think I am done with it. I have wasted too much time already trying to bring about a review of someone else's ideas, when that person isn't even interested in having his ideas reviewed.
When people talk about the scientific method, they usually focus on the up-front part: making predictions and testing them. But another part of the scientific method is peer review. I can see how this would present problems for someone who imagines he has no peers. But "take it or leave it" is not operating within the scientific method.