Discussion about this post


@hopefully anonymous: yes, that was exactly the impression I got when reading Aaronson's post. For example, he gives timeframes of "centuries or even millennia" for human-equivalent AI with no justification whatsoever. Scott: if you make specific predictions about the future, you should have a justification for them. Aaronson's post looks like another example of emotionally motivated, pessimistic, careless futurism, which Eliezer dealt with in his Bloggingheads interview with John Horgan.


I pretty much agree with Robin in the OP (I haven't read all the comments yet). Scott's argument as quoted here seems suspiciously kitchen-sink to me: he seems to start with an understandable emotional desire not to see Kurzweil's singularity happen yet, and then throws in a variety of unconnected but emotionally aligned reasons for it not to happen. In contrast, I think Robin gets it right.

I do think history suggests the end of the world, immortality, etc. are no more within our lifetime than they were within Ponce de León's or Charles Lindbergh's, but there is some serious counterevidence in our time, such as the gap between the predicted time to complete the Human Genome Project and its actual completion.

22 more comments...