I’ve long enjoyed the science fiction novels of Charlie Stross, so I’m honored that he linked to my “Betterness Explosion” from his “Three arguments against the singularity”: I periodically get email from folks who, having read “Accelerando”, assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. … It’s time to set the record straight. … Santa Claus doesn’t exist. …
strong on emotion, but weak on argument
He certainly is a polyhistor, but sadly "strong on emotion, weak on argument" applies to many of his statements (or maybe it's just me, still being cross with him for getting snarky in a nuclear power pro/con debate). As a layman I can't say much about the feasibility of emulating a complex neural network, or about other singularity-related topics, but his argument here sounds reasonable nonetheless.
What I wonder is: if smart people do figure out a smart process for creating a superhuman or even "god-like" AI, how will they guarantee that its agenda is congruent with my well-being and that of the rest of humanity?
As he explains in the comments, he's sick of fat singularity fanboys pestering him because of his work:
I'm not convinced that the singularity isn't going to happen. It's just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it's going to be until they can upload into AI heaven and leave the meatsack behind. (Maybe if they paid more attention to meatsack maintenance in the short term they'd have a better chance of surviving to see the rapture of the nerds -- and even enjoy the meatsack experience along the way.)

Moravec's writing is what turned me on to transhumanism in the first place, in the late 1980s/early 1990s.
"how long nearly unmodified uploads would dominate, or just how far from humans would be the most competitive creatures"
These are all things that seem poorly integrated in your own writings.
Luke: he considers cryonics “faith-based” and “barely more practical than the Ancient Egyptian mummy-makers”.
Would this statement apply to all cryonics that might feasibly be achieved within a human lifetime, per his prediction, or only to cryonics as achieved to date? This is a crucial, frequently glossed-over distinction.
I'm sure you've already addressed this somewhere, but I'm combining some of your thoughts, and this is what I'm seeing:
Ems would be held at subsistence-level rates. One of the advances that would allow for mass emulation is the singularity. Would it not follow that if the singularity is possible, then it is likely that it has already happened, and that we are all subsistence-level ems within a simulation run by the singularity-intelligence?
I think "personal revulsion against affiliating with singularity fans" is a bit extreme. More like annoyance with singularity zealots.
He seems to regard the topics as interesting things to talk about, just not likely to happen anytime soon, and certainly not worth planning your life around.
If he truly didn't want to interact with singularity fans, I think he's smart enough to know better than to write blog posts on the subject.
... It’s unwise to live on the assumption that they’re coming down the pipeline within my lifetime.
Is this an endorsement of cryonics?
I think it's pretty clear that Stross does disagree with singularity fans. He thinks both hand-coded human-equivalent AIs and EMs are a long way off, not at all in the foreseeable future.
Personally, given my current knowledge base, I'd predict that we'll get EMs long before we get human-equivalent AI, and that modifying the EMs into something better will be a long, slow process.
Do you think otherwise?