59 Comments

Your comment is now famous! You are in Nick Bostrom's Superintelligence and he presents your comment in talks!


In my case, since I was responding to the blog at Foresight, my response was written based on what was there, and re-posted here for fairness. I do normally read through the entire blog post and commentary before responding. Your take on the issue, per your responses on Foresight, indicated disagreement with the need to go as quickly as we can in order to shorten the period of danger between now and fully mature nanotech that can defend us against the dangers of its misuse.


I do sometimes wonder how many folks comment on a post without reading it all.


Hmm. It also seems that your take on it here is rather different from what your commentary on Foresight indicated. While I stand by the need for speed, and agree completely with JoSH, it does not seem that you are in quite the same "take every precaution and safeguard" boat that Treder and Drexler have been in, Robin. Please note that I am addressing every "go slow" proponent with a general "you", not you, Robin, specifically.


I posted this over at JoSH's blog response, but figured it would be best to post it here too.

Dash to the future? Hell no. We need to strap a dozen JATO units to the back of the car, and punch it.

To Robin, K. Eric, Mike, and all the other ultra-cautious “let’s go slow and make a trillion safeguards” types, let me say something I have wanted to say to you all since I first read Engines.

Slow will kill us.

Our only hope is to ride the rocket. There are 6 billion people on the planet, and no two of them share exactly the same ethics and morality. One man’s evil is another man’s good. You want to take it slow, make ten million safety checks, make sure that every contingency has been planned for? Well, I pity you when Al-Qaeda perfects its nanobot that will kill you for not being a Shi’ite Muslim, or its superbug that will slaughter every non-Arab.

Simply put, ethics sounds nice. Morality sounds nice. Caution sounds nice. And it will get us all killed by the people whose ethics, morality, and sense of what’s right and wrong are totally different from yours. Banning stem cell research didn’t stop research in the rest of the world. Banning cloning didn’t stop it either.

You say “let’s all be friends and play nice together”, and they will say “We will bury you.”

Technology doesn’t wait on consensus. It doesn’t wait for everyone to agree on whether it should or shouldn’t be created. K. Eric had one thing right: you cannot put the genie back in the bottle. The genie is out, and it will serve whoever masters it first. At least if we become its master, there’s a better-than-even chance it will benefit the entire human race; but if we hem and haw and worry about how best to control the genie, you can bet someone else will beat us to it. Japan? Not so worried; giant mecha would be cool. China? Not so certain there. The Taliban? We can kiss our asses goodbye.

We can’t afford to debate, we can’t afford to slow down, we can’t afford to do anything but full speed ahead and damn the torpedoes.

The sole consolation I have is the knowledge that for all your worry and caution, for twenty years now I’ve watched the makers and creators of the technology ignore you.

It may kill us, yes. We may grey-goo ourselves, build Skynet, turn our planet into a new asteroid field, or do any number of other horrible things. But it’s the only hope we have of getting out of childhood alive. We’ve been walking a razor’s edge between heaven and hell since Einstein thought up E=mc², and we have had a sword hanging over our heads for all of our existence. Once Drexler proposed a means to create the salvation of our race, it should have been the sole project of all of science to make it happen.

We’re racing down an ever-steeper slope to a future beyond imagining. Between us and it are a thousand pitfalls, terrorists, Luddites, and crazies of all descriptions. If we slow down for even a fraction of a second, they will tear us from the sled and rip us to pieces. Speed is the only sane course. Some of us are going to die along the way. There’s nothing we can do about that, but the sooner we reach that light at the end of the path, the more of us will survive to enjoy our victory.


You don't hear about it much, but we used to be a very environmentally unfriendly civilization, and that wasn't a good thing. See

http://en.wikipedia.org/wik...

I wouldn't be surprised if pundits played a role in changing that. I can't name any specific pundits who played a role, but then I can't name any specific pundits from that era, period.


Improving the scope and quality of education would probably make things go faster, as would working to improve technology.


It is surprisingly hard to get experimental proof to go the right way on the walk-versus-run-in-the-rain question:

Over a hundred-yard course, it turned out better to walk in the rain than to run, if your goal is to not get wet. The difference wasn't huge, but over eight trials the running person got wetter.

The MythBusters guys were kind of embarrassed by this.
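For what it's worth, the usual back-of-envelope model says the opposite, which is what makes the experiment so awkward: the rain you sweep up with your front depends only on the distance covered, while the rain that lands on top of you grows with time spent out in it, so running should keep you drier. A minimal sketch of that toy model (a box-shaped pedestrian, steady vertical rain; every number below is an illustrative guess, not a measurement from the show):

```python
# Toy model: water collected = (rain swept by your front, ~ distance covered)
#                            + (rain falling on your top, ~ time spent in the rain).
# All parameters are assumed for illustration only.

RAIN_DENSITY = 1e-3   # kg of airborne water per cubic metre (assumed)
FALL_SPEED   = 5.0    # rain fall speed, m/s (assumed)
FRONT_AREA   = 0.8    # frontal area of a person, m^2 (assumed)
TOP_AREA     = 0.1    # head-and-shoulders area, m^2 (assumed)
DISTANCE     = 91.4   # a hundred-yard course, in metres

def water_collected(speed_m_s: float) -> float:
    """Water (kg) collected while covering DISTANCE at a constant speed."""
    time_in_rain = DISTANCE / speed_m_s
    swept_by_front = RAIN_DENSITY * FRONT_AREA * DISTANCE                 # same at any speed
    fallen_on_top = RAIN_DENSITY * FALL_SPEED * TOP_AREA * time_in_rain   # shrinks as you speed up
    return swept_by_front + fallen_on_top

for label, speed in [("walk", 1.5), ("run", 5.0)]:
    print(f"{label}: {water_collected(speed) * 1000:.1f} g of water")
```

In this toy model the runner ends up drier, which is the "right way" the trials stubbornly refused to confirm.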


Read more of Robin's upload work. I was converted.


I really like this post, Robin. It's a big issue and it's hard to have something to say about it, so it wouldn't get talked about at all if you didn't come up with all these options and minefield analogies.


I think that the idea of "computer-people" is a fallacy.

First, a historical analogy. Back in the early 20th century, a lot of SF writers were fascinated by the idea of humanoid robots. The human body is a good general-purpose machine, and that's what they thought robots would look like.

We now know that robots don't work that way. Robots are built as special-purpose machines. It's much easier and more efficient to build a robot that vacuums, or that operates a machine tool, or that stocks a warehouse, than to build a general-purpose robot. A "printing robot" doesn't look like a metal man at a printing press; it looks like a printer.

The brain is also a complex system with many separate subfunctions. Some of these subfunctions (calculation, memory, data search) can be done today by machines better than by people. Some are still human-dominated (facial recognition), and some haven't been replicated electronically yet. But I don't think the AI of the future will have all the same parts in the same proportions as the brain. I see AI as a set of tools for efficiently solving discrete sets of problems, more Roomba than C-3PO.

More specifically, I doubt that the brain subsystems that, in humans, pass for "free will" would often be mimicked by AI designers. Whoever builds the system will want the machine to satisfy its owner's needs, not go off on tangents of its own.

PS. The outer space environment contains a fair bit of high-energy, difficult-to-shield radiation, and may not be, overall, much more congenial to semiconductor crystals than it is to biological life.


Go to space once we're brain emulations running in computers. Simplifies things a lot.


Democratizing power of the web, eh?

Some arseholes no doubt thought their high IQs made them superior to everyone else; sadly for them, with the explosion of ever more powerful web apps, what they didn't realize is that soon everyone else will in principle be their equal in terms of raw *optimization power* - for any specific domain, you can now (or will soon be able to) find a web app that duplicates the effect of optimization power (i.e., high IQ combined with decision making).

What can't be automated? Well, I think, as you mentioned, the ability to 'put everything together' - the ability to see the big picture and form analogies and cross-domain connections between things. I'm confident this will always require genuine creativity and consciousness (only a sentient AI could have these abilities).

So the *real* advantage is rapidly swinging away from near-mode thinkers with a love of details and system (conventional high-IQers), and towards far-mode thinkers with a love of the 'big picture' and the ability to integrate multiple domains - folks with traits like creativity, imagination, and analogy-making - people like me ;)


Your optimism about space travel seems a bit naive. I don't think that we can get to the stars in any meaningful sense. SF writer Charles Stross lays out some of the math here:

http://www.antipope.org/cha...

I expect that strong AI could help optimize our economy, but optimization only goes so far. The real impact of AI may not be its ability to improve efficiency, but its ability to subject people to constant scrutiny. The political implications of IT are still not worked out, and potentially pretty worrying.


"..All this suggests its game over very soon. Far sooner.."

I agree with what you're saying here, mjgeddes, about the finish line possibly coming up soon, and about the importance of Wolfram Alpha or other knowledge engines we might not have heard about yet.

Although affordable voice recognition that recognizes multiple voices out of the box is not widely available yet, the voice recognition that already exists is pretty powerful. Some training with one's own voice is necessary, but after training, you can select from several third-party modules that allow you to create user-defined commands. One of these packages has some built-in commands for searching Wolfram Alpha.

There are all sorts of automation tools out there. And a myriad of different ways to search for specific knowledge. And there are specific types of AI modules, some commercial but many open source or free.

It could well be that, yesterday or 3 minutes ago, or 5 minutes from now or next month or next year, someone will put it all together.

It's very easy to imagine some individual or a collaboration playing around with voice recognition, Wolfram Alpha, a couple of AI modules, and a language parser - and effectively creating an emergent superintelligence. In fact, so many people are working on this, even without knowing it, that it could easily happen in several different areas around the globe.
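Purely as a hypothetical illustration of what that kind of glue might look like, here is a minimal sketch that listens for a spoken question and forwards it to Wolfram Alpha. It assumes the third-party Python packages `SpeechRecognition` and `wolframalpha`, a working microphone, and a Wolfram Alpha developer App ID; none of these specifics come from the original comment.

```python
# Hypothetical glue code: speech -> text -> Wolfram Alpha query.
# Assumes the third-party `SpeechRecognition` and `wolframalpha` packages
# and a Wolfram Alpha App ID (placeholder below).
import speech_recognition as sr
import wolframalpha

APP_ID = "YOUR-WOLFRAM-ALPHA-APP-ID"  # placeholder, not a real ID

def ask_by_voice() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        print("Ask a question...")
        audio = recognizer.listen(source)         # capture speech from the mic
    question = recognizer.recognize_google(audio)  # transcribe speech to text
    print(f"Heard: {question}")

    client = wolframalpha.Client(APP_ID)
    result = client.query(question)                # send the text to Wolfram Alpha
    return next(result.results).text               # first plaintext answer pod

if __name__ == "__main__":
    print(ask_by_voice())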

And the great news is that, because of pride or profit or a spirit of sharing or other reasons, there are more people who are willing to immediately share their findings than there are people who want to keep it all to themselves. It could all happen in the next minute.

Regarding Wolfram Alpha, one of its stated purposes is to take the place of experts and make expert-level knowledge available to non-experts. We don't know if they will actually accomplish this in a large number of areas anytime soon, but it's possible. Then we won't have to trust human experts, who come with their own self-interested but understandable baggage.

Good points, mjgeddes. More good reasons to hope.
