26 Comments

When Is “Soon”?

http://www.wowwiki.com/Soon


Neither Kurzweil nor Moravec is a neuroscientist. IIUC, the Blue Brain Project people estimate that 1 exaflop will be required to emulate a human brain in real time at intracellular resolution: http://bluebrain.epfl.ch/pa...

Maybe that resolution is excessive for behavioral equivalence, but it provides an order-of-magnitude estimate.
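
For scale, here's a quick comparison of that figure against the behavioral-level estimates quoted elsewhere in this thread (a rough sketch; the low/high figures are the Moravec and Kurzweil numbers cited below, not mine):

```python
import math

blue_brain = 1e18       # flop/s, intracellular-resolution emulation (Blue Brain estimate)
behavioral_low = 1e14   # ops/s, Moravec's lower estimate quoted in this thread
behavioral_high = 2e16  # ops/s, Kurzweil's estimate quoted in this thread

print(math.log10(blue_brain / behavioral_high))  # ~1.7 orders of magnitude apart
print(math.log10(blue_brain / behavioral_low))   # 4.0 orders of magnitude apart
```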

It's not obvious that Koomey's law can last for 30-40 years, and even if it does, it wouldn't necessarily imply the singularitarian scenarios envisioned by Hanson and Chalmers.


"Where do you get the estimate of the computational resources of the human brain?"

I made it up. This is the Internet.

Seriously, Ray Kurzweil has estimated the human brain at 20 quadrillion instructions per second. And I *thought* Hans Moravec estimated it at 500 trillion instructions per second, but he may have estimated it at 100 trillion.

So... somewhere between 100 trillion and 20 quadrillion instructions per second.
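
For what it's worth, that range is narrower than it sounds (a trivial check):

```python
low, high = 100e12, 20e15  # instructions/s: Moravec's low figure and Kurzweil's figure
print(high / low)          # 200.0 -- the estimates disagree by only a factor of 200
```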

"And how long can Koomey's law hold? Maybe it will hold for 22 years, but 100 years seems a wild extrapolation."

Even if it "only" lasts for 30-40 more years, we'll be well into Singularity territory.


Where do you get the estimate of the computational resources of the human brain?

And how long can Koomey's law hold? Maybe it will hold for 22 years, but 100 years seems a wild extrapolation. Even today, energy expenditure is becoming a larger and larger share of data centers' operating costs: http://en.wikipedia.org/wik...


> First, I would not classify this type of reasoning as thinking about "existential problems" on a social level. I should have made that clarification. There have been and there will be fringe movements that make this mistake. However, if we take mainstream positions (and be certain that debate over human-AI relations will be pretty mainstream in 20 years)

More fringe movements than I'd care to count thought they'd be mainstream 20 years later.


"Computers require a lot of energy and other physical resources."

As I pointed out on my blog, the amount of power required for a given number of instructions per second is coming down by a factor of 100 every decade. Supercomputers that now approach the human brain in calculations per second (500 teraflops to 20 petaflops) consume on the order of 50,000 times as much power as a human brain.

But if the factor of 100 per decade reduction continues, they will require the same amount of power as a human brain within 22 years.
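
Here's a quick back-of-the-envelope check of that figure (the 50,000x gap and 100x-per-decade rate are the numbers from this comment, not measurements):

```python
import math

power_gap = 50_000     # supercomputer power draw relative to a human brain (figure above)
gain_per_decade = 100  # assumed Koomey's-law-style efficiency improvement per decade

decades_to_parity = math.log(power_gap, gain_per_decade)
print(f"{decades_to_parity * 10:.0f} years")  # ~23 years, consistent with the ~22-year claim
```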


What do you consider as the AI "source code" exactly?

You could have a very simple and clean general-purpose algorithm instantiated with an extraordinarily complex model with trillions of parameters. How could you improve something like that any more efficiently than you could improve an emulated brain?
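
A toy sketch of that split, with a generic gradient step standing in for the "simple and clean" algorithm (the parameter count and gradient are made up purely for illustration):

```python
import numpy as np

# The entire "source code": a few lines of generic gradient descent.
def step(params, grad_fn, lr=1e-2):
    return params - lr * grad_fn(params)

# The actual complexity lives here, in an opaque blob of learned parameters.
params = np.zeros(10**6)        # stand-in for "trillions" of parameters
grad_fn = lambda p: 2 * p       # hypothetical loss gradient, illustrative only
params = step(params, grad_fn)  # improving the *code* barely touches the system
```

The code is trivially inspectable, but the capability sits in the parameters, which are no more legible than a connectome.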


Computers require a lot of energy and other physical resources.

The size (GDP) of the economy of a society of digital agents (AIs or brain emulations) would be limited by the amount of available computational power. Once you cap energy and physical resources, the only way to increase computational power is via hardware efficiency improvements. Efficiency improvements on the order of 100% per year are unreasonable, and even a more modest 2% per year for 100 years might be over-optimistic.
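
For concreteness, here is what those two rates compound to over a century (simple annual compounding):

```python
print(2.00 ** 100)  # 100%/year for 100 years: ~1.3e30x, clearly unphysical
print(1.02 ** 100)  # 2%/year for 100 years: ~7.2x total efficiency gain
```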


OK, then why can't future transitions have a similar degree of similarity to past transitions?


I don't believe that Hanson believes emulations will foom. ("Foom" refers to bootstrapping to super-intelligence). Hand-coded AIs would have much cleaner source code. A hand-coded AGI would have to be built from well-understood principles of general intelligence, whereas the whole point of whole-brain emulation is that deep knowledge of the nature of intelligence is unnecessary. 

Emulations would be monstrously complex, so making any improvements would be exceptionally hard, and the only way the emulation would be able to improve itself would be if it had deep knowledge of how intelligence works.


That article is irrelevant. I'm not sure exactly what you are imagining, but there is no reason creating a brain emulation or fooming AI would require insane levels of energy. 

Quite the opposite, actually. It's likely that AI would be the more energy-efficient option, considering that human bodies need massive amounts of energy just to survive, and that's before you get into the transportation of physical objects (including ourselves). An upload society would likely be far more energy-efficient, so energy constraints actually place more pressure on a society to adopt uploads or other AI.
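
A rough sense of the scales involved (these wattages are ballpark figures I'm assuming, not numbers from the thread):

```python
brain_w = 20          # rough power draw of a human brain, watts
body_w = 100          # rough whole-body basal metabolic rate, watts
lifestyle_w = 10_000  # rough per-capita primary energy use, industrialized country, watts

# If hardware ever reaches brain-level efficiency (per the Koomey's-law
# extrapolation elsewhere in this thread), an emulation's footprint compares
# very favorably with an embodied, commuting human:
print(lifestyle_w / brain_w)  # ~500x less energy per capita
```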

Of course, none of this is to say that AI (through whole-brain emulation or otherwise) is actually technically feasible. (It may or may not be; I'm not an AI expert.)


> However, it still remains an open question whether the process is sustainable and for how long?

I agree these are important questions worth considering. However, I still find this scenario more plausible than the AI singleton one. 


I think that much of the worst conflict we have already seen is *because* of foreseeable change (think Lebensraum or the Arab-Israeli conflict). If existing elites could observe thousand-year changes in one generation, I predict vast and wrenching conflicts. Variance of responses will go up, and risks of civilization-destroying changes increase.


If I understand correctly, both Hanson and Chalmers/Yudkowsky believe in AI singularitarian "foom" scenarios. Their disagreement seems to be about the timescale (doubling times of years vs. days) and the origin of the AIs (brain emulation vs. fully artificial).

I think that, while Hanson's scenario is less extreme and therefore more probable than Chalmers/Yudkowsky's scenario, it is still improbable.


"Since an awful lot of processes will speed up over the next few centuries, it is relative rates of speedup that will matter the most."

How do you motivate that, especially in light of this type of argument: http://physics.ucsd.edu/do-...

It seems to me that you, Yudkowsky, Chalmers, Kurzweil et al. are all arguing for slightly different versions of a generally improbable (or at least not well-argued) scenario.


I care about what happens to most of the universe, which looks like it will be dominated by decisions of future generations. Legal mechanisms for controlling that sort of thing look pretty weak, and don't seem to have played much of a role historically (as you often note). 

But even re killing old folks, while I think respect for law is an important factor, a more natural factor is that if we kill old folks today we will expect future generations to behave similarly (for the obvious reasons, both causal and acausal). The strength of this effect through a transition depends on (several different notions of) the similarity between that transition and anticipated future transitions. 
