Can you explain why AI researcher-years isn't the right measure? If it will take at most 4 centuries for a relatively small number of biological humans to create AGI, how could it take thousands of subjective years when it's possible to make many copies of the best, most productive AI researchers? Plus, wouldn't the availability of brain emulations to study help speed up AI progress?


While I don't agree that AI researcher-years is the relevant measure, I do agree that AGI comes "soon" after ems if "soon" means a year or two of objective clock time. But that can be thousands of subjective years to typical ems. So whole civilizations can come and go in that time - LOTS can happen with ems before AGI.
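
To spell out the arithmetic - a minimal sketch in Python, where the ~1,000x em speedup is an illustrative assumption, not a figure from the comment:

```python
# Convert objective clock time to subjective em time.
speedup = 1_000          # assumed subjective years per objective year for a fast em
objective_years = 2      # "a year or two of objective clock time"
subjective_years = objective_years * speedup
print(f"{objective_years} objective years ~ {subjective_years:,} subjective em-years")
# 2 objective years ~ 2,000 subjective em-years: room for whole em civilizations
```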


Assuming that your estimate of up to 4 centuries to AGI is based on not much increase in the number of AI researchers (currently less than 10,000), that gives at most 4 million person-years worth of AI research left. Once em wages fall to near subsistence, this should be easily affordable for any government or large corporation, so we should expect AGI soon after. At the same time or shortly after, AGIs will be more efficient than ems and displace virtually all ems in the labor market. Do you see anything wrong with this reasoning?
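
For what it's worth, the arithmetic can be made explicit; the numbers below restate the comment's figures, except the subsistence wage, which is an illustrative guess:

```python
# Person-years of AI research implied by the comment's figures.
years_to_agi = 400        # "at most 4 centuries"
researchers = 10_000      # "currently less than 10,000"
person_years = years_to_agi * researchers
print(f"{person_years:,} person-years of AI research left")    # 4,000,000

# At near-subsistence em wages (the $1,000/subjective-year figure is a guess),
# buying all of that research would cost on the order of a few billion dollars.
subsistence_wage = 1_000  # hypothetical dollars per subjective researcher-year
print(f"~${person_years * subsistence_wage / 1e9:.0f}B total")  # ~$4B
```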


There's already a Long Bet on a computer passing the Turing Test by 2029:

http://longbets.org/1/

If Ray Kurzweil wins that bet, I would consider that to be "...a computer that can carry on a conversation that is virtually indistinguishable from a conversation with a human."

But I've never really liked the Turing Test as described in that bet. Specifically, I don't like the aspect that the computer has to pretend to be a human. A computer has no human experience, but that has nothing to do with intelligence. Blind and deaf people are not any less intelligent simply because they don't see or hear. And a computer isn't less intelligent because it doesn't feel fear, pain, envy, lust, etc. etc. That's why computers will be able to perform most human jobs before ems come to pass. (Again, I deny even the value of ems.) For example, as I wrote before, I'm very confident brick-and-mortar shopping (for household goods, groceries, home building supplies, etc. etc.) will be replaced by computers stocking and retrieving from warehouses, and delivering goods to doorsteps, long before ems populate brick-and-mortar stores.

I haven't thought extensively about how to design a suitable test, but a variation on the Turing Test would be that a computer could pretend to be human *or* honestly state it's a computer, and the human could state he/she is human, *or* pretend to be a computer, and "blinded" judges would not be able to tell which situation was occurring. The conversations would be "virtually indistinguishable."
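
A rough sketch of how such a symmetric, blinded test might be scored; the participants are modeled as simple stand-in functions purely for illustration, and nothing here comes from the Long Bet's actual rules:

```python
import random

# Each trial pairs a blinded judge with either a computer or a human; the
# respondent may answer honestly or pretend to be the other kind. The test
# "passes" if judges cannot beat chance at identifying which is which.
def run_trial(judge_guess, computer_talk, human_talk):
    is_computer = random.random() < 0.5       # who the respondent really is
    pretending = random.random() < 0.5        # claims to be the other kind?
    talk = computer_talk if is_computer else human_talk
    transcript = talk(claims_computer=is_computer ^ pretending)
    return judge_guess(transcript) == is_computer

def indistinguishable(judge_guess, computer_talk, human_talk, trials=10_000):
    hits = sum(run_trial(judge_guess, computer_talk, human_talk)
               for _ in range(trials))
    return abs(hits / trials - 0.5) < 0.02    # judge accuracy ~ chance

# With a perfect mimic, the judge's guess is effectively a coin flip.
mimic = lambda claims_computer: "indistinguishable chat"
human = lambda claims_computer: "indistinguishable chat"
coin_judge = lambda transcript: random.random() < 0.5
print(indistinguishable(coin_judge, mimic, human))   # True (chance accuracy)
```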


What do you mean by "virtually"?


"How would I motivate an em-of-me to actually want to do those tasks that I personally don't want to do?"

I'd be interested in whether the replies to your comment got your point. I think the answer to what you're getting at is straightforward (if unpleasant). The only thing that would motivate an emulation of you to do work you would prefer to avoid is the power differential between you.


I'll take that bet, depending on what you mean by 'virtually'.


Even if we knew exactly what it would take to have human-level AI (so we would know how it relates to achievements in specific ordinary AI fields), how would you objectively measure progress towards that state? Hardware progression rate? Resources necessary to improve software? Frequency of revolutionary algorithms being invented? Expected years/resources until completion of goal? How can you tell your measure or weighted combination of measures is more objective than someone else's?


No, it's not the same. I was talking about average income over time. Subsistence income while employed means starvation once unemployed, even if it's just for a short time. If the expected duration of holding a job is the equivalent of 20 years, then lifespans would be shorter than they've ever been for humanity, all while people would know that lifespans used to be longer and could be much longer (they can see the rich ems living for the equivalent of centuries).

But the thing is (as Stephen Diamond points out), many times over in different blog posts you have sort of acknowledged that life would suck for the masses, and that this dystopian em-world you describe is just the unregulated scenario - one you've chosen to describe because it's simple, and more likely than any single regulated scenario. Sometimes, though, you turn around and seem to defend it as a not-so-bad place to live, one that can even be justified as better for the masses, on the basis of ethical arguments that you yourself find plausible.


"You are of course right that in some domains narrow AI is better."

Look at IBM's Watson. Is that "narrow AI"?

It demolished two human Jeopardy champions. Now it assists oncologists. It's learning finance.

There's no way there's more than a decade or two between Watson today and a computer that can carry on a conversation that is virtually indistinguishable from a conversation with a human.


This analogy makes no sense. IMASBA's point concerned life expectancy, not relative wages.

If your interest is in forecasting rather than moralizing, Robin, why do you insist so strenuously on prettifying postemulation societies?


Because you think a near-subsistence level of existence isn't "low quality"?

In other words, you think the mass of humans living in the agricultural era had a good quality of life? (A frankly absurd proposition.) Or does subsistence in the case of emulations somehow NOT mean being pushed to the edge of endurance?

Do you even care to be understood?

[Added.] You characterize our era as "dream time" (and you've counseled us to enjoy it while it lasts). Surely this implies that the quality of life of ems will be dramatically lower than that which we have come to expect.


""A long way" means human labor will still get most world income,..."

Yes, if computers are paid 40 cents an hour, they don't have much income. All those people (the vast *majority* of workers) laid off from Walmart, McDonald's, UPS, Target, Kroger, and Home Depot, etc. will still make substantially more income than the computers that replace them. After all, the federal minimum wage is $7.25 an hour. That's about 20 times 40 cents per hour. (And most people at those companies make more than minimum wage.) That does *not* mean it's not a world-changing event when the majority of jobs at 6 of the top 10 private employers disappear. I defy you to find any time in the last 100 years when virtually all the jobs at 6 of the 10 largest private employers simply vanished inside 30 years.
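
The per-hour comparison is easy to check; the 40-cent computer "wage" is the hypothetical from the comment above:

```python
human_wage = 7.25     # federal minimum wage, dollars per hour
computer_wage = 0.40  # the hypothetical computer cost per hour above
print(f"humans earn ~{human_wage / computer_wage:.0f}x as much per hour")  # ~18x
# Per-worker income comparisons say little about the disruption when the jobs vanish.
```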

"...and at prior trends it would take centuries to change that."

As Ray Kurzweil has often pointed out, "at prior trends" doesn't work if one is thinking linearly when the growth is actually exponential.

By my calculations, personal/laptop computers added only 1 human brain equivalent (HBE) to world population in 1995, and will add only 1 million HBEs this year. That's not remarkable when human population is increasing by 70+ million per year. But in 2025 when computers are adding 1 billion HBEs, and in 2035, when computers are adding more than 1 TRILLION HBEs, it will certainly be another matter.
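
Those milestones are mutually consistent; a quick check, where the dates and HBE counts restate the comment and the growth-rate formula is standard:

```python
# Implied annual growth rate between two HBE milestones.
def annual_growth(v0, v1, years):
    return (v1 / v0) ** (1 / years)

print(annual_growth(1, 1e9, 30))     # 1995 -> 2025: ~2.0x per year
print(annual_growth(1e9, 1e12, 10))  # 2025 -> 2035: ~2.0x per year
# Both intervals imply HBEs roughly doubling every year - steady exponential
# growth, which is why linear extrapolation badly underestimates 2035.
```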


Naively projecting past rates of progress in specific ordinary AI fields has been a far better way to forecast future rates than asking people to predict general AI progress. For WBE (whole brain emulation) we have concrete trends to project forward. http://www.overcomingbias.c...
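
A minimal sketch of that kind of naive trend projection, with made-up benchmark data purely for illustration:

```python
import numpy as np

# Fit a linear trend to a (hypothetical) benchmark series and project it forward.
years = np.array([1995, 2000, 2005, 2010])
score = np.array([20.0, 33.0, 48.0, 61.0])     # made-up progress metric
slope, intercept = np.polyfit(years, score, 1)
print(f"projected 2030 score: {slope * 2030 + intercept:.0f}")
```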


That is perhaps the same sense as saying that the average income of humans today as CEOs is below subsistence, since most people can't be hired as CEOs and so earn zero CEO wages.


No, I do NOT say "individual quality of life will be very low".
