The way I see it, all scientific discussion aside, the problem is robots don't buy stuff. They may need some software upgrades, which corporations could provide, and so a few people developing software for robots will still have jobs (unless you are envisioning a self-repairing and self-improving robot), but replacing human workers with robots is a losing deal in the long run.

We already have robots building cars, with probably a tenth of the workforce it took to build them in the past. So all those former auto workers who now have low-paying jobs greeting people at WalMart are barely making ends meet. So I ask you, when the real jobs that pay a living wage have been replaced by robots, who will buy the stuff they make? You can already see the result of the combination of computerization and off-shoring of jobs in the last two decades.

I could see a positive in bringing back some jobs to the U.S. by adding robots to other kinds of assembly-line work, but in the end we have to find ways to provide good jobs to men and women with families to raise, or the nation as a whole will continue in what is now a continuous decline in the standard of living for millions of people. And, surprise, surprise, profits are down in all sectors that rely on consumers.

Can anyone tell me how you get around this? I keep feeling that economists just ignore this reality. Unless the answer is the magical economy of the "developing world." Yes, China and India have enormous populations, and as they grow in wealth they will become consumers, but they will also become big manufacturers. This is the road to ruin if this is the plan.

"Google gives good answers to plain-language questions."

It doesn't. It just finds web pages that match the words in your query (accounting for synonyms and spelling errors) and presents them ranked according to an importance metric that happens to seem sensible to us.

It is one fundamentally innovative idea (the PageRank metric), some smaller ideas, and lots and lots of excellent engineering and fine-tuning.
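
For what it's worth, here is a minimal sketch of the power-iteration idea behind PageRank on a hypothetical toy link graph; the function name, the graph, the damping factor, and the iteration count are all illustrative, not Google's actual implementation or values.

# Minimal power-iteration sketch of the PageRank idea on a hypothetical toy
# link graph. Damping factor and iteration count are illustrative defaults.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))  # "c", with the most incoming links, ends up ranked highest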

"Our economic growth rates are limited by the rate at which we can grow labor."

How does that claim fit with the observation that all developed countries have significant unemployment?

Mmm. I think I wasn't specific enough. I didn't mean to include the Meso-Amerindians.

No humans need to be killed to preserve chimpanzee habitat; humans would just have to forgo a tiny fraction of their luxuries (or learn to use protection). That it's wrong to oppress a minority to give a majority more luxuries (as opposed to spreading the pain/work) was established when slavery was outlawed. (This applies to AIs and chimpanzees, maybe not to slugs, but I never mentioned those.)

Here's a question: who's to say that's actually the right thing? Are you implying that preserving chimpanzee habitat (or slug habitat, or whatever) trumps human prosperity? This is an issue Nicholas Agar brings up. Under many currently fashionable theories of morality, AIs *might be entirely justified* in killing/oppressing humans.

I wanted to add today that I may not have given enough consideration to Mr. Hanson's interesting concept of a new growth mode, which seems plausible on its face. But even if that happens, I would strongly expect some sort of transitional period during which the economic doubling time is intermediate. The idea of AI suddenly "waking up" without warning still seems groundless.

I take your point, but I still find it intuitively unlikely that a full simulation could easily be "dumb" in the sense of not requiring (or at least producing, before you reached the point of a full simulation) an abstract understanding of the mental modules' functions.

Possible, but unlikely. The amount of storage and processing power required would be stupendously large, especially when remembering how efficient even the tiny brains of fish and insects are at navigating and learning.
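
For a rough sense of that scale, here is a back-of-the-envelope sketch in Python; the neuron and synapse counts and the bytes-per-synapse figure are commonly cited ballpark assumptions I'm adding for illustration, not numbers from this thread.

# Rough back-of-the-envelope sketch of the storage scale for a full brain
# simulation. The figures below are commonly cited ballparks, not precise data.
NEURONS = 1e11            # roughly 100 billion neurons in a human brain (approximate)
SYNAPSES = 1e14           # roughly 100 trillion synapses (approximate)
BYTES_PER_SYNAPSE = 4     # assume a single 32-bit weight per synapse (optimistic)

storage_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"Synaptic weights alone: ~{storage_bytes / 1e12:.0f} TB")  # ~400 TB, before any dynamics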

Wouldn't copying a human brain almost inevitably give us enough information to build A.I.? It seems implausible that we could run a human intelligence on a computer, and yet still have no idea how to improve on it.

What evidence do we have that a child's mind uses a limited set of algorithms? Maybe the human brain just runs an even bigger, messier patchwork, with more hardware behind it.

"Maybe it seems slow right now because we're just mopping up the last details."

I can assure you that's not the case. Current solutions to problems are patchworks of different algorithms; we have nothing resembling a child's mind: a self-aware system that can learn pretty much anything using a limited set of algorithms.

With enough autonomous robots, those could theoretically also be non-scarce.

I would assume an em civilisation would colonise other planets.

Then again, it's pretty mind-boggling we haven't done that already, hard though it is for meatbags.

"You can't just read out the software of the mind, there is no hard disk equivalent storing all the algorithms and connections between them. The algorithms more or less depend on the hardware configuration, like an old fashion circuit board that doesn't run software but simply carries out its function because all the physical electronics and wires are set up in the right way."

That's the idea - learn enough neuroscience to predict the behaviour of arbitrary neural structures, then scan a biological brain and simulate it. Less efficient than abstracting the algorithms, but probably easier.
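
As a toy illustration of that "simulate the scanned structure" approach (not a real emulation pipeline), here is a sketch in which the connectivity is supplied as data, the way a scan would supply it, and a generic leaky integrate-and-fire update simply runs over it; every constant and weight here is made up for illustration.

# Toy illustration of simulating connectivity given as data (as a scan would
# provide it) rather than as hand-written algorithms. A tiny leaky
# integrate-and-fire network; all numbers are made up for illustration.
import random

N = 5
# "Scanned" connectivity: weights[i][j] is the synapse strength from j to i.
weights = [[random.uniform(0.0, 0.5) if i != j else 0.0 for j in range(N)]
           for i in range(N)]
v = [0.0] * N            # membrane potentials
spiking = [False] * N
THRESHOLD, LEAK, STEPS = 1.0, 0.9, 100

for step in range(STEPS):
    drive = [random.uniform(0.0, 0.2) for _ in range(N)]   # external input
    new_spiking = []
    for i in range(N):
        # Leak the old potential, add external drive and input from spiking neighbours.
        v[i] = LEAK * v[i] + drive[i] + sum(
            weights[i][j] for j in range(N) if spiking[j])
        fired = v[i] >= THRESHOLD
        if fired:
            v[i] = 0.0   # reset after a spike
        new_spiking.append(fired)
    spiking = new_spiking
    print(step, ["*" if s else "." for s in spiking])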

"I do not think that AI being developed through emulating human brains is very likely. That is like developing flight through emulating birds."

An "em" isn't an AI programmed based on our understanding of neuroscience rather than formal logic. It's a simulation of an existing brain that we scanned.
