
Regarding the economic question, we have some experience with this, but I'm not sure what it tells us. Around 1900, when machines took over from human muscles, did labor costs go down? In the 1980s, as computers took over clerical work, did clerical pay go down? It doesn't seem that wages dropped particularly in either case. Meanwhile, there is presumably a long-term trend that all the production gets spread among all the humans. If production goes up because of robots, then the increased production has to be split between (1) labor, (2) capital, and (3) robots?
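As a rough sketch of that accounting (my own illustration, not anything from the podcast): write output as Cobb-Douglas in labor, ordinary capital, and robot capital,

Y = A \, L^{\alpha} K^{\beta} R^{1-\alpha-\beta} % illustrative only: assumes Cobb-Douglas technology and competitive factor markets

so that labor's share of income is \alpha, capital's is \beta, and robot owners' share is 1-\alpha-\beta. Automation that raises A or R raises total output Y, and the question above is really whether it also shrinks \alpha, or whether labor's share stays roughly where it is.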

I suspect it is politically unstable for the excess production from robots to go to capital without some going to labor. Capital formation and possessions are a function of the laws, and the laws can change. Obviously our legal systems have always allowed some share of production to go to capital, enough to allow capital to be accumulated, but the broad sweep is that capital tends not to be accumulated by families for more than a few generations. If robots are THAT productive and are still not people, I believe the political solution will be to allocate the excess production between their owners and the rest of the population. Heck, maybe they will be publicly owned, like the beaches in California, the monopoly on lotteries, or the road system.

But it doesn't seem the public record shows that productive innovations which displace labor permanently reduce the wages to labor. Rather the opposite: wages to labor have continued to rise even as productive innovations have, in a micro sense, competed labor away.


Russ Roberts's positions are constrained by his autistic support of unilateral free trade and open borders. In the absence of actual evidence, he has written novels and fables to support those positions, which signal his status and loyalty to the old high priesthood of academic economics but also, since a writer's books are his babies, make it very difficult for him to support any argument that would betray them.


Tyler has made clear in previous arguments with Hanson on issues such as cryonics that he knows diddly squat about science and technology. Tyler may be a fine economist. But he is completely ignorant about science and technology.

Guys,

Penrose's arguments about quantum consciousness are a complete red herring with regards to the creation of A.I. All Penrose is saying is that the human brain is a room-temperature quantum computer. His arguments are nothing more than a proof of principle for the creation of room-temperature quantum computers.

Of course we can still make real A.I. if Penrose is correct. The only difference is that they would be based on quantum computers rather than digital computers. The argument that Penrose's theory precludes the creation of real A.I., even based on quantum computers, is complete hogwash.


Given that there is currently no AGI, no EMs, nor any extant research program showing promise of leading to either one, it is interesting to read all the frantic emotionalism on this topic from commentators here and on LW. "won’t you feel silly if you and your grandchildren are ground up into robotic fuel paste in 30 years" definitely deserves some sort of prize for true-believer thinking, right up there with "rapture" fairy-tale movies and the like.

A nice demonstration that, in the end, religious modes of thinking are alive and well even among those who purport to have abandoned them.


Essentially, Penrose claimed that mathematicians can do what no machine can do, by using their mysterious and infallible mathematical intuition. Of course, no evidence was provided that humans could reliably perform this feat either. His whole argument was without merit: "Penrose seems to make a fairly elementary error right at the beginning" - Dennett.


Let's ignore Penrose's proposed physics solutions to consciousness (quantum gravity, non-computability), for which there's no evidence. I think we can rightly dismiss those theories. Instead let's focus on the part I think he got right, the Gödel puzzle:

He's saying: let's take the mathematical community as a whole (which includes every single intelligent entity working on mathematics). Treat all the individual algorithms of the brains of those searching for mathematical truth as a single combined algorithm. Now take the Gödel number of that single gigantic algorithm. That's a mechanical procedure. But Penrose is pointing out that there's no non-sentient (knowable, mechanical) procedure that can possibly understand why the Gödel statement of that algorithm is true. I think he's absolutely right about this.
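To pin down the Gödel step (this is just the standard statement of the first incompleteness theorem, not anything special to Penrose): for any consistent, recursively axiomatizable system F strong enough for arithmetic, one can construct a sentence G_F that asserts its own unprovability in F,

G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) % standard Gödel construction; F assumed consistent and recursively axiomatizable

and if F is consistent, then F cannot prove G_F, yet G_F is true. Penrose's move is to let F stand for the combined algorithm of all human mathematicians and ask how we could then see that G_F is true; seeing that amounts to being confident that F is consistent.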

The clear conclusion is that Bayesian inference (non-sentient probability shuffling) is incapable of assessing the truth or falsity of mathematical axioms, and therefore cannot fully capture the intuitive components of general intelligence. The only way to escape this conclusion is to claim that mathematical intuition is random (that the axioms are just selected at random), but that is nonsense.

To sum up: sentient reasoning systems must be more powerful than any Bayesian reasoning system.


The two biggest problems I see on Robin's end are the ideas that uploads will get anywhere in the face of engineered machine intelligence, and that intelligences will stay small and poor. IMHO, it is much more realistic to think about engineered intelligences, and about planetary-scale, super-rich creatures.


If it wasn't obvious, I'm not arguing for Penrose's specific approach which I agree is almost certainly wrong.

The general idea that the physics of consciousness is weird and not yet understood, though, is worth taking seriously.


Not if life is rare - and we are locally first.


Vassar. Sorry!


@Michael Vasser

You don’t need to prove it wrong to say that it’s not motivated by any significant evidence and that there are strong reasons for expecting people to be biased towards making such claims regardless of their truth.

It seems pretty clear that there's something weird about consciousness. If it's not physics, it's ghosts and goblins, which is even less appealing.

Strong-AI arguments don't explain well why my consciousness appears where it does and has the borders it does in a tick-tock universe.

Penrose is not a crank, but I certainly yield that there's very little evidence for any particular physical explanation of consciousness at this point. Something is going on, though, and we don't know what it is. It doesn't seem like a big leap to say we don't know whether or not that knowledge might impose limits.


Do you really think [Penrose] would make an argument based on such a stupid misunderstanding as the one you mention?

That book *was* heavily based on serious misunderstandings.


One factor has diminishing returns when it is used along with something else that isn't growing as quickly. A second whole earth orbiting the sun wouldn't be the poorer for this earth's existence. For that matter, doubling the population of earth at this point wouldn't necessarily make us poorer (the gain in the public good of knowledge might outweigh having less space and capital per person). If you want to persuade Roberts, you could describe the complementary factor of production that we will have less of per capita in the future.
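A minimal sketch of that complementarity point (my own illustration of the standard textbook setup, not something Roberts said): suppose

Y = A \, L^{\beta} T^{1-\beta}, \qquad \frac{Y}{L} = A \left(\frac{T}{L}\right)^{1-\beta} % T is the fixed complementary factor (land/space); A is non-rival knowledge

With T held fixed, doubling L lowers per-capita output only if A stays put; if the non-rival stock of knowledge A grows with the number of people producing ideas, output per person can rise even as T/L falls. The persuasive question for Roberts is then which fixed complementary factor, if any, a world full of ems or robots would actually be starved of.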


This is exactly what I thought.

I don't know if I'll be able to listen to the whole podcast, but, Robin, did you ask him to describe what he was imagining? And why you need to be religious to be able to imagine it?


I think your discussion about reductionism could perhaps have been more generous to the other side. There is still serious debate in philosophical circles about the possibility of dualism. Current thinkers on the issue, for instance, should at least be aware of Chalmers's zombie argument for dualism, even if they don't agree with it:

http://en.wikipedia.org/wik...

In general the current discussion in philosophy of mind is very rich and I think non-philosophers would benefit from some dabbling. There are lots of different theories that involve distinctions of greater subtlety that might allow you guys to occupy common ground (non-reductive physicalism, property-dualism). In fact, when listening to your discussion I wasn't entirely sure if you were arguing for a monist, physicalistic reductionism. If that's your view, then you'll find there are lots and lots of philosophers that will disagree with you.

But often you defer to a functionalist view - which says that mental states are multiply realisable. This is distinct from the ontological claim about what stuff exists - and is presumably compatible with modified versions of both reductionism and dualism.

In any case, the argument overall, as I understood it, did not depend on anything other than some kind of functional view. I'd go somewhat further, though, and suggest that it's not entirely clear that we'd have to micro-analyse the brain and build something roughly similar. However we do it, a functional view suggests that all we have to do is get the outputs, relative to inputs, roughly right. Specialised research is being carried out in just about every field imaginable to automate various isolated processes. To get the kind of machine we want may simply involve a kind of unification of all these specialised domains. The end result may structurally be quite unlike a human brain. As long as it outputs like one, that's all we really need.
