9 Comments

I'm with Andrew Luscombe that the interpretation of Mandelbaum's third argument as presented in this blog post is inaccurate.

Here is Mandelbaum's argument as I understand it (btw I am a former PhD student of his and took a class he taught with Ned Block on cognitive penetration):

1. If phenomenal overflow exists, this requires that phenomenal properties are not grounded only in the connectome.
2. There are good empirical reasons for thinking that phenomenal overflow exists.
3. So phenomenal properties are not grounded only in the connectome.
4. Motivation-relevant valenced psychological properties (which are functionally defined attraction and avoidance dispositions, such as are typically used to motivate e.g. human workers) require appropriate phenomenal properties.
5. So a whole brain emulation of only the connectome would not instantiate motivation-relevant valenced psychological properties.
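To make the structure explicit: steps 1-3 are a straightforward modus ponens, and getting from 3 and 4 to 5 requires a bridge premise, namely that a connectome-only emulation would lack the phenomenal properties that are not grounded in the connectome. Here is a minimal propositional sketch in Lean; the proposition names and the bridge premise are my reading, not Mandelbaum's wording:

```lean
-- Propositional sketch of the argument's shape (names and the bridge premise are mine):
--   O : phenomenal overflow exists
--   G : phenomenal properties are grounded only in the connectome
--   P : a connectome-only emulation instantiates the relevant phenomenal properties
--   V : a connectome-only emulation instantiates motivation-relevant valenced properties
example (O G P V : Prop)
    (p1 : O → ¬G)       -- 1. overflow requires non-connectome grounding
    (p2 : O)            -- 2. overflow exists (so 3, ¬G, follows)
    (bridge : ¬G → ¬P)  -- implicit: the connectome-only duplicate lacks what the connectome doesn't ground
    (p4 : V → P)        -- 4. valence requires appropriate phenomenal properties
    : ¬V :=             -- 5. so the duplicate lacks motivation-relevant valence
  fun hV => bridge (p1 p2) (p4 hV)
```

On this reading the argument is valid, so the dispute should be over the truth of premises 1, 2, and 4 (and the bridge), not over its form.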

Your post ignores his discussion of phenomenal overflow, which I take to be the central argument in his paper. He absolutely does not claim that the problem with a connectome-only upload is that it would be functionally indistinguishable from a human, though phenomenally different. For instance, he rhetorically asks, "could your connectome duplicate be you even if, for example, it was a sickly sloth while you are a dynamo bursting at the seams with energy and ideas?" Clearly this is a behavioral difference that he is interested in.

Btw I happen to think that this argument likely fails, but you are addressing a straw man. Perhaps instead of patting yourselves on the back for identifying the supposedly bad peer-review practices of philosophy journals (which are actually quite rigorous), you might try a bit harder to apply the principle of charity when reading the paper under consideration.


Ha ha, "pissing phenomenal states" ... I don't understand how autocorrect got there. Maybe I originally typed "missing" by accident rather than "having"?


Why does WBE imply superintelligence?


It isn't, at least not a proper one. It might be considered partial or something, but I read them as saying it isn't. Where do they say it is?

It's like a car without steering isn't strictly a car, because all cars need steering. But people will still call it a car (perhaps adding a qualifier like "a faulty car" or "a steeringless car") because there is no other convenient term for the thing.


Agreed in all respects. I think we have to conclude, if we haven't already, that peer review in philosophy is not tracking the logical content of claims. And we should seek some treatment for our Gell-Mann Amnesia.


If it doesn't have the same behavior in the same situations, how is it an emulation?


Maybe their third point is that WBEs might not be possible because the motivations, beliefs, desires, and/or attitudes could be missing despite all the other information-processing abilities being present.

I don't see that they said WBEs will behave exactly like humans but would be useless somehow despite behaving the same.


“Today, employers never know, nor need to know, if their employees have real or fake beliefs or motivations. Teachers never need to know if their students have real or fake beliefs. Armies never know if their soldiers have real or fake motivations. And so on.” I get your point, which is effective against Mandelbaum; but to me it suggests (à la ordinary-language philosophy) that both of you are misconceiving consciousness. It matters to us whether, e.g., other people are conscious during our social interactions, but we easily determine that, for the most part, they *are*. Philosophers can construct amusing skeptical arguments about “other minds” (as about “the external world”), but by ordinary standards one is rightly confident of being often in the presence of others who are conscious (as of being surrounded by physical objects). The hard thing is to explain what consciousness is, such that it should be important to us: the concept seems to elude easy reduction to simpler concepts.


How the fuck did that get through peer review? Unless I'm missing something, the final paragraph you quote fallaciously equates pissing phenomenal states with whether an agent acts like they have such states.

I'd literally give an undergrad who made this mistake in their first philosophy class a bad grade for that kind of confusion. Maybe decades ago it might have been understandable, but anyone doing philosophy of phenomenal consciousness should have read Chalmers's work on the hard problem.

Sure, maybe the em doesn't really have intentionality if it lacks phenomenal consciousness. But on the assumption that the bio theory is *only* true for phenomenal consciousness (so all the non-bio implementation lacks is the phenomenal aspect), the em will obviously behave as if it had intentionality as far as other agents are concerned.

I'm a critic of your em theory, but this is the kind of mistake peer review should have caught (or I've really misunderstood them in a very basic way).
