

J. Phil. Critique of Ems
The third most prestigious journal in philosophy, the Journal of Philosophy, will publish this paper by Eric Mandelbaum:
Everything and More: The Prospects of Whole Brain Emulation. Whole Brain Emulation (WBE) … optimism may be misplaced. … [It] is, at best, no more compelling than any of the other far-flung routes to achieving superintelligence. Similarly skeptical conclusions are found regarding immortality.
Now as the paper never says anything about other routes to superintelligence, both of these claims seem pretty vague. However, on reading the paper I think I can identify three key, less vague claims. I’ll now argue that two of these claims are true but long known, while the third is false.
The paper’s first key claim is that WBEs (or “uploads” or “ems”) can’t make us immortal if they are not conscious, and no one knows much about what things are conscious (assuming creatures could have exactly the same behavior yet be conscious or not):
Biological Theory posits that the coding and interchange of information between electrical and chemical formats gives rise to consciousness, and that the specific neural hardware we use is essential to phenomenal consciousness. … The explanatory gap is the thesis that we do not have any idea of how a subjective state (such as seeing red, or hearing middle C on a piano) could be identical to an objective state (such as having a certain pattern of neuronal activation). … it is a theory about our current epistemic position, one which claims that at this moment we have no clue how psychophysical identities could be true. The idea is that we do not yet possess the concepts to bridge this gap (although one day we may).
To which I respond: yes of course, we’ve long known this. The only data we seem to have about consciousness is the fact that many of us feel compelled to believe that some part of us is at the current moment conscious, even as we each feel unsure re the status of everyone and everything else, of ourselves in the past or future, or even of other parts of ourselves at this moment. So even though by assumption a WBE would also feel compelled to believe that part of it was conscious, the rest of us would feel unsure of that. And until we find some other data (and we can’t even imagine what such data could be), this is how the situation must remain. But we’ve long known this.
The paper’s second claim is that creating WBE gets harder the more brain-cell detail we need to scan and emulate:
We have good evidence that some sub-connectomic properties do matter for psychology. … increases in (e.g.,) testosterone plainly do affect a wide range of behavior, … the idea that neural properties are not the functional realizers of the mind is, at the very least, very surprising. … subneural functionalism is also rather destructive to the idea that WBE is the best chance to achieve Superintelligence or immortality. … the more low-level the functional properties are, the more we will need to know (and the more information we would need to upload), meaning we would be much further away from achieving uploading. … If more than the connectome matters, if instead lower-level, finer grained details, such as ones that involve neurochemical elements, or other substances that correspond to our “hardware” are germane, then the road to emulation is much less clear.
Every discussion of WBE that I’ve ever seen (and I’ve seen them for over three decades) considers how much within-neuron structure will need to be scanned and emulated, and all such discussions have accepted that more detail requires more work and delays the likely arrival date of WBE. Most everyone has also expected that the topology of neuron connections alone would be insufficient. So I don’t see why that claim is at all “surprising”.
The paper’s third claim is that an unconscious WBE would be useless as a superintelligence:
As human capital is the central driver of economic growth, having large amounts of readily available human-level intelligences will make for enormous technological and societal enhancement. … Say the Biological Theory is only true for phenomenal consciousness. Could the rest of cognition then be captured by the connectome, in which case WBE could still lead to superintelligence? The question turns, in part, on whether there can be intentionality without phenomenology. … But could there also [be] motivation, or desire, without any phenomenology? … If they have no motivations, then they will not do anything on their own. … If uploads lacked beliefs and desires, then they would just be giant calculators that we neither know how to control nor understand the mechanics of. … if uploads don’t have the normal attitudes, we will have no idea how [to] motivate them to do anything … One may argue that cars and calculators do things without being motivated, but they do so at the behest of intelligent, motivated designers and users.
By definition, a WBE is a device with the same input-output behavior as a source human brain, and thus can be hooked up to artificial eyes, hands, etc., to seem to act just as its source human would in the same situation. So it seems that employers could hire such a WBE to do jobs, just as they would have done with the source human. They could, if they wanted, select, train, instruct, incentivize, and monitor such WBE employees in exactly the same ways that they would have done with the source human as an employee. (They also have new options, which I discuss in my book Age of Em.)
This paper complains, however, that such WBE employees suffer from a fatal flaw: even though they seem to do their jobs via having beliefs and motivations, they would actually only be using fake-beliefs and fake-motivations. Yet I fail to see how this prevents society from using WBE as effectively as we use other humans today, as a substitute for human capital to drive economic growth. Today, employers never know, nor need to know, whether their employees have real or fake beliefs or motivations. Teachers never need to know whether their students have real or fake beliefs. Armies never know whether their soldiers have real or fake motivations. And so on. By definition, WBE would claim to feel, and whether they really feel seems irrelevant to whether they can function in society.
QED.
I'm with Andrew Luscombe that the interpretation of Mandelbaum's third argument as presented in this blog post is inaccurate.
Here is Mandelbaum's argument as I understand it (btw, I'm a former PhD student of his; I took a class he co-taught with Ned Block on cognitive penetration):
1. If phenomenal overflow exists, this requires that phenomenal properties are not grounded only in the connectome.
2. There are good empirical reasons for thinking that phenomenal overflow exists.
3. So phenomenal properties are not grounded only in the connectome.
4. Motivation-relevant valenced psychological properties (which are functionally-defined attraction and avoidance dispositions, such as are typically used to motivate e.g. human workers) require appropriate phenomenal properties.
5. So a whole brain emulation of only the connectome would not instantiate motivation-relevant valenced psychological properties.
Your post ignores his discussion of phenomenal overflow, which I take to be the central argument in his paper. He absolutely does not claim that the problem with a connectome-only upload is that it would be functionally indistinguishable from a human, though phenomenally different. For instance, when he rhetorically asks "could your connectome duplicate be you even if, for example, it was a sickly sloth while you are a dynamo bursting at the seams with energy and ideas?" Clearly this is a behavioral difference that he is interested in.
Btw I happen to think that this argument likely fails, but you are addressing a straw man. Perhaps instead of patting yourselves on the back for identifying the supposedly-bad peer review practices of philosophy journals (which are actually quite rigorous), you might instead try a bit harder to apply the principle of charity when reading the paper under consideration.