Princeton neuroscience professor Michael Graziano has a new book out today, Rethinking Consciousness: A Scientific Theory of Subjective Experience, which is mostly on his account of consciousness. On Sunday he published a
That's because increasingly complex code is increasingly hard for the coder to understand (& thus to modify without error). But em (and human) brains don't need to be understood in order to self-modify.
Our programs consist of code and data. The data may include, e.g., machine-learned models such as the weights and biases of neural nets. Data is not subject to code rot, and a program with such data can continue to adapt to changing circumstances indefinitely.
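To make the code/data distinction concrete, here is a minimal sketch (in plain Python, with illustrative names, not any particular library): the update rule is fixed "code" that is never modified, while the weights are "data" that keep adapting even when the environment drifts.

```python
import random

def predict(weights, x):
    # Fixed "code": a linear model, never modified after deployment.
    return sum(w * xi for w, xi in zip(weights, x))

def update(weights, x, target, lr=0.05):
    # Also fixed "code": one online gradient step. Only the returned
    # weights -- the "data" -- ever change.
    error = predict(weights, x) - target
    return [w - lr * error * xi for w, xi in zip(weights, x)]

random.seed(0)
weights = [0.0, 0.0]

# The environment drifts: the true relationship changes halfway through,
# yet the same unmodified update code tracks it via the weights alone.
for step in range(2000):
    true_w = [1.0, -2.0] if step < 1000 else [-3.0, 0.5]
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    target = sum(tw * xi for tw, xi in zip(true_w, x))
    weights = update(weights, x, target)

print([round(w, 2) for w in weights])  # should end up near the drifted [-3.0, 0.5]
```

No human (or program) ever inspects or rewrites the update functions here; adaptation happens entirely in the data, which is the sense in which such a program can dodge code rot.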
So is the brain more similar to our programs' data or code? If it's more like code, and learning depends on pieces of the brain observing, abstracting, and then acting to modify other parts, then I agree it might rot (or it might not). But this seems to me a far-fetched model of learning in the brain.
If there is rot, then we should be able to observe increased cross-talk between brain modules as the brain is occupied with specific tasks in older subjects. I haven't seen such a study, but it could be done.
Code today rots because humans make changes to it without investing sufficiently in maintaining its organization as the function of each part drifts. It is not inevitable, just a common incentives problem.
Code today rots quite robustly even without any degrading of the physical memory and processing devices.
I should have said "degenerative and disease" processes, not "physical" processes.
But isn't understanding (of the brain's structure) irrelevant to the difficulty of em (and human) learning? And as for the difficulty of brain modification per se with increasing age, how do you know this isn't caused in humans only by physical processes which won't be emulated in em brains?
Rotting code becomes arguably harder to understand and modify by ANYTHING, not just humans. And humans learn by self-modification.
I've read your (great!) book and your blog posts but still haven't seen a convincing argument for point 4.
The analogy to code rot doesn't work very well, because that describes code becoming hard for humans to understand and modify, but the "code" of the human mind is not understandable by humans in the first place, and modifiability is a separate issue from flexibility of behavior and the like.
This all seems pretty clear; the only thing I can't fathom is... why was your book not cited? I mean, really, WTF?
Of course they could choose to, if they owned sufficient resources to support production without ems. But I predict few will choose that.
Ems will be very stingy regarding who is allowed to get and run copies of them. In any case, surely this is a minor factor among the many social forces to create or prevent ems.
Re your point 2, if humans have nothing to offer uploads, why can't humans trade with (and be employed by) each other, rather than trading with the uploads?
Among the incentives not to upload, particularly if the uploads are widely believed to be high fidelity, would be the possibility of using the emulation for character incrimination. That is, if Bob's wife suspects that he has been cheating on her, and she had access to his em, she could conceivably pay a company to place the em into a scenario designed to indirectly test his marital fidelity. If the em chooses to cheat, that presumably says a lot about Bob's willingness to cheat, even if it can't reveal his actual behavior. It gets darker when you consider how states could use similar techniques to predict, prevent, or solve crimes (Minority Report?).
7. "...of use a rolling..." -> "...or use a rolling..."