Why wouldn't an em worker care about these? With regards to social abilities, these seem pretty essential in the workplace. Human workers today need to interact & manage relationships with colleagues, bosses, customers, partners, clients, employees, the general public, etc. Why wouldn't ems too?

Bank clerks maintained social relationships; ATMs don't. For many engineering jobs, you want a mind that can take a technical specification containing performance, efficiency, weight, etc., and design a component to those specs. It doesn't seem to me like the social relationships humans make actually help with this.

Note that when we link together our existing software, we try to do so "at arm's length", through APIs allowing specific defined operations, rather than by allowing every part of each program to interact directly with every part of the other program.

Because if they interacted directly, they would be part of the same program. There is no reason that a single "program" couldn't contain multiple human brain emulations. (Or that an API couldn't carry neural signals.)

But if you know which high-level shortcuts to take, you must have at least a moderately good understanding of the principles on which the human mind works. This rules out the pure brute-copying scenario. And you can use that understanding in your AI designs.

If you can simulate an adult, you can probably simulate from embryo. Then you can put the embryo in a virtual world maximally conducive to learning what you want it to learn. How could a human, with their normal take-time-off-to-have-fun upbringing, compete with an em that has been trained from birth, with at least a few neurobiological tweaks? (Connecting the pleasure and pain pathways up to the work, increasing attention span, etc.)

So if I understand the thesis, the argument is that natural human beings will be displaced from the economy, not by networked computers running code which is at best vaguely biologically inspired - something that we are already surrounded by - but rather by computers running detailed emulations of whole adult human brains - something which doesn't remotely exist. Forgive me if I think that the first category of digital intelligence is going to stay ahead in that race.

I agree harnessing is a good word for this. The reason I think it's relevant is it seems the question of whether brain emulation might happen before AI written from scratch comes down to whether it's possible to use emulation to similarly 'harness' brains.

That is, it seems inevitable that eventually we will be able to emulate brains, even if an absolutely enormous amount of detail must be understood and built into the design. But if it's possible to use the scan to capture most of the relevant design detail in the specific arrangement of a very large number of part instances, but with only a relatively small number of different part types (like with a computer program or a circuit diagram), then the 'harnessing' description can reasonably apply here too, and it then seems plausible that brain emulation could happen sooner.
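
The "many instances, few part types" idea can be made concrete with a netlist-style sketch, the same way a circuit diagram stores a huge design as wiring between a small component library. All the names below (part types, parameters) are hypothetical illustrations, not any real emulation format:

```python
# Sketch of the "few part types, many instances" idea: a scan could, in
# principle, be stored like a circuit netlist. Every name and parameter
# here is a hypothetical illustration, not a real neuroscience model.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class PartType:
    """One of a small library of understood component models."""
    name: str
    params: tuple  # e.g. time constants, thresholds

@dataclass
class Instance:
    part_type: PartType
    inputs: list = field(default_factory=list)  # indices of upstream instances

# A handful of part types...
PYRAMIDAL = PartType("pyramidal_cell", params=(20.0, -55.0))
INTERNEURON = PartType("fast_interneuron", params=(8.0, -50.0))

# ...but very many instances, whose specific wiring carries the design detail.
network = [
    Instance(PYRAMIDAL),
    Instance(INTERNEURON, inputs=[0]),
    Instance(PYRAMIDAL, inputs=[0, 1]),
]

type_count = len({inst.part_type.name for inst in network})
print(type_count, len(network))  # → 2 3: far fewer types than instances
```

On this view, the scan only has to recover the instance wiring; the part-type library is where the understood-in-advance science lives.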

"Harnessing", as Mark Changzi calls it. Humans harness machines and machines harness humans. It doesn't really bear on the issue of brain emulation vs ordinary machine intelligence, though.

Planes are not simulated birds, submarines are not simulated fish.

On the other hand, carrier pigeons predate planes by several millennia. Same story with horses vs cars, etc.

I think the brain-in-a-box-in-a-basement crowd are there to promote scary stories to stimulate fundraising. Their "crazy" views are not your real competition. Instead, your imagined world competes with regular progress in machine intelligence. Planes are not simulated birds, submarines are not simulated fish. It is wishful thinking to imagine that our brains are so special and valuable that their properties can't fairly easily be automated. Probably, intelligent machines - and not precisely simulated brains - will be doing most of the intellectual work in the future after meat brains become functionally redundant. Precisely simulated brains seem likely to arrive late to the party, when they will face tough competition and consequently have low economic significance.

This is a reasonable approach if we believe that the emergence of ems is associated with a large speedup of economic growth. I think that a huge amount of economic growth will happen before ems, and it will occur through AI and perhaps human simulations not based on uploads (i.e. a generic human reconstructed from multiple neuroscience programs), so the time frame for 10 post-em doublings could be long. How about 10 doublings or 10 years, whichever is shorter?
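
The worry about how long the doubling clock could run can be made concrete with a back-of-the-envelope calculation; the 3% growth rate below is chosen only for illustration:

```python
# Back-of-the-envelope: calendar time for n economic doublings at a fixed
# annual growth rate. The 3% figure is illustrative, not a forecast.
import math

def years_for_doublings(n_doublings, annual_growth_rate):
    """Years for the economy to double n times at a constant growth rate."""
    doubling_time = math.log(2) / math.log(1 + annual_growth_rate)
    return n_doublings * doubling_time

# At a historical-ish 3%/yr, 10 doublings take centuries...
print(round(years_for_doublings(10, 0.03)))  # → 234
```

So if growth never speeds up, a "10 doublings" settlement date could sit centuries out, which is why a "whichever is shorter" clause changes the bet substantially.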

Also, we need to specify what happens if the bet's loser is in cryonic suspension - does he have to pay on revival? What if we wake up unexpectedly poorer, and not able to come up with the bet amount + inflation adjustment?

In the post I proposed setting a deadline as so many (<10) econ doublings after the cost to rent an em is comparable to median human wages. Does that not work for you?

Why not take me at my word re the theoretical arguments I outlined?

I would love to know what happened in the last three years that took you from 5% unconditional to 80%.

I'll take something in the spirit of your bet. Let's say "The total amount of salaries earned by ems (defined as uploads of specific, individual humans who retain their memories and personalities) will always remain at less than 10% of GDP". We would have to decide on the time for settling the bet and provisions related to our cryonic status, too.

Perhaps I should have been clear that I meant lobotomy figuratively, not as in scrambling or eliminating the frontal cortex, which is surely pretty important.

Yeah, I was also interpreting it metaphorically, roughly as "cutting away many parts of an em's mind, resulting in a creature lacking many or most recognisable human behaviours & abilities".

Lots of our behavior is primarily useful for maintaining our social relationships, forming alliances and navigating the world. An em worker need not care about these and, indeed, would probably be more efficient without it.

Why wouldn't an em worker care about these? With regards to social abilities, these seem pretty essential in the workplace. Human workers today need to interact & manage relationships with colleagues, bosses, customers, partners, clients, employees, the general public, etc. Why wouldn't ems too?

With regards to modelling physical space, obviously this would be necessary for any em with a job in the physical world, but it also seems quite plausible that the mental mechanisms that support spatial-related abilities are used more broadly in solving many different problems, and so would be used even by ems that worked solely in virtual reality. For example, there are theories that we perform logical reasoning by imagining a mental model of the scenario in question. Mental modelling also seems key to envisioning cause-and-effect, communicating ideas, predicting others' behaviour, etc. Also, the fact that we still largely work in shared offices, have face-to-face meetings, etc., suggests there are benefits to such in-person interactions that non-spatial alternatives can't sufficiently match.

Moreover, you'd almost certainly pretty quickly start linking em brains together at a very fundamental level (letting neurons from one em talk to those in another) producing larger networks that are neither discrete individuals nor one person with a really big brain.

Why should we expect this? I think we know of many advantages of not linking things together when there's no need to - it means you can more easily swap parts out and replace them with alternatives without affecting other parts. Note that when we link together our existing software, we try to do so "at arm's length", through APIs allowing specific defined operations, rather than by allowing every part of each program to interact directly with every part of the other program. Doing the latter creates a tangled mess, near-impossible to understand or modify. Why would it be different when linking brains?
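
The arm's-length point can be sketched in a few lines: a module exposes one sanctioned operation, and everything else stays private. The class and method names here are hypothetical, chosen only to mirror the em-linking analogy:

```python
# Illustration of "arm's length" linking: one module reaches another only
# through a narrow, defined interface, never its internals. All names here
# are hypothetical, invented for the analogy.

class EmModule:
    def __init__(self, name):
        self.name = name
        self._internal_state = {}  # private: nothing outside should touch this

    # The API: the only sanctioned channel between modules.
    def send_signal(self, channel, value):
        self._internal_state[channel] = value
        return f"{self.name} received {value!r} on {channel}"

a = EmModule("em_a")
b = EmModule("em_b")

# Arm's length: b talks to a only via the defined operation...
print(a.send_signal("motor_intent", "grasp"))

# ...rather than reaching into a._internal_state directly, which would
# entangle the two modules and make either one hard to swap out or modify.
```

The swap-out argument falls out directly: because callers only see `send_signal`, either module can be replaced by any implementation of that operation without touching the other.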

I mean you can almost certainly eliminate many desires like love, hate, sexual urges (and the visual system, if it's not a visual task you need them to do, etc.) and simply replace them with a direct virtual dopamine reward for getting work done.

Do we not already get a real dopamine reward when we get work done? (Or when we succeed in other work-related ways, such as running a successful meeting, receiving praise from the boss, etc.)

In a complex-brains scenario, I don't think it makes sense to see mind features like love and hate as simply desires, but rather as mechanisms, which we can use in many different contexts. For example, whatever mechanism enables us to love our spouse is probably also what enables us to feel similar attachments to our firm, our country, our colleagues, etc., and means the same relationship management concepts can be used in our relations with these entities too, concepts like obligations, betrayal, etc.

Ohh yes, I wasn't suggesting it was clear which would win. Only that there is a decent argument to be made that other forms of AI will succeed first.

On a related note, two projects have been trying to simulate the brain of the roundworm C. elegans: Si Elegans (seemingly defunct) and OpenWorm (seemingly moribund).

The peculiar thing about OpenWorm is that development seems to have stalled out just as it was on the verge of a complete simulation:

https://github.com/openworm...

As far as I can tell the above issue (touch feedback) is all that stands in the way of the first complete whole brain emulation in history. It looks like anyone moderately familiar with C/C++/Python and neurobiology could simply jump in and complete the model. The final commenter on the issue tracker seems to have attempted precisely that before abandoning the effort in February after getting no useful feedback from the development team.
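
For anyone curious what "touch feedback" structurally amounts to, here is a deliberately toy closed-loop sketch: each timestep, the simulated body's state is mapped back onto sensory inputs. This is not OpenWorm's actual code or API; every function here is a made-up stand-in for the general loop the issue describes:

```python
# Toy sketch of a touch-feedback loop in any body-plus-nervous-system
# simulation. NOT OpenWorm's real code; all functions are stand-ins.

def step_nervous_system(sensory_input):
    # Stand-in for the neural model: motor command grows with touch signal.
    return 0.5 * sensory_input + 0.1

def step_body(motor_command, position):
    # Stand-in for the physics engine: motor output moves the body.
    return position + motor_command

def touch_sensor(position, wall=1.0, gain=2.0):
    # Feedback path: pressure against the wall becomes a sensory signal.
    return gain * max(0.0, position - wall)

position, touch = 0.0, 0.0
for _ in range(20):
    motor = step_nervous_system(touch)
    position = step_body(motor, position)
    touch = touch_sensor(position)  # closing this loop is the missing piece

print(position > 1.0, touch > 0.0)  # body reached the wall; sensor active
```

The neural and body models already exist in the real project; the open issue is essentially the `touch_sensor`-style wiring that closes the loop.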

Is anyone reading this able and willing to attempt to close this issue and perhaps win the distinction of being the one to finally take WBE out of the realm of speculation and science fiction?

I agree that lobotomized human minds are not likely to be that useful in most jobs that human minds do best.
