I was only suggesting cutting a feature like speech in the specific example of a case where it wouldn't be particularly needed, e.g., a case where what is needed is simply image categorization, but at a level that requires human-level judgement and inference ability (so maybe simply flagging pictures offering interesting intelligence information from spy satellites or tourists' Instagram feeds). Many other tasks would require keeping things like speech, but would likely find other aspects of our biological predispositions problematic, like our constant paranoid concern over being taken advantage of, or the subsystems that make us deal badly with status insults.

Basically, I would imagine that most jobs we would have for ems could best and most easily be done by algorithms that only capture part of the whole human package. The em version of someone on a help line would be stripped of normal concerns about status and insults, so one could swear at it all day and it wouldn't get offended (it would need to understand your emotional valence, just not have the normal human emotional response). The em version of a programmer would be stripped of sexual subsystems, as well as concerns over personal glory or success.

It doesn't really take a very large change in how something behaves for normal people to stop processing it as really human, and I would suspect these modifications would only get more extreme with time. As processing power is presumably the scarce quantity, any extraneous mental circuits one can eliminate would be a win.


Thanks for the reply (and sorry for the lateness of mine).

I agree that longer-term, human-like minds might be more closely integrated into a higher level of organisation. But why would this involve cutting features like speech and higher-level cognition? These seem pretty valuable within higher-level organisations such as firms and work groups today.

Also, do cells in multicellular organisms really have most of the features stripped from single-celled organisms? As I understand it, many of the systems that support multicellular life originally developed for different purposes in unicellular organisms. For example, the mechanisms used to differentiate cell types by activating different sets of genes are also used within single-celled organisms to alter gene expression based on conditions within the cell, e.g. to activate lactase-producing genes in the presence of lactose.
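As a loose, purely illustrative sketch of that kind of reuse (my own toy code with made-up names, not a model of real biochemistry): the same condition-triggered regulation mechanism can serve environmental response in a lone cell and cell-type differentiation in a multicellular one.

```python
# Toy illustration of mechanism reuse: one condition-driven regulator
# ("turn on the genes whose trigger signal is present") serving two
# different purposes. All names here are invented for illustration.

def express(genome: dict[str, str], signals: set[str]) -> set[str]:
    """Return the genes whose trigger signal is currently present."""
    return {gene for gene, trigger in genome.items() if trigger in signals}

# A single-celled organism uses the mechanism to react to its environment.
unicellular = {"lactase_gene": "lactose_present"}
print(express(unicellular, {"lactose_present"}))  # {'lactase_gene'}

# A multicellular organism reuses the same mechanism to differentiate
# cell types by activating different gene sets in different cells.
multicellular = {"neuron_genes": "neural_signal", "muscle_genes": "muscle_signal"}
print(express(multicellular, {"neural_signal"}))  # {'neuron_genes'}
```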

Finally, since multi-cellular life is more complex than single-celled life, not less, wouldn't your analogy suggest that we will similarly see this higher level of organisation give rise to more complex and interesting features than those of individual minds, rather than fewer?


My assumption was the classic hand tool that we call a shovel, so I'd say maybe the trowel; the others, definitely not.

I wouldn't be surprised if more advanced future artificial minds have plenty of things in common with human minds, but that's not really saying much since even modern software shares some traits with brains.


All systems are in practice "constrained" to be somewhat like their predecessors. All systems will have predecessors, so this isn't an extra cost on human-like systems, compared to others.


Basically, compare how an amoeba or other single-celled organisms behave to how the cells in a multicellular organism work.

Many of the things that the lone cell needs to do to correctly deal with its environment are no longer needed, and are even harmful, in the context of a larger organism. Rather than forming and acting on its individual judgement, in many cases we want the cell to dutifully follow the instructions from higher-level control structures. For example, a lone cell would never simply kill itself, but animal cells all have several kill switches (e.g., apoptosis) that help combat infection and cancer.

Conversely, cells in multicellular organisms develop new capabilities to better function together, e.g., think of the action potentials and neurotransmitters of neurons. I expect to see both of these if ems ever happen: a cutting of many of the brain functions used to look out for ourselves, detect danger, and position ourselves in the social hierarchy, along with the addition of capabilities that let ems engage in a kind of telepathic networking.


I'd like to be clear here: this is all conditional on us starting with ems. I personally think this is unlikely.

The reason I think this is that most features of human brains evolved to deal with concerns that won't be relevant to the job we want the em to do. We evolved to play the status game against other humans really well, and otherwise to behave in ways that let us mate and advantage our offspring. However, the competition for individual status, and the features which keep us on the lookout for our own main chance, aren't beneficial from the point of view of extracting work from us.

For instance, we seem to have lots of mechanisms and processes designed to make sure we aren't being taken advantage of by someone; we are inclined to foolish violence when angry because that makes it unattractive for others to just take our stuff; and we are constantly on the lookout for expressions of sexual interest or disinterest, and for slights against our status.


We've had machine learning in software for a half century - it is definitely one of our usual kinds.


I worry about confounding from "Software we write now" excluding machine learning (in the poll-takers' minds). I had to think twice before deciding it probably is included, and even then it influenced my choices on the polls: I was reluctant to vote for W because it still felt like I was saying traditional software would outcompete brains.


I think you missed my point, which had to do with defective reasoning. [My claim was literally true. I would seldom if ever have the occasion to use a tool other than a shovel to dig a hole. That is, I didn't say "humans" or "we" precisely out of the desire to have a clear logical example.]


Depends what you classify as a shovel. Is a trowel a shovel? Is a hydraulic excavator? Is a mole?

I would be a bit surprised to learn that no future digging tools have anything at all in common with shovels.


I don't think your conceptualisation of the question here is right. Yes, if we are comparing systems that are constrained to be humanlike against systems that can be humanlike but can be other things too, it is indeed trivially obvious that the second kind of system will win out.

But Robin's question as I interpreted it would consider an instance of the second kind of system that does in fact end up looking humanlike as an example of humanlike systems winning. So in other words: given systems unconstrained in form, how much do they actually end up resembling human minds, vs. today's handwritten software, vs. something else entirely?


But are you confident that a shovel is the most efficient possible hole-digging device? We have developed machines that can do it quite a bit faster. Maybe there will continue to be a role for human minds after superior artificial ones are developed, like there's still a role for shovels today, but it seems pretty reasonable to think that superior artificial substitutes are on the agenda.


A system which must spend extra resources meeting a constraint while also working towards a given goal will underperform relative to a system which can instead dedicate those resources to meeting the goal.
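A toy way to see this (my own sketch; the numbers and the square-root performance function are arbitrary assumptions, and the argument only needs performance to increase with resources devoted to the goal):

```python
# Toy model: both systems get the same resource budget R, but the
# constrained system must divert part of it to satisfying the constraint.
# Performance is any strictly increasing function of goal-directed
# resources; the square root below is an arbitrary concrete choice.

def performance(goal_resources: float) -> float:
    return goal_resources ** 0.5

R = 100.0               # total resources available to each system
constraint_cost = 20.0  # resources the constrained system must divert

unconstrained = performance(R)                  # 10.0
constrained = performance(R - constraint_cost)  # ~8.94

assert constrained < unconstrained  # holds whenever constraint_cost > 0
print(f"unconstrained: {unconstrained:.2f}, constrained: {constrained:.2f}")
```

Of course, the conclusion is built into the assumption that the constraint is pure overhead, which is roughly what the caveats and replies elsewhere in this thread are probing.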


"seems pretty clear" isn't much of an argument.


It seems pretty clear that systems *not* constrained to be like current humans could in the long run outcompete systems which *are* so constrained. Ditto if we replace "humans" with "software." Caveats: 1) This assumes no permanent first-mover advantage, 2) This assumes the goal doesn't inherently privilege e.g. human-ness (presumably human-like systems will always do best at being human-like:-)


I'd say it's a paradigmatically unreasonable argument in that it focuses on one intuition while completely ignoring the opposite intuition: that an actually existing tool is more often useful than one of an indefinitely large set of nonexistent alternatives. If I want to dig a hole, I will more often use a shovel than any of the indefinitely large set of objects I might use.
