Can Human-Like Software Win?

Many, perhaps most, think it obvious that computer-like systems will eventually be more productive than human-like systems in almost all jobs. So they focus on how humans might maintain control, even after this transition. But this eventuality is less obvious than it seems, depending on what exactly one means by “human-like” or “computer-like” systems. Let me explain.

Today the software that sits in human brains is stuck in human brain hardware, while the other kinds of software that we write (or train) sit in the artificial hardware that we make. And this artificial hardware has been improving rapidly, far more rapidly than has human brain hardware. Partly as a result of this, systems of artificial software and hardware have been improving rapidly compared to human brain systems.

But eventually we will find a way to transfer the software from human brains into artificial hardware. Ems are one way to do this, as a relatively direct port. But other transfer mechanisms may be developed.

Once human brain software is in the same sort of artificial computing hardware as all the other software, then the relative productivity of different software categories comes down to a question of quality: which categories of software tend to be more productive on which tasks?

Of course there will be many different variations available within each category, to match to different problems. And the overall productivity of each category will depend both on previous efforts to develop and improve software in that category, and also on previous investments in other systems to match and complement that software. For example, familiar artificial software will gain because we have spent longer working to match it to familiar artificial hardware, while human software will gain from being well matched to complex existing social systems, such as language, firms, law, and government.
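As a toy illustration of how these two effects might combine, here is a minimal sketch in Python; the functional form, weights, and numbers are all hypothetical, chosen only to make the trade-off concrete:

```python
# Toy model: a software category's effective productivity combines intrinsic
# quality with accumulated investment in the category itself and in
# complementary systems (hardware, language, firms, law). All hypothetical.

def productivity(quality, category_investment, complement_investment):
    # Diminishing returns on both kinds of accumulated investment.
    return (quality
            * (1 + category_investment) ** 0.5
            * (1 + complement_investment) ** 0.5)

categories = {
    # name: (quality, investment in the category, investment in complements)
    "familiar artificial": (1.0, 9.0, 4.0),   # long matched to artificial hardware
    "human-like (em)":     (1.0, 1.0, 16.0),  # long matched to social systems
    "brand-new category":  (1.2, 0.0, 0.0),   # perhaps better, but no legacy investment
}

for name, args in categories.items():
    print(f"{name:20s} -> productivity {productivity(*args):.2f}")
```

On any model of this shape, a brand-new category starts behind on both investment terms, a point that comes up again below.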

People give many arguments for why they expect human-like software to mostly lose this future competition, even when it has access to the same hardware. For example, they say that other software could lack human biases and also scale better, have more reliable memory, communicate better over wider scopes, be easier to understand, have easier meta-control and self-modification, and be based more directly on formal abstract theories of learning, decision, computation, and organization.

Now consider two informal polls I recently gave my Twitter followers.

Surprisingly, at least to me, the main reason that people expect human-like software to lose is that they mostly expect whole new categories of software to appear, categories quite different from both the software in the human brain and also all the many kinds of software with which we are now familiar. If it comes down to a contest between human-like and familiar software categories, only a quarter of them expect human-like to lose big.

The reason I find this surprising is that all of the reasons that I’ve seen given for why human-like software could be at a disadvantage seem to apply just as well to familiar categories of software. In addition, a new category must start with the disadvantages of having less previous investment in that category and in matching other systems to it. That is, none of these are reasons to expect imagined new categories of software to beat familiar artificial software, and yet people offer them as reasons to think whole new much more powerful categories will appear and win.

I conclude that people don’t mostly use specific reasons to conclude that human-like software will lose, once it can be moved to artificial hardware. Instead they just have a general belief that the space of possible software is huge and contains many new categories to discover. This just seems to be the generic belief that competition and innovation will eventually produce a lot of change. It’s not that human-like software has any overall competitive disadvantage compared to concrete known competitors; it is at least as likely to have winning descendants as any such competitors. It’s just that our descendants are likely to change a lot as they evolve over time. Which seems to me a very different story than the humans-are-sure-to-lose story we usually hear.

  • http://quitelikelyblog.wordpress.com/ Quite Likely

    Seems like what you’re describing is that people think “there are an infinity of different possible mind designs, so what are the odds that the human mind, which wasn’t even designed to operate on computer hardware, will be the most efficient, or even one of the most efficient, competitors?” Which seems pretty reasonable to me.

    • lump1

      I was thinking that in the future, the software that runs most efficiently will be whatever runs native on the hardware, not some sort of abstraction or emulation over it. I know that human programming has been moving away from low-level languages and assembler, but that’s because we’re living in a time of great computing power surplus, when our computing needs are more than satisfied by the existing hardware, and most CPUs are idle most of the time. That’s changing with cloud stuff, coin mining and centralization, and I predict this continues. Computing is now >10% of global energy use, and as this figure approaches 100%, squeezing out even a tiny efficiency improvement from software will be worth millions. That argues for native code and against emulations.
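      A minimal sketch of the overhead at issue here, in Python: the same computation run directly, and run through a toy per-operation dispatch layer that merely stands in for emulation (the dispatch layer and timings are illustrative, not a real emulation benchmark):

      ```python
      # Toy comparison: a native loop vs. the same loop paying a per-operation
      # dispatch cost, a stand-in for emulation overhead. Illustrative only.
      import time

      N = 1_000_000

      def native_sum(n):
          total = 0
          for i in range(n):
              total += i
          return total

      def emulated_sum(n):
          # Each step is dispatched through a table, as an emulator would do.
          ops = {"add": lambda acc, x: acc + x}
          total = 0
          for i in range(n):
              total = ops["add"](total, i)
          return total

      for fn in (native_sum, emulated_sum):
          start = time.perf_counter()
          fn(N)
          print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
      ```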

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      I’d say it’s a paradigmatically unreasonable argument in that it focuses on one intuition while completely ignoring the opposite intuition: that an actually existing tool is more often useful than one of an indefinitely large set of nonexistent alternatives. If I want to dig a hole, I will more often use a shovel than any of the indefinitely large set of objects I might use.

      • http://quitelikelyblog.wordpress.com/ Quite Likely

        But are you confident that a shovel is the most efficient possible hole digging device? We have developed machines that can do it quite a bit faster. Maybe there will continue to be a role for human minds after superior artificial ones are developed like there’s still a role for shovels today, but it seems pretty reasonable to think that superior artificial substitutes are on the agenda.

      • Joe

        Depends what you classify as a shovel. Is a trowel a shovel? Is a hydraulic excavator? Is a mole?

        I would be a bit surprised to learn that no future digging tools have anything at all in common with shovels.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        I think you missed my point, which had to do with defective reasoning. [My claim was literally true. I would seldom if ever have the occasion to use a tool other than a shovel to dig a hole. That is, I didn’t say “humans” or “we” precisely out of the desire to have a clear logical example.]

      • http://quitelikelyblog.wordpress.com/ Quite Likely

        My assumption was the classic hand tool that we call a shovel, so I’d say a trowel maybe, the others definitely not.

        I wouldn’t be surprised if more advanced future artificial minds have plenty of things in common with human minds, but that’s not really saying much since even modern software shares some traits with brains.

  • Robert Koslover

    Personally, I expect/predict extensive cyborg technology (to include not just enhanced mechanical & sensory tools, but also computing & memory) to become widespread before the em technology revolution that you foresee. If I’m right, then the line between what is a computer and what is a human may be thoroughly blurred before anyone actually transfers/uploads a genuine human mind onto an entirely artificial hardware platform.

  • http://invariant.org/ Peter Gerdes

    Where does someone like me sit, who thinks that if we start with ems we will quickly learn to massively cut pieces from them, add knobs/incentives/tools, and otherwise quickly move away from software that would seem human to us, even though it still operates using structures borrowed from human brains?

    Does “human-like” refer to being like humans in external behavior (we would interact with it and feel comfortable assigning emotions as we do to humans, and predict its behaviors using psychology), or to how the software is structured and processes data?

    So is software that recognizes human faces and assigns them emotional states by copying several levels of the human visual cortex, but lacks any ability to speak or engage in most higher-level human cognitive functions, human-like?

    • Joe

      I’m curious: do you expect this because you think those mind features you expect to be cut from ems aren’t relevant at all for an AI, or because there are simply better ways to reorganise them? If the former, do you think they are adaptive in animal minds? How about in human minds?

      • Ronfar

        There are a lot of “maintain the body” systems that don’t apply to an AI; there’s no need to eat plants or animals for food, avoid predators, avoid extremes of heat and cold, breathe air, etc. In other words, most things a rat brain does…

      • http://invariant.org/ Peter Gerdes

        I’d like to be clear here this is all conditional on us starting with ems. I personally think this is unlikely.

        The reason I think this is that most features of human brains evolved to deal with concerns that won’t be relevant to the job we want the em to do. We evolved to play the status game against other humans really well, and otherwise to behave in ways that let us mate and advantage our offspring. However, competition for individual status, and the features which keep us on the lookout for our own main chance, aren’t beneficial from the point of view of extracting work from us.

        For instance, we seem to have lots of mechanisms and processes designed to make sure we aren’t being taken advantage of by someone, we are inclined to foolish violence when angry because that renders it unattractive to try and just take our stuff, and we are constantly on the lookout for expressions of sexual interest or disinterest and slights against our status.

      • http://invariant.org/ Peter Gerdes

        Basically, compare how an amoeba or other single-celled organisms behave to how the cells in a multicellular organism work.

        Many of the things that the lone cell needs to do to correctly deal with its environment are no longer needed, and are even harmful, in the context of a larger organism. Rather than forming and acting on its individual judgement, in many cases we want the cell to dutifully follow the instructions from higher-level control structures. For example, a lone cell would never simply kill itself, but animal cells all have several kill switches that help combat infection and cancer.

        Conversely, cells in multicellular organisms develop new capabilities to better function together, e.g., think of the action potentials and neurotransmitters of neurons. I expect to see both of these if ems ever happen: a cutting of many of the brain functions used to look out for ourselves, detect danger, and position ourselves in the social hierarchy, with the addition of capabilities that let ems engage in a kind of telepathic networking.

      • Joe

        Thanks for the reply (and sorry for the lateness of mine).

        I agree that longer-term, human-like minds might be more closely integrated into a higher level of organisation. But why would this involve cutting features like speech and higher-level cognition? These seem pretty valuable within higher-level organisations such as firms and work groups today.

        Also, do cells in multicellular organisms really have most of the features stripped from single-celled organisms? As I understand it, many of the systems that support multicellular life originally developed for different purposes in unicellular organisms. For example, the mechanisms used to differentiate cell types by activating different sets of genes are also used within single-celled organisms to alter gene expression based on conditions within the cell, e.g. to activate lactase-producing genes in the presence of lactose.

        Finally, since multi-cellular life is more complex than single-celled life, not less, wouldn’t your analogy suggest that we will similarly see this higher level of organisation give rise to more complex and interesting features than those of individual minds, rather than fewer?

      • http://invariant.org/ Peter Gerdes

        I was only suggesting cutting a feature like speech in the specific case where it wouldn’t be particularly needed, e.g., a case where what is needed is simply image categorization, but at a level that requires human-level judgement and inference ability (so maybe simply flagging pictures offering interesting intelligence information from spy satellites or tourists’ Instagram feeds). Many other tasks would require keeping things like speech, but would likely find other aspects of our biological predispositions problematic, like our constant paranoid concern over being taken advantage of, or the subsystems that make us deal badly with status insults.

        Basically, I would imagine that most jobs we would have for ems could best and most easily be done by algorithms that only capture part of the whole human package. The em version of someone on a help line would be stripped of normal concerns about status and insults, so one could swear at it all day and it wouldn’t get offended (it would need to understand your emotional valence, just not have the normal human emotional response). The em version of a programmer would be stripped of sexual subsystems as well as concerns over personal glory or success.

        It doesn’t really take a very large change in how something behaves for normal people to stop processing it as really human, and I would suspect these modifications would only get more extreme with time. As processing power is presumably the scarce quantity, any extraneous mental circuits one can eliminate would be a win.

  • Joe

    “Instead they just have a general belief that the space of possible software is huge and contains many new categories to discover.”

    Yes, this seems to be it. Though actually, I predict that many of those who expect future AI to look totally unlike any software we’ve seen before, would probably qualify this by saying that the basis of this future AI does probably exist as a very small fraction of human brain content. (And that our minds having this feature, even in a very limited form, is what allowed us to explode to take over the planet as we have.) So future AI will look starkly different from brains mainly due to a very different emphasis of features; they’ll greatly expand and enhance a few parts of our minds while dropping almost everything else.

    (To anyone who did or would vote for Robin’s ‘future AI will look nothing like any software we know’ option, does my description above sound much like your basis for holding this view?)

    By contrast, if you think brain capability mainly comes from the aggregate of very many small features rather than a small number of crucial elements, you’d find it much more plausible for future AIs to look somewhat brain-like, due to brain elements all being relevant to intelligent mind design rather than almost all irrelevant.

  • Harland

    The human brain does not have software. There are no programs, nor programmers. The concept is a misnomer.

  • liron00

    Imagine asking in 1960: “Will the best Go-playing algorithms be like human software or the computer software of today?”

    Or asking in 1900 if flying machines will be more like birds or blimps.

    The answer is that machine software is embodying more and more of the fundamental principles that make various brain systems effective.

  • arch1

    It seems pretty clear that systems *not* constrained to be like current humans could in the long run outcompete systems which *are* so constrained. Ditto if we replace “humans” with “software.” Caveats: 1) This assumes no permanent first-mover advantage, 2) This assumes the goal doesn’t inherently privilege e.g. human-ness (presumably human-like systems will always do best at being human-like:-)

    • http://overcomingbias.com RobinHanson

      “seems pretty clear” isn’t much of an argument.

      • arch1

        A system which must spend extra resources meeting a constraint while also working towards a given goal will underperform relative to a system which can instead dedicate those resources to meeting the goal.
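        As a minimal sketch of this argument in Python (the budget, constraint cost, and performance function are all hypothetical):

        ```python
        # Toy version: with a fixed resource budget, a system that must divert
        # resources to a constraint has less left for the goal. Numbers hypothetical.

        def goal_performance(resources):
            # Any increasing function of goal-directed resources will do.
            return resources ** 0.5

        BUDGET = 100.0
        CONSTRAINT_COST = 20.0  # resources the constrained system must divert

        print(f"unconstrained: {goal_performance(BUDGET):.2f}")                    # 10.00
        print(f"constrained:   {goal_performance(BUDGET - CONSTRAINT_COST):.2f}")  # 8.94
        ```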

      • http://overcomingbias.com RobinHanson

        All systems are in practice “constrained” to be somewhat like their predecessors. All systems will have predecessors, so this isn’t an extra cost on human-like systems, compared to others.

    • Joe

      I don’t think your conceptualisation of the question here is right. Yes, if we are comparing systems that are constrained to be humanlike against systems that can be humanlike but can be other things too, it is indeed trivially obvious that the second kind of system will win out.

      But Robin’s question as I interpreted it would consider an instance of the second kind of system that does in fact end up looking humanlike as an example of humanlike systems winning. So in other words: given systems unconstrained in form, how much do they actually end up resembling human minds, vs. today’s handwritten software, vs. something else entirely?

  • Silent Cal

    I worry about confounding from ‘Software we write now’ excluding machine learning (in the poll-takers’ minds). I had to think twice before deciding it probably is included, and even then it influenced my choices on the polls – I was reluctant to vote for W because it still felt like I was saying traditional software would outcompete brains.

    • http://overcomingbias.com RobinHanson

      We’ve had machine learning in software for a half century – it is definitely one of our usual kinds.

  • Marius Catalin

    In my opinion we’ve already started to write software that is very similar, if not identical, to the software in our brains, and it’s better than the traditional artificial kind too. It suffers from the same limitations as well.