No Third AI Way

A few days ago in the Post:

Bryan Johnson .. wants to .. find a way to supercharge the human brain so that we can keep up with the machines. .. His science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain. .. Top neuroscientists who are building the chip .. hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks. .. In an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern.

In a video discussion between James Hughes and me, just posted today, Hughes said:

One of the reasons why I’m skeptical about the [em] scenario that you’ve outlined, is that I see a scenario where brains extending themselves through AI and computing tools basically slaved to the core personal identity of meat brains is a more likely scenario than one where we happily acknowledge the rights and autonomy of virtual persons. .. We need to have the kind of AI in our brain which is not just humans 1.0 that get shuffled off to the farm while the actual virtual workers do all the work, as you have imagined.

Many hope for a “third way” alternative to both ems and more standard AI software taking all the jobs. They hope that instead “we” can keep our jobs via new chips “in” or closely integrated with our brain. This seems to me mostly a false hope.

Yes of course if we have strong enough global political coordination we could stake out a set of officially human jobs and forbid machines from doing them, no matter how much better machines might be at them. But if we don’t have such strong coordination, then the key question is whether there is an important set of jobs or tasks where ordinary human brains are more productive than artificial hardware. Having that hardware be located in server racks in distant data centers, versus in chips implanted in human brains, seems mostly irrelevant to this.

If artificial hardware can be similarly effective at such tasks, then it can have enormous economic advantages relative to human brains. Even today, the quantity of artificial hardware can be increased very rapidly in factories. And eventually, artificial hardware can be run at much faster speeds, while using much less energy. Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive. It is very hard to see humans outcompeting artificial hardware at such tasks unless the artificial hardware is just very bad at such tasks. That is in fact the case today, but it would not at all be the case with ems, nor with other AI with similar general mental abilities.

  • J

    Technological unemployment is a 200-year-old meme that happens to be really handy right now for governments that want more control over the internet and don’t want to be blamed for unemployment. Baxter the robot is not taking your job.

    When AI or ems come along and disrupt things enough to put us out of work, they’ll also be destroying what we currently recognize as government, the global military balance of power, the economy, what it means to be human, and a bunch of other things. Unemployment will be low on the list of things to worry about.

    • J

      (cont.) The post is otherwise perfectly fine and interesting. It’s just jarring to me when I see people talking about a vastly different world (even if it’s fast approaching) in terms of just one minor aspect of it.

      And it’s fine to look at one aspect at a time, but jobs isn’t really the central point here; it’s that humans aren’t going to compete with ems or AIs in general. They won’t just be out-working us. They’ll be out-strategizing us and out-consuming us and out-politicizing us and a bunch of other things.

      • I presume you will grant that my book does present a future scenario where a great many important things change, not just unemployment for humans.

      • J

        Certainly. My crusade is limited to “technological unemployment” as a sloppy idea tied to too many wildly different concepts.

      • Joe

        Relatedly, I feel a little uncomfortable with how commonly those arguing for the upcoming existence of technological unemployment advocate not just wealth redistribution, but a basic income scheme specifically. It seems to me that whether it will be necessary to redistribute wealth to large numbers of people who cannot get work is an issue on a separate level to how exactly that redistribution ought to be done, if it is done.

        Basic income as a redistribution scheme has some interesting arguments in favor of it, but it just seems implausible to me that all the futurist folk advocating it would have first realized that redistribution would be necessary, and then separately weighed up the options and decided basic income is the best scheme. I can’t help but think the fact that it just sounds futuristic and Star Trek-y has something to do with the focus on this particular approach to redistribution. Basic income just sounds like something the future will have, like flying cars.

      • mlhoheisel

        It’s not especially futuristic, as it has roots going back a couple of centuries. It’s one of the items on the list of Enlightenment ideals that hasn’t been achieved yet. Enlightenment intellectuals in 1790 certainly understood that slavery wasn’t acceptable in a just liberal democratic society, even if it wasn’t clear how it would be eliminated.

        The ongoing project of progressively rolling out the enlightenment includes recognizing some sort of Georgist ideal that everyone ought to have a share in common resources that in political economy are called “land”. IP, EM spectrum, etc are “land”.

        This isn’t a matter of redistribution as much as correcting the glaring maldistribution or injustice of allowing private interests to completely control assets and income streams that rightfully belong to everyone.

  • marshall bolton

    “Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive.” This is of course a feature not a bug. Economic analysis makes it a bug. I feel foolish when I say “Sorry” to Siri, when it has bungled a request or command. In my endeavor to remain human I no longer use Siri.

  • Is what a commenter recently called “technical AI” a third way?

  • Greg Perkins

    In the limit of time that’s correct, but I don’t think that people are most concerned with t->inf, they’re concerned with the prospects for their own lives.

    One heuristic for minimizing value drift is maintaining continuity, and that’s almost certainly what people are trying to suggest with the third way. I expect that while we’re still trying to get our hardware and software built to sustain ems, we’ll be growing ourselves into beings that can more comfortably make the leap.

    If we put an agency-rich substrate near to human brains (below the latency horizon of sensing and thought) and let the brains do what they do best, at least some parts of the human beings will grow into the new substrate. At some point, unless there’s really an unsurpassable serial biological bottleneck, we’ll end up as ems anyway, as you suggest.

    But we should be aware of the possibility that the process is not inherently lossless (and the expected probability of lossage depends on your priors and life experience). This scenario helps to frame the question: what might we care deeply about that could be forgotten and left behind even in such a gradual migration?

    • But why can’t chips in far away server racks be still below the latency of sensing and thought for assisting with most practical purposes?

      • Definitely!

        To what extent do we expect there to be high bandwidth in and out of existing brains during the transition? If we expect slice/scan uploading, then there’s not really much time to check whether we’ve left anything behind.

        If we find a more gradual transition, it could be experienced as a series of increased capabilities with the available opportunity to evaluate retrospective regret.

        The added expense is figuring out how to get much higher two-way bandwidth through the skull in ways that can interact with distributed or silicon-based services. But since it means we get some kind of capability increase faster (even if it’s not yet a full em), it seems likely that we’ll have some measure of progress here before the full phase change occurs.

    • marshall bolton

      “The expected probability of lossage depends on your priors and life experience.” Yes! This suggests a little reverse engineering, such that we ask what are the priors and life experiences of those who support ems (and such like)? And I am left thinking that it is the “Less Wrong” contingent. Mostly young rationalists. And they (based on my priors and life experience) don’t know nothing.

  • pgbh

    Up until now, your “third way” is pretty much the only way machines have contributed to economic growth. So it seems premature to predict that it won’t play a role in the future as well.

  • Riothamus

    Since all three processes are active areas of R&D, why isn’t the discussion about how they will interact, rather than this ‘there can only be one’ argument?

    I now think of this as the Highlander Fallacy.
