Unauthorized Topics

Tyler posted:

Do I think Robin Hanson’s “Age of Em” actually will happen? A reader has been asking me this question, and my answer is…no! Don’t get me wrong, I still think it is a stimulating and wonderful book… But it is best not read as a predictive text, much as Robin might disagree with that assessment. Why not? I have three main reasons, all of which are a sort of punting, nonetheless on topics outside one’s areas of expertise deference is very often the correct response. Here goes: 1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way.

I titled my response Tyler Says Never Ems, but on Twitter he objected:

“no reason to think it will happen” is best summary of my view, not “never will happen.”
…that was one polite way of saying I do not think the scientific consensus is with you on this issue…

I responded:

How does that translate into a probability?
You have to clarify the exact claim you have in mind before we can discuss what the scientific consensus says about it.

But all he would answer is:

“Low”?

Now at GMU econ we often have academics visit for lunch who take the common academic stance of declining to state opinions they can’t back up with academic evidence. Tyler is usually impatient with that, and pushes such visitors to make best estimates. Yet here it is Tyler who shows reluctance. I hypothesize that he is following this common principle:

One does not express serious opinions on topics not yet authorized by the proper prestigious people.

Once a topic has been authorized, then unless it has a moral coloring it is usually okay to express a wide range of opinions on it; it is even often expected that clever people will take contrarian or complex positions, sometimes outside their areas of expertise. But unless the right serious people have authorized a topic, that topic remains “silly”, and can only be discussed in a silly mode.

Now sometimes a topic remains unauthorized because serious people think everything about it has a low probability. But there are many other causes for topics to be seen as silly. For example, sex was long seen as a topic serious people didn’t discuss, even though we were quite sure sex exists. And even though most everyone is pretty sure aliens must exist out there somewhere, aliens remain a relatively silly subject.

In the case of ems, I interpret Tyler above as noting that the people who seem to him the proper authorities have not yet authorized serious discussion of ems. That is what he means by pointing to experts, saying “no reason” and “scientific consensus,” and yet being unwilling to state a probability, or even clarify which claim he rejects, even though I argued a 1% chance is enough. It explains his initial emphasis on treating my book metaphorically. This is less about probabilities, and more about topic authorization.

Compare the topic of ems to the topic of super-intelligence, wherein a single hand-coded AI improves itself so fast that it can take over the world. As this topic has recently been endorsed by Elon Musk, Bill Gates, and Stephen Hawking, it is now seen more as an authorized topic. Even though, if you are inclined to be skeptical, we have far more reasons to doubt we will eventually know how to hand-code software as broadly smart as humans, or vastly better than the entire rest of the world put together at improving itself. Our reason for thinking ems eventually feasible is far more solid.

Yet I predict Tyler would more easily accept an invitation to write or speak on super-intelligence than on ems. And I conclude many readers see my book primarily as a bid to put ems on the list of serious topics, and they doubt enough proper prestigious people will endorse that bid. And yes, while I think I’d have a pretty good case if we could talk probabilities, even my list of prestigious book blurbers probably isn’t enough. Until someone of the rank of Musk, Gates, or Hawking endorses it, my topic remains silly.

  • Joseph Miller

    If Cowen is looking for a “scientific consensus” for a multi-disciplinary series of claims about some point in the future, I think he’ll have to wait until that future unfolds. Possibly longer.

    Maybe I’m out of my league, but so far into the book, Age of Em seems very plausible. I’d also like to know what in particular Cowen doubts.

  • Elliot Olds

    Good post. One tangential comment:

    “we have far more reasons to doubt we will eventually know how to hand-code software as broadly smart as humans”

    I see you refer often to creating general AI without ems as “hand-coding” intelligence. I’ve commented before about how your writing on AI doesn’t distinguish between machine learning and ‘good old-fashioned AI’, which focused on explicitly coding rules for intelligence. It used to be that machine learning systems required humans to hand-craft “features” from which the computer would learn. The most modern machine learning methods no longer require humans to create features.

    So it’s misleading to describe the kind of machine learned systems that might lead to general AI as needing to have their intelligence “hand coded.” The thing that is coded will be the learning algorithm, but all the actual intelligence will arise from data/experience.

    • http://overcomingbias.com RobinHanson

      Machine learning systems are also hand-coded, at least for now. In most any AI system the system itself is capable of creating things that a human could also have added directly. But there is always a part of the system that it couldn’t have added to itself.

      • Elliot Olds

        Parts of current machine learning systems are still coded by humans, but my point is that it’s no longer the “content” of intelligence that is coded, but just a very general learning framework.

        For instance, consider the DeepMind system that can play ~50 Atari games. In traditional machine learning, humans would have to define a bunch of features, then the learning algorithm would take those feature values as input. Figuring out the best features was difficult work that involved quite a lot of human labor and insight. For Atari, an example feature might be “is any moving object on a course that will collide with my character in the next 2 seconds?” You could train an Atari-playing system by defining and manually coding up hundreds or thousands of such features, hoping that the combination is enough for your model to learn how to play the game well.

        How DeepMind’s Atari system actually works is that the only input to its learning algorithm is the pixel values on the screen. It is trivial to write the code to give the learning system the pixel values. The input “features” are identical for every Atari game. The amount of work saved by not having to manually define features is huge. So none of the intelligence about how to play the game is hand-coded. (I think the only other hand-coded part is some function that extracts the score from the screen.)

        This is a huge shift in how AI systems are built, and this is the distinction I see you not acknowledging when you talk about non-em AI involving “hand-coding” intelligence. (A minimal sketch of the contrast follows below.)
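
        To make the contrast concrete, here is a minimal, hypothetical Python sketch; the feature function, the names, and the frame shape are invented for illustration and are not DeepMind’s actual code:

            import numpy as np

            screen = np.random.randint(0, 256, size=(210, 160, 3))  # stand-in Atari frame

            # Old style: a human hand-codes each feature the learner is allowed to see.
            def collision_within_2s(s):
                # Hypothetical placeholder; real features took real labor and insight.
                return float(s[:50].mean() > 128)

            def handcrafted_features(s):
                # The learner only ever sees these few human-chosen numbers.
                return np.array([collision_within_2s(s)], dtype=np.float32)

            # New style: the learner sees raw pixels, identical for every game;
            # a deep network then discovers its own internal features from reward alone.
            def raw_pixels(s):
                return s.astype(np.float32).ravel() / 255.0

            print(handcrafted_features(screen).shape)  # (1,)      tiny, hand-designed
            print(raw_pixels(screen).shape)            # (100800,) raw, game-agnostic

        The point is just where the per-game human effort goes: in the old style it goes into the feature functions, one game at a time; in the new style only the generic learning code remains hand-written.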

  • Joe

    On the old AI foom question… I’m not sure I understand exactly what it is you have long been referring to as ‘content’. Is the (vast quantity of) training data fed into machine learning algorithms the content? Or is content more about which implementation of which algorithm to use for each problem, what assumptions are made, what simplifications, etc.? Or, third possibility, is content something more like the tools and techniques and so on that we, or an AI system, use to interact with the world?

    On a related note, I was interested to see this recent talk by Peter Norvig, in which he argues for the need to develop techniques for modifying machine learning software in a precise, targeted, understandable way, in the way that traditional software can be modified. He specifically mentions machine learning’s non-modularity as a source of difficulty in this area. I’m not sure whether increasing modularity in ML will be possible or feasible, but perhaps if it is then this will lead to more of the code reuse, standards-following, interoperability, and so on, that you have argued you expect to see in a world where AI is as widely used as traditional software is today.

  • Robert Koslover

    Now this is some serious wisdom (cynical, yes, but it rings true):
    “One does not express serious opinions on topics not yet authorized by the proper prestigious people.”
    You know, I’ve experienced this. But worse, I suspect that I have also contributed to enforcing it, on at least some occasions.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Except it doesn’t seem to me a matter of topic. It isn’t unsafe to express an opinion on ems. It’s not the topic but the opinion itself that is taboo.

  • Matthew Light

    Maybe the DAO hacker is really the “AI Foom” making its first actions in the physical world by funding itself through ETH puts?

    🙂

    Yes, I see neither “AI Foom” nor Ems as being remotely likely. I am, however, quite partial to a variation of the simulation argument.

  • static

    I think one challenge is that there are so many assumptions that it’s hard to know which will be the hardest. To me, ems will require several advances I can’t yet imagine. I believe the most challenging is the non-destructive brain inspection required to create an em. So the assumption on p. 47, that it will be possible to scan a brain at the proper level of resolution, leaves me lost as to the overall believability of the rest of the book, much like my opposition to the “transporter” ruins Star Trek for me.

    It will require several major advances in physics, with uncertainty-principle-like paradoxes to overcome, in observing the central nervous system without changing its state. If the relative and continuous levels of electro-chemical activity are important, which they look to be, the entire observation may have to be made at one moment.

    To me, a more interesting route is the concept of cognitive enhancement, where our computational capacity is expanded via additions to our brain, wired to the nervous system. As these become more sophisticated, what it means to be us would be encompassed more in the peripherals than in our meat brain, even the motivation for our behavior (simplistically, things like the release of dopamine). These peripherals could even have a shared component, dissolving the concept of individual identity. The meat brain could then die off without losing all of what it is to be us. In this case, the things that would motivate the action of those brains would likely be wholly different from our current motivations, and thus gets me to some of the scenarios you consider.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      I don’t think non-destructiveness is required. There are enough people who wouldn’t conceive of their biological destruction as death.

  • Evan Gaensbauer

    Has Tyler Cowen actually commented much on the topic of superintelligence? I didn’t find anything clear on the first page of Google results. In any case, Elon Musk, Stephen Hawking, and Bill Gates, as scientists or entrepreneurs, each have reputations for being geniuses, but that’s because of the status they’ve earned, not because of expertise in artificial intelligence. Either way, have you asked Dr. Cowen what he would consider an authority on ems?

    He said the neuroscientists he talks to never mention it, but it’s not as if neuroscientists would ever go out of their way to talk about ems. Brain-scan computer uploads happening in one hundred years aren’t relevant to their research field, or on their horizon as career scientists. Why would they bring it up for no reason? Also, IIRC, Daniel Levitin is a neuroscientist Dr. Cowen cited in his post, but Dr. Levitin already said ems could or would happen, just later than you predicted. What other experts might Dr. Cowen consider authorities on ems?

    If there isn’t special reason to think they’d be experts, i.e., they’re not computer scientists or neuroscientists, is he just waiting for a sufficiently high-status “smart-person/genius” to come forward praising your book?

    • http://overcomingbias.com RobinHanson

      No, Tyler hasn’t commented, AFAIK, which is why it made sense for me to make a prediction about what he would say.

  • Lord

    The future can be a lot different than we can know. For example, our quasi-intelligent agents will probably remove any need for ems, though I think the challenge of immortality would still provide a great incentive, one that would outweigh its being uneconomic. And knowing enough to do it may still leave a biological solution preferred.