Tyler Says Never Ems

There are smart intellectuals out there who think economics is all hogwash, and who resent economists continuing on while their concerns have not been adequately addressed. Similarly, people in philosophy of religion and philosophy of mind resent cosmologists and brain scientists continuing on as if one could just model cosmology without a god, or reduce the mind to physical interactions of brain cells. But in my mind such debates have become so stuck that there is little point in waiting until they are resolved; some of us should just get on with assuming particular positions, especially positions that seem so very reasonable, even obvious, and seeing where they lead.

Similarly, I have heard people debate the feasibility of ems for many decades, and those debates have also become stuck, making little progress. Instead of getting mired in that debate, I thought it better to explore the consequences of what seems to me the very reasonable position that ems will eventually be possible. Alas, that mud pit has strong suction. For example, Tyler Cowen:

Do I think Robin Hanson’s “Age of Em” actually will happen? … my answer is…no! .. Don’t get me wrong, I still think it is a stimulating and wonderful book.  And if you don’t believe me, here is The Wall Street Journal:

Mr. Hanson’s book is comprehensive and not put-downable.

But it is best not read as a predictive text, much as Robin might disagree with that assessment.  Why not?  I have three main reasons, all of which are a sort of punting, nonetheless on topics outside one’s areas of expertise deference is very often the correct response.  Here goes:

1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way (brain scans uploaded into computers to create actual beings and furthermore as the dominant form of civilization).  Maybe they’re just holding back, but I don’t think so.  The neuroscience profession as a whole seems to be unconvinced and for the most part not even pondering this scenario. ..

3. Robin seems to think the age of Em could come about reasonably soon. …  Yet I don’t see any sign of such a radical transformation in market prices. .. There are for instance a variety of 100-year bonds, but Em scenarios do not seem to be a factor in their pricing.

But the author of that Wall Street Journal review, Daniel J. Levitin, is a neuroscientist! You’d think that if his colleagues thought the very idea of ems iffy, he might have mentioned caveats in his review. But no, he worries only about timing:

The only weak point I find in the argument is that it seems to me that if we were as close to emulating human brains as we would need to be for Mr. Hanson’s predictions to come true, you’d think that by now we’d already have emulated ant brains, or Venus fly traps or even tree bark.

Because readers kept asking, in the book I give a concrete estimate of “within roughly a century or so.” But the book really doesn’t depend much on that estimate. What it mainly depends on is ems initiating the next huge disruption on the scale of the farming or industrial revolutions. Also, if the future is important enough to have a hundred books exploring scenarios, it can be worth having books on scenarios with only a 1% chance of happening, and taking those books seriously as real possibilities.

Tyler has spent too much time around media pundits if he thinks he should be hearing a buzz about anything big that might happen in the next few centuries! Should he have expected to hear about cell phones in 1960, or smart phones in 1980, from a typical phone expert then, even without asking directly about such things? Both of these were reasonably foreseen many decades in advance, yet several decades before they took off you’d have found it hard to see signs of them in casual conversations with phone experts, or in phone firm stock prices. (Betting markets directly on these topics would have seen them. Alas, we still don’t have such things.)

I’m happy to accept neuroscientist expertise, but mainly on how hard it is to scan brain cells and model them on computers. This isn’t going to come up in casual conversation, but if asked, neuroscientists will pretty much all agree that it should eventually be possible to create computer models of brain cells that capture their key signal processing behavior, i.e., the part that matters for signals received by the rest of the body. They will say it is a matter of when, not if. (Remember, we’ve already done this for the key signal processing behaviors of eyes and ears.)
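To make “capture their key signal processing behavior” concrete, here is a minimal sketch of one of the simplest cell models used in computational work, a leaky integrate-and-fire neuron. All parameter values are illustrative placeholders, not claims about real neurons:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: membrane voltage decays toward
# a resting level, integrates input current, and emits a spike (then
# resets) when it crosses a threshold. Parameters are illustrative only.
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, resistance=1e7):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_threshold:
            spike_times.append(step * dt)  # record spike time in seconds
            v = v_reset                    # reset after firing
    return spike_times

# Example: 200 ms of constant 2 nA input produces a regular spike train.
spikes = simulate_lif(np.full(2000, 2e-9))
print(f"{len(spikes)} spikes in 0.2 s of simulated time")
```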

Many neuroscientists won’t be familiar with computer modeling of brain cell activity, so they won’t have much of an idea of how much computing power is needed. But for those familiar with computer modeling, the key questions are: once we understand brain cells well, what are plausible ranges for 1) the number of bits required to store the current state of each inactive brain cell, and 2) the number of computer processing steps (or gate operations) per second needed to mimic an active cell’s signal processing?
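As an illustration of how answers to those two questions combine, here is a back-of-envelope sketch. The neuron count is a standard rough estimate; the bits-per-cell and operations-per-cell figures are placeholder assumptions chosen only to show the arithmetic:

```python
# Translate the two key per-cell numbers into brain-wide totals.
# The neuron count is a standard rough estimate; the other two
# constants are placeholder assumptions for illustration only.
NEURONS = 8.6e10            # rough human brain neuron count
BITS_PER_CELL = 1e4         # assumed state per inactive cell
OPS_PER_CELL_PER_SEC = 1e4  # assumed gate ops to mimic one active cell

storage_bits = NEURONS * BITS_PER_CELL
compute_ops_per_sec = NEURONS * OPS_PER_CELL_PER_SEC

print(f"storage: {storage_bits / 8e12:.0f} TB")        # bits -> terabytes
print(f"compute: {compute_ops_per_sec:.1e} ops/sec")
```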

Once you have those numbers, you’ll need to talk to people familiar with computing cost projections to translate these computing requirements into dates when they can be met cheaply. And then you’d need to talk to economists (like me) to understand how that might influence the economy. You shouldn’t remotely expect typical neuroscientists to have good estimates there. And finally, you’ll have to talk to people who think about other potential big future disruptions to see how plausible it is that ems will be the first big upcoming disruption on the scale of the farming or industrial revolutions.
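A minimal sketch of that first translation step, assuming a simple exponential fall in computing prices; the budget, current price-performance, and doubling time below are placeholder assumptions, not forecasts:

```python
import math

# Given a compute requirement and an assumed exponential fall in the
# price of computing, estimate when the requirement becomes affordable.
def year_affordable(required_ops_per_sec, budget_dollars,
                    ops_per_sec_per_dollar_now, doubling_years,
                    start_year=2016):
    affordable_now = budget_dollars * ops_per_sec_per_dollar_now
    if affordable_now >= required_ops_per_sec:
        return start_year
    doublings = math.log2(required_ops_per_sec / affordable_now)
    return start_year + doublings * doubling_years

# Example: 8.6e14 ops/sec (from the sketch above), a $1,000 budget,
# 1e9 ops/sec per dollar today, price-performance doubling every 2 years.
year = year_affordable(8.6e14, 1e3, 1e9, 2.0)
print(f"affordable around {year:.0f}")
```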

  • Dave Lindbergh

    Experts in any field seem to be very unwilling to speculate, or even endorse speculation, about long-term developments in their field.

    I’m not sure why, but here are some ideas:

    * They have a lot to lose as experts if the speculations turn out wrong, and by their nature speculations are…speculative.

    * They are very focused on immediate problems and progress. This is what they’re paid to do, and where they get their professional prestige.

    * They are more keenly aware than non-experts of the many difficulties there will be in the actual implementation of speculative ideas. While they may know intellectually that these difficulties are not insurmountable in principle, as experts they’re overwhelmed by the amount of work yet to be done, and tend to assume it’ll never happen.

    • Stephen Diamond

      They have a lot to lose as experts if the speculations turn out wrong, and by their nature speculations are…speculative.

      Long-term speculation is hard to falsify until its propounders are safely dead. I suspect this is the reason for reluctance: it may seem a cheap way to get acclaim without empirical responsibility or consequences.

      • Dave Lindbergh

        Another plausible hypothesis.

        Robin in his post today (http://www.overcomingbias.com/2016/06/unauthorized-topics.html) offered two more:

        * “we often have academics who visit for lunch and take the common academic stance of reluctance to state opinions which they can’t back up with academic evidence”

        * “One does not express serious opinions on topics not yet authorized by the proper prestigious people.”

        Of course some of these hypotheses reinforce each other.


  • Tagore Smith

    I don’t find Tyler’s argument very convincing, partially for the reasons you mention, and partially because I think people (me included) aren’t very good at predicting the future.

    I’d like to see evidence that market prices showed signs of the advent of the internet (surely an economically significant event) 100 years ahead of said advent. In fact, the markets hadn’t caught up even in, say, ’93, as far as I can tell. But Vannevar Bush told us it would happen, even if he was wrong about some of the details. So sometimes visionaries are right, and markets lag. Figuring out which visionaries to believe (they’re mostly cranks) is an unsolved problem.

    Will we make ems? I don’t think I know enough to say, either way.

  • DoesntMatter

    I would suggest that most “neuroscientists” aren’t really in a position to say what is or isn’t possible even within the field of neuroscience. Most of those I have worked with, or whose talks I have attended, have little clue about the physics that underlies these systems or the physics of the tooling they rely on, and many are not even able to process the raw data collected from such systems. It is a joke. And this is coming from someone who does not have a degree, yet managed to work in a research lab and get published in NeuroImage earlier this year, just because I could do the work while not having to worry about my living situation or my next meal, and thought it would be interesting (how often does one get the opportunity to write software for brain waves?).

    – The cells in the human body are at minimum ~2 microns in size. Beamforming techniques with EEG and MNI head models can allow ~1 mm spatial precision (http://dx.doi.org.sci-hub.cc/10.1016/j.neuroimage.2007.04.054) given a high enough sensor count and optimal positioning for ROI locations; adding other techniques for decorrelating signal from noise can drop you into the 10-100 micron range today.

    – Depending on sampling rate and sensor count (at least in EEG), you collect anywhere between tens of MB and hundreds of GB of data per minute (a quick sketch of that arithmetic is below). But I think raw volume is a red herring: depending on how you represent or process the data in real time, you could collect less while still having very accurate and precise measurements for any given threshold. I’ve constructed beamformers from datasets where the raw data was around 2 GB per minute by “training” on higher-order moments, where I only needed to store 10 MB of it to get similar weights. I think fMRI is good for structural imaging, and plenty of that data is already open to the public from some universities, which is useful for setting up the problem; but its temporal resolution is poor, and it is way too expensive and impractical for future everyday use (where you’d want to be collecting data to model such human behaviors anyway).
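    A quick sketch of that data-rate arithmetic, with illustrative channel counts and sample rates (not any particular rig):

    ```python
    # Rough raw data rate: channels * samples/sec * bytes/sample,
    # scaled to one minute. Both setups below are illustrative.
    def mb_per_minute(channels, sample_rate_hz, bytes_per_sample=4):
        return channels * sample_rate_hz * bytes_per_sample * 60 / 1e6

    print(mb_per_minute(128, 1000))       # modest setup: ~31 MB/min
    print(mb_per_minute(4096, 30000, 8))  # dense high-rate array: ~59 GB/min
    ```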

    – Any talk of algorithms/libraries for computing any of the above will be lost on most neuroscientists, who are far more worried about their next paper or grant, or about talking to some “hot” tech company like Magic Leap (who haven’t shipped a product yet still pitch all the amazing things they can do…), than about actually trying to achieve some tangible goal that might matter to most people. For example, even the paper I cited above talks about how “hard” it is to do PCA or ICA for noise decorrelation, yet even in 2007 there were algorithms that could do this efficiently, like the Jacobi-Davidson method (http://netlib.org/utk/people/JackDongarra/etemplates/node340.html), and today we have computing libraries like Armadillo (http://arma.sourceforge.net/) that use the Jacobi-Davidson method and can link against libraries like OpenBLAS for parallelism out of the box; stuff like this has been around for years! (A minimal sketch of the core decomposition is below.) People in the computational physics or chemistry communities are much better placed to tackle some of these problems, but they are more concerned with their respective fields.
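    A minimal sketch of the decomposition behind PCA-based noise reduction, using numpy’s dense eigensolver; for very large problems you’d swap in an iterative solver like Jacobi-Davidson:

    ```python
    import numpy as np

    # Minimal PCA reduction for multichannel data: eigendecompose the
    # channel covariance and project onto the leading components. This is
    # the dense textbook version; large sparse problems need iterative
    # eigensolvers (e.g., Jacobi-Davidson) instead.
    def pca_reduce(data, n_components):
        # data: (n_channels, n_samples)
        centered = data - data.mean(axis=1, keepdims=True)
        cov = centered @ centered.T / centered.shape[1]
        eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
        top = eigvecs[:, -n_components:][:, ::-1]   # leading components
        return top.T @ centered                     # (n_components, n_samples)

    # Example: 32 channels, 10 s at 1 kHz, keep the 8 strongest components.
    rng = np.random.default_rng(0)
    reduced = pca_reduce(rng.standard_normal((32, 10000)), 8)
    print(reduced.shape)  # (8, 10000)
    ```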

    – I think there needs to be more experimentation that’s not driven by academia status games, but the tooling is expensive now. I’m working on open-source hardware/software (https://github.com/cinquemb/EEGaqui_fpga_headstage), a fork of a system (https://open-ephys.atlassian.net/wiki/display/OEW/Acquisition+Board) priced between $2-5k, where I want to bring such capabilities down to around $500 (including electrode costs, but no cap). That should put it at a level where hobbyists can explore more freely and unguided with it, though I mostly work on it in my free time when I don’t feel burned out by life (it would be awesome if others could help). The only market force I can see now is people trying to upsell headphones with one or two electrodes in them, marketed for making people feel “calm” or whatever new-age bullshit; a lot of academics see money there and are in conversations with such companies. Personally I see that as a distraction, because most people aren’t interested in paying those prices now, don’t demand such products now, and nothing suggests they will in the future. But people pay a lot for video games, and I can see that people would want a hands-free experience while playing them, which is why I’m more interested in applying the technology to the gaming market than to the “old people” market of meditation retreats and such (too in-person, too fractured, doesn’t scale, doesn’t help people collect data in the “real” world to better understand these processes).

    I could go on and on, but I’ll stop here today 😛

  • Daniel Levitin

    I agree with Dr. Hanson that it’s a matter of when, not if, and I think that most neuroscientists would agree, too. As pointed out, we already have done something like this for the human auditory system with cochlear implants. For space, a number of things I originally wrote in my review got edited out. Regarding the timing of the age of em, I wrote “I’m only quibbling about the time scale not the substance of the prediction.”

    I also mentioned a practical, personal reason for wanting the age to come in my lifetime. “I’ve been struggling with a certain guitar passage. I’ve seen other people play it, I’ve watched instructional videos. If I could know what it feels like for Stephen Stills to play it, I could jump-start my fingers and brain to actually do it.”

  • Charlene Cobleigh Soreff

    “I’m happy to accept neuroscientist expertise, but mainly on how hard it is to scan brain cells and model them on computers.”

    I’m skeptical about the “scan” part, not the “model” part.

    Eventually, I expect that modeling brain cells will be possible to whatever degree of accuracy we need. We already use simulated neural nets for economically viable applications.

    To my mind, the question is whether it will be cheaper to train neural nets or to scan an existing person’s brain in order to acquire economically useful expertise. I think training will be cheaper. We already do it and find it useful, while scanning requires developing a whole new technology with very fine-grained resolution. In principle, we could develop a scanning technology. In practice, I think it will lose the race with training neural nets (and other AI techniques providing similar capabilities).