Computing Cost Floor Soon?

Anders Sandberg has posted a nice paper, Monte Carlo model of brain emulation development, wherein he develops a simple statistical model of when brain emulations [= “WBE”] would be feasible, if they will ever be feasible:

The cumulative probability gives 50% chance for WBE (if it ever arrives) before 2059, with the 25% percentile in 2047 and the 75% percentile in 2074. WBE before 2030 looks very unlikely and only 10% likely before 2040.

My main complaint is that Sandberg assumes a functional form for the cost of computing vs. time that requires this cost, relative to the funding ever available for a brain emulation project, to soon hit an absolute floor below which it never falls. His resulting distribution has costs approaching this floor by about 2040:

[Figure: Sandberg timing model, with computing costs leveling off at the floor around 2040]
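
To make the shape of that assumption concrete, here is a minimal sketch of a cost curve that declines logistically to a hard floor; the parameters below are illustrative placeholders, not Sandberg's fitted values.

```python
import numpy as np

def log10_cost(t, c_start=0.0, c_floor=-12.0, t_mid=2020.0, width=5.0):
    # Logistic decline of log10(cost per unit of computation): starts near
    # c_start and falls toward the hard floor c_floor around t_mid.
    return c_floor + (c_start - c_floor) / (1.0 + np.exp((t - t_mid) / width))

years = np.arange(2000, 2101)
cost = 10.0 ** log10_cost(years)
# With this shape the curve is within a factor of ~2 of its floor by 2040
# and never falls further, however long one waits.
print(cost[years == 2040][0] / 10.0 ** -12.0)
```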

As a result, Sandberg finds a big chance (how big he doesn’t say) that brain emulations will never be possible – for eons to follow it will always be cheaper to compute new mind states via floppy proteins in huge messy bio systems born in wombs, than to compute them via artificial devices made in factories.

That seems crazy implausible to me. I can see physical limits to physical parameters, and I can see the rate at which computing costs fall slowing down. But having the costs of artificial computing soon stop falling forever is much harder to see, especially with such costs remaining far higher than the costs of natural bio devices that seem pretty far from optimized. And having the amount of money available to fund a project never grow seems to say that economic growth will halt as well.

Even so, I applaud Sandberg for his efforts so far, and hope that his or others’ successor models will be more economically plausible. It is an important question, worthy of this and more attention.

  • Anders Sandberg

    Well, the “soon” part is debatable. I don’t think you should make *too* much of it in this version of the paper – I will soon update it with more up-to-date Moore’s law data (Nordhaus 2007) and that might shift things.

    The real reason the floor occurs soon is that this is what typically happens when you fit a sigmoid to a noisy dataset that has not yet had a clear inflection point. I played around with this a while ago ( http://www.aleph.se/andart/archives/2011/05/why_i_dont_trust_hubbert_peak_arguments.html ) and found that the sigmoid predictions based on pre-inflection point data typically predict inflection points close to the end of the data set (“soon”) or have a long, thin tail of very optimistic (in the case of Moore’s law) scenarios.
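
    A quick way to see this effect is to fit a logistic to synthetic, noisy data that stops before the true inflection point and look at where the fitted inflection points land. A minimal sketch (synthetic data only, not the actual Moore's law series):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def logistic(t, L, k, t0):
        # Sigmoid with ceiling L, steepness k, and inflection point t0.
        return L / (1.0 + np.exp(-k * (t - t0)))

    # A "true" curve whose inflection (t0 = 50) lies beyond the observed window.
    t_obs = np.arange(0.0, 30.0)
    y_true = logistic(t_obs, L=100.0, k=0.15, t0=50.0)

    fitted_t0 = []
    for _ in range(200):
        y_noisy = y_true * np.exp(rng.normal(0.0, 0.1, size=t_obs.size))
        try:
            popt, _ = curve_fit(logistic, t_obs, y_noisy, p0=[50.0, 0.1, 30.0],
                                bounds=([1.0, 0.01, 0.0], [1e4, 1.0, 200.0]))
            fitted_t0.append(popt[2])
        except RuntimeError:
            pass  # skip the occasional fit that fails to converge

    # Compare the fitted inflection times with the true value (50) and the end
    # of the observed data (30) to see how weakly the data pins them down.
    print(np.percentile(fitted_t0, [10, 25, 50, 75, 90]))
    ```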

    In the end, my paper is not trying to make a great prediction of Moore’s law but rather to generate a not too implausible set of scenarios to drive the rest of the analysis. Maybe they are too pessimistic (the horizontal asymptote of the sigmoid is typically many orders of magnitude below the 10^40 ops/kg/s estimate of Seth Lloyd for molecule-based computing) but this can be seen as a conservative assumption. As more data arrives, they can be improved.

    • http://overcomingbias.com RobinHanson

      I agree that future development of the model may moderate this aspect I don’t like. I still think using a sigmoid functional form is a big part of the problem. But even so: good work.

      • http://www.aleph.se/andart Anders Sandberg

        Thanks! What functional form would you like to try?

      • http://overcomingbias.com RobinHanson

        How about a sigmoid in growth rates between initial and final positive growth rates?

      • http://www.aleph.se/andart Anders Sandberg

        That is essentially a piece-wise exponential model. Not too hard to fit (especially since the Nordhaus 2001 data is essentially a constant up to 1940 and then a single exponential).
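
        As a sketch of what a sigmoid-in-growth-rates curve looks like (placeholder rates and transition year below, not values fitted to the Nordhaus data):

        ```python
        import numpy as np

        def growth_rate(t, g_early=0.01, g_late=0.2, t_switch=1940.0, width=5.0):
            # Annual growth of log10(performance per dollar): slides smoothly
            # (sigmoidally) from g_early to g_late around t_switch.
            s = 1.0 / (1.0 + np.exp(-(t - t_switch) / width))
            return g_early + (g_late - g_early) * s

        years = np.arange(1900, 2061)
        log_perf = np.cumsum(growth_rate(years))  # numerically integrate the rate
        log_perf -= log_perf[0]                   # normalize to zero at the start
        # Away from the transition this is two straight lines in log space,
        # i.e. essentially a piecewise exponential that never flattens out.
        ```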

        I am somewhat concerned about it reaching unphysical levels quickly: while 10^40 ops/kg/s looks plausible as a limit, we actually do not have great existence proofs of anything close to it, nor any sense of how likely computation beyond it would be. If exponential growth never stops, even absurdly computationally expensive scenarios (simulating every atom in the brain) will eventually arrive.

        Still, I might do a fit and have it in an appendix: a bit of perturbation testing of how much conclusions change when the model changes is always healthy.

      • http://overcomingbias.com RobinHanson

        Ah, I wasn’t thinking about pre-1940 data. Then how about a model with three growth rates, and sigmoid transitions between them?

        Cost per computation could keep falling even if computation per atom hits a limit.

      • http://www.gwern.net/ gwern

        > Then how about a model with three growth rates

        Or better yet, could one take the http://pcdb.santafe.edu/ and use random selection from its curves?

        That is, exclude all the curves which are arguably caused by Moore’s law (which is probably everything in ‘Information/Communication’), pick one at random, take its best-fit line, and use that as the post-sigmoid growth rate.
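
        A sketch of that procedure, assuming the pcdb.santafe.edu data has been exported to a CSV with hypothetical columns domain, tech, year, log_performance (the real export format and labels will differ):

        ```python
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)

        # Hypothetical local export of the performance curve database.
        curves = pd.read_csv("performance_curves.csv")

        # Exclude curves plausibly driven by Moore's law itself.
        non_ict = curves[curves["domain"] != "Information/Communication"]

        def sample_post_sigmoid_rate():
            # Pick one non-ICT technology at random and return the slope of its
            # best-fit line in log space, as a candidate post-transition rate.
            tech = rng.choice(non_ict["tech"].unique())
            c = non_ict[non_ict["tech"] == tech]
            slope, _intercept = np.polyfit(c["year"], c["log_performance"], 1)
            return slope

        candidate_rates = [sample_post_sigmoid_rate() for _ in range(1000)]
        ```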

      • Anders Sandberg

        If I remember right, Bela Nagy tested a largish set of curves against this data – exponentials, hyperbolic curves, various other power laws. I’ll dig up the paper, but I think the conclusion was that it pointed towards a combination of Wrightean learning and expanding production producing a power law.

      • Anders Sandberg

        I played around with piecewise exponentials today, and found a problem with them. If you give the curve the freedom to bend at a future point in time where there is no data, then of course it can turn in any direction without producing a bad mean square error. So the best we can do in this case is to assume the most recent growth rate continues forever. That seems a bit unsatisfactory.

        One approach that might be doable is to get expert opinion about possible upper limits to computation and use that as a distribution for the endpoint of the sigmoid. How much one should trust such opinion is of course another matter, but I am already using expert opinion from the WBE meeting about the necessary resolution.

      • http://overcomingbias.com RobinHanson

        There was a slow growth rate before 1940, then a Moore's law rate after that. I'd think that if the gains have been slowing down lately, then the third term would fit that. But the data won't determine when it stops slowing down and settles to a new third growth rate. So you could pick a distribution over that new third growth rate and do the analysis that way.
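
        One way to make that concrete, sketched with placeholder transition years and rates, and an arbitrary lognormal prior standing in for a considered distribution over the third rate:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        def growth_rate(t, g1, g2, g3, t12=1940.0, t23=2010.0, width=5.0):
            # log10 growth rate with two sigmoid transitions: g1 -> g2 around
            # t12 (pre-1940 vs. Moore's-law era), then g2 -> g3 around t23.
            s12 = 1.0 / (1.0 + np.exp(-(t - t12) / width))
            s23 = 1.0 / (1.0 + np.exp(-(t - t23) / width))
            return g1 + (g2 - g1) * s12 + (g3 - g2) * s23

        years = np.arange(1900, 2101)
        scenarios = []
        for _ in range(1000):
            # The data cannot pin down the eventual third rate, so draw it from
            # a prior; expert elicitation could supply a better distribution.
            g3 = rng.lognormal(mean=np.log(0.05), sigma=0.7)
            rates = growth_rate(years, g1=0.01, g2=0.2, g3=g3)
            log_perf = np.cumsum(rates)
            scenarios.append(log_perf - log_perf[0])
        ```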

  • IMASBA

    “As a result, Sandberg finds a big chance (how big he doesn’t say) that brain emulations will never be possible – for eons to follow it will always be cheaper to compute new mind states via floppy proteins in huge messy bio systems born in wombs, than to compute them via artificial devices made in factories.”

    That would indeed be odd. Even if biological computers are the most efficient computers possible, we could eventually learn how to make them in factories and program and interface with them as we wish, the end result still being something mass-produced that could run an EM.

    • http://www.aleph.se/andart Anders Sandberg

      Yes, obviously a human-like mind can be run on 1.4 liters of the right kind of matter. However, the particular scenario of scanning/interpretation/simulation may require far more overhead than just making brains de novo.

      • IMASBA

        EMs come with memories, so you save on education and upbringing costs (multiple times, because you are guaranteed the individual you want, while raising children does not guarantee they end up like you want them to); plus it's a vanity thing: people wanting to make copies of themselves.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        plus it’s a vanity thing: people wanting to make copies of themselves.

        At the expense of their personal existence? (I understand that copying will almost certainly be destructive.)

      • Ronfar

        If you’re dying of cancer or something, even a destructive copy might be worthwhile.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Will people still be dying of such things a century out?

      • IMASBA

        They’ll be dying of something, for this discussion it doesn’t matter whether that’s cancer at age 80 or some new form of dementia at age 150.

      • IMASBA

        There are quite a lot of people who believe the EM would be “them”. Though of course none of them have ever had to put that belief to the test.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        They may think an EM would be “them” and still not want to become EMs: their minds may be the same but their bodies are not. Even bracketing the economics, will persons flourish as ems to the degree that they might as humans? Will being an EM (as opposed to being a human) ever be an enviable state?

      • IMASBA

        Well, old people might see it as an option: incredibly long life, no more health issues. I’m sure Robin Hanson would want it. If it’s technologically possible it will happen.

      • IMASBA

        In any case, EMs are possible (we are living proof of that), so any model that assigns a non-zero chance to the impossibility of EMs must be taken with a grain of salt (perhaps a very small grain of salt if the other attributes of the model are very useful and based on the fact that many useful functions have infinitely long tails, but a grain of salt nevertheless).

  • QM

    IIRC, there’s some research on reversible computing to the effect that computing is theoretically reversible, recovering all of the energy, to within an arbitrarily small epsilon. This has to reconcile with the floor of energy required to “read” a bit, before reversing the computation — which I don’t recall any specific of, but Heisenberg’s probably comes into play, though there are probably some tricks even there.

    So I find it difficult to believe there will be a computing cost floor for a very long time.
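
    (For scale, the floor usually cited for irreversible computation is the Landauer bound of kT·ln 2 per erased bit; reversible logic is the proposed way around it.)

    ```python
    import numpy as np
    from scipy.constants import Boltzmann  # k = 1.380649e-23 J/K

    T = 300.0  # kelvin, roughly room temperature
    landauer = Boltzmann * T * np.log(2)   # minimum energy to erase one bit
    print(f"{landauer:.2e} J per irreversibly erased bit")  # ~2.9e-21 J
    ```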

    • VV

      This seems to be a non sequitur.
      Just because near-reversible computers are theoretically possible, in the same sense that near-perpetual motion machines are theoretically possible, it doesn't mean that the practical cost of computation is not going to hit a floor anytime soon.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Don’t Robin’s social forecasts depend on the development of nondestructive copying methods? Whereas, Sandberg believes scanning will be destructive.

    Does the similarity of their time estimates conceal that they are forecasting fundamentally different inventions: Sandberg, destructive copying; Hanson, nondestructive copying?

    • http://www.aleph.se/andart Anders Sandberg

      I don’t make any assumptions about the destructiveness. In fact, I don’t see how it would change my model. (One could argue that nondestructive scanning is much harder and would require a far longer development time than destructive scanning, hence pushing estimates future-ward. But then one would need to explain why there is no use of earlier easier destructive methods.)

      Also, I think Robin doesn’t base his models on nondestructive scanning either.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        I don’t make any assumptions about the destructiveness. In fact, I don’t see how it would change my model…. Also, I think Robin doesn’t base his models on nondestructive scanning either.

        It’s not completely clear to me whether Robin assumes nondestructive scanning, but his rhetoric seems to imply it. He speaks of people’s incentives to make copies of themselves rather that turning themselves into copies. He hasn’t (to my knowledge) explored whether destructive scanning leads to his societal conclusions: would destructive scanning eventuate in an EM society?

      • IMASBA

        You only need one initial “sacrifice”; after that you can keep copying and altering traits to end up with a large and diverse EM population. Perhaps an initial sacrifice isn't even needed: constructing an EM from the ground up might be possible.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        One way destructive-only scanning alters the landscape is that it pits the interests of the mass of organics against the EMs' interests.
