Whither Manufacturing?

Back in the 70s many folks thought they knew what the future of computing looked like: everyone sharing time-slices of a few huge computers.  After all, they saw that CPU cycles, the main computing cost, were cheaper on bigger machines.  This analysis, however, ignored large administrative overheads in dealing with shared machines.  People eagerly grabbed personal computers (PCs) to avoid those overheads, even though PC CPU cycles were more expensive. 

Similarly, people seem to make lots of assumptions when they refer to "full-scale nanotechnology."  This phrase seems to elicit images of fridge-sized home appliances that, when plugged in and stocked with a few "toner cartridges", make anything a CAD system can describe, and so quickly and cheaply that only the most price-sensitive folks would consider making stuff any other way.  It seems people learned too much from the PC case, thinking everything must become personal and local.  (Note that computing is now getting less local.)  But there is no general law of increasingly local production.

The locality of manufacturing, and of computing as well, has always come from tradeoffs between economies and dis-economies of scale. Things can often be made cheaper in big centralized plants, especially if located near key inputs.  When processing bulk materials, for example, there is a rough 2/3 cost power law: throughput goes as volume, while the cost to make and manage machinery tends to go as surface area.  But it costs more to transport products from a few big plants.  Local plants can offer more varied products, explore more varied methods, and deliver cheaper and faster.
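
The 2/3 power law above can be sketched numerically (a toy illustration with made-up scale constants, not data from any real plant): if throughput grows with volume (~L^3) while machinery cost grows with surface area (~L^2), then total cost scales as throughput^(2/3), so cost per unit of output falls as throughput^(-1/3).

```python
# Toy illustration of the rough 2/3 cost power law for bulk processing:
# throughput ~ volume (~L^3), machinery cost ~ surface area (~L^2),
# hence total cost ~ throughput^(2/3) and unit cost ~ throughput^(-1/3).

def unit_cost(throughput, k=1.0):
    """Cost per unit of output for a plant of the given throughput."""
    total_cost = k * throughput ** (2 / 3)
    return total_cost / throughput

# Scaling a plant up 8x in throughput halves the unit cost,
# since 8 ** (-1/3) == 1/2.
ratio = unit_cost(8.0) / unit_cost(1.0)
print(round(ratio, 6))  # 0.5
```

This is the centralizing side of the tradeoff; the transport, variety, and speed costs described next pull the other way.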

Innovation and adaptation to changing conditions can be faster or slower at centralized plants, depending on other details. Politics sometimes pushes for local production to avoid dependence on foreigners, and at other times pushes for central production to make secession more difficult. Smaller plants can better avoid regulation, while larger ones can gain more government subsidies.  When formal intellectual property is weak (the usual case), producers can prefer to make and sell parts instead of selling recipes for making parts.

Often producers don’t even really know how they achieve the quality they do.  Manufacturers today make great use of expensive intelligent labor; while they might prefer to automate all production, they just don’t know how.  It is not at all obvious how feasible "full nanotech" is, if defined as fully automated manufacturing, in the absence of full A.I.  Nor is it obvious that even fully automated manufacturing would be very local production.  The optimal locality will depend on how all these factors change over the coming decades; don’t be fooled by confident conclusions based on only one or two of these factors.  More here.

  • michael vassar

    Of the five scenarios described, self-reproduction fairly strongly implies overcapacity in the fairly short term. Atom precision fairly strongly suggests self-reproduction, while overcapacity fairly strongly suggests that general plants will be favored.

    Local manufacture seems to me to be almost irrelevant to the radicalness of molecular nanotechnology; rather, it seems to be a feature of a story about the future that Drexler invented accidentally and accidentally coupled to the meme complex he generated around nanotechnology. I can’t see any strong argument for it, while I can see many strong arguments against it, such that it actually seems less likely to me after MNT than it would today if I didn’t already know that today manufacture is highly non-local. However, if MNT greatly changes the world, I should widen my confidence intervals a lot about what the world after it is integrated will look like.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I have no objection to most of this – the main thing I think deserves pointing out is the idea that you can serve quite a lot of needs by having “nanoblocks” that reconfigure themselves in response to demands. I’d think this would be a localizing force with respect to production, and a globalizing force with respect to design.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, the less local is manufacturing, the harder it will be for your super-AI to build undetected the physical equipment it needs to take over the world.

  • David

    “This phrase seems to elicit images of fridge sized home appliances that, when plugged in and stocked with a few “toner cartridges”, makes anything a CAD system can describe”

    From what I’ve read, I get the impression that nobody working in nanoscience today takes this idea seriously. See for example the debate between Chris Phoenix and Philip Moriarty.

  • http://liveatthewitchtrials.blogspot.com/ davidc

    Does individual manufacturing even require nanotech? Something like RepRap* offers the possibility of similar things. The tradeoffs described between big and small manufacturers still hold, but the motto of these rapid prototypers, “Wealth without money…”, does indicate a serious change in mindset.

    *http://reprap.org/bin/view/Main/WebHome

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Robin, a halfway transhuman social intelligence should have no trouble coming up with good excuses or bribes to cover nearly anything it wants to do. We’re not talking about grey goo here, we’re talking about something that can invent its own cover stories. Current protein synthesis machines are not local – most labs send out to get the work done, though who knows how long that will stay true – but I don’t think it would be very difficult for a smart AI to use them “undetected”, that is, without any alarms sounding about the order placed.

  • http://knol.google.com/k/james-miller/james-miller/1j9f9ffxxeue5/1# James Miller

    Robin,

    But centralized production would make it easier for a military with a smart AI to build undetected the physical equipment it needs to take over the world.

  • billswift

    Remember what you wrote about administrative overheads? Many people who try to actually do things are often frustrated by the lousy quality and availability of tools to do it with. I don’t expect small nanofactories to become common for a very long time, but larger “home shop” type machines that possibly use some specialized nano components are quite likely when they become possible for custom work and prototyping.

    In fact, I agree with an article I read about eight or ten years ago: nanofactories will be just as specialized in the beginning as ordinary factories. The fully general assemblers are quite a bit longer term.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, it might take more than a few mail order proteins to take over the world.

    James, if there are only three shipyards capable of assembling state-of-the-art ships, then one need only monitor those three yards to detect new ships. If there are thousands of top shipyards, however, monitoring will be lots harder.

  • http://knol.google.com/k/james-miller/james-miller/1j9f9ffxxeue5/1# James Miller

    Robin,

    You are right if (a) the monitor is the government and (b) you need only one yard regardless of the yard’s size. But what if (1) the primary chance of disclosure comes from the press or from an inside whistleblower and (2) you need a certain percentage of industry capacity? In this case detection is more likely with diffuse production.

    Also, if it is the military trying to secretly build ships (and there were no monitors that had the legal right to inspect) then it would be easier if there were a small number of big yards.

    The chance of a secret being kept decreases exponentially as the number of people in on the secret increases.
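
Miller’s exponential claim can be made concrete with a toy model (the per-person probability is an illustrative assumption, not a figure from the thread): if each of n insiders independently keeps the secret with probability p, the secret survives with probability p^n, which decays exponentially in n.

```python
# Toy model: each of n insiders independently keeps the secret with
# probability p; the secret as a whole survives with probability p**n.

def p_secret_kept(n_insiders, p_per_person=0.99):
    """Probability the secret holds, assuming independent insiders."""
    return p_per_person ** n_insiders

# A handful of managers at one big yard vs. managers across 1,000 yards
# (illustrative numbers only):
print(round(p_secret_kept(10), 3))   # 0.904
print(p_secret_kept(1000) < 1e-4)    # True
```

Under these assumptions, concentrating production in a few big plants sharply shrinks the number of necessary insiders, which is exactly Miller’s point.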

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    James, I’m not following you. Larger centralized shipyards should contain more potential whistleblowers.

    Robin, why does it realistically take more than a few mail order proteins to take over the world? Ribosomes are reasonably general molecular factories and quite capable of self-replication to boot.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, I guess I’m just highlighting the extreme degree of intelligence postulated, that this week-old box that has made no visible outside mark beyond mail-ordering a few proteins knows enough to use those proteins to build a physically-small manufacturing industry that is more powerful than the entire rest of the world.

  • William Newman

    Robin: Once you grant the premise of a computer so brilliant at protein design that it can crank out hundreds or thousands of protein designs that it wants synthesized, then I think you should grant that it might only need to mail order one protein. E.g., design and mail-order a gene which you splice into a bacteriophage, after which the bacteriophage synthesizes a plasmid base by base as you direct by toggling the color of ambient light and/or ambient temperature. Once you have such a phage, you should only need a few LEDs, heater filaments, and 1988-tech-level microcontrollers in order to cobble together more synthetic capability (with feedstock = simple nutrient broth) than you could possibly need for bootstrapping.

    However, I don’t think the premise of such an affordable technologically-brilliant AI leapfrogging the rest of the world is very likely. And I think before it becomes likely, we should have some warning. (Chiefly, growing wonder at how cheap it has become to buy compute servers much more powerful than the human brain, and growing frustration at still being completely creatively blocked on getting them to think.)

    Given inspired AI design (probably helped by the AI itself), how much computer hardware oomph would be required to leapfrog current technology so thoroughly that a homebrew basement fab could go into double-every-two-weeks mode while the rest of the world slogs on doubling every twenty years? My guess (with cosmology-style error bars on the order of magnitude) is as much computation as 1000 Einstein or Edison brains running for a year. Right now that’s a seriously expensive amount of hardware. For some years it will remain a pretty expensive amount of hardware. Moore’s Law does seem to be on track to make it affordable, so if we lag enough in figuring out how to use the affordable hardware, then in a decade we might be edging into a supersaturated situation where someone’s basement tinkering could go FOOM, and maybe in two decades the possibility will be constantly weighing on my mind. But my guess is that before we reach that kind of supersaturation, we’ll have so much AI in so many places that a one-basement event won’t have a chance to stand out, because the world will already be recognizably into singularity, with weirdness doubling on a timescale of weeks. (Or will have dodged the singularity by WWIII or something.)

    Note that one Buffett*5 brain year is probably worth many billions of dollars. Also, having a Bonaparte*5 brain on staff is probably a big force multiplier for military preparations which cost many, many billions of dollars. Thus, well before an under-the-radar organization can afford the computer power to revolutionize the world in their basement, huge organizations are likely to find it cost-effective to spend billions of dollars per genius-brain-year-equivalent for this class of computer.

    It could also be cost-effective to pay that rate for scientific/engineering AI supergenius — I can easily believe that an Edison*5 brain year could be worth many billions of dollars if targeted at the right low-hanging fruit. But I doubt the low-hanging fruit will end up turning industrial production tradeoffs inside out so that all of the next wave of technology emerges from a single basement somewhere.

    I think if someone wants to worry about One Basement To Rule Them All, a more plausible risk is that the low-hanging fruit happen to be technologies that turn security tradeoffs upside down, so that the world security situation is suddenly very fragile. What if the lowest-hanging fruit is a novel way of repurposing artificial cochlea technology, rewiring people’s pleasure centers (or whatever) in such a way that a fifteen-minute outpatient procedure, less difficult than lasik, completely rewrites anyone’s loyalties? Or an insanely effective biological weapon – uncountable varieties of anthrax on steroids, each with its corresponding cheap effective antiserum for use by your own troops and loyalists, and with various weaponization practicalities (dispersal as ultrafine aerosol, e.g.) solved by the organism itself? It is a bad thing if the obvious way to use a new development is to jump into total war against the rest of mankind, with a decent chance of winning. But that’s not particularly a problem of AI, as far as I can see. We’d face very much the same problem if such destabilizing weaponizable technologies (and not their countermeasures) turned out to be the low-hanging fruit for ordinary no-AI-assistance technological advance.

  • Tim Tyler

    We have a fairly good idea about roughly what molecular manufacturing will look like initially – in terms of the spatial distribution of its outputs – since we have the similar examples of printing and rapid prototyping to work from.

    Printers currently sit on our desks – but if we want a book or a magazine, we usually go to the shops.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Ergh, just realized that I didn’t do a post discussing the bogosity of “human-equivalent computing power” calculations. Well, here’s a start in a quick comment – Moravec, in 1988, used Moore’s Law to calculate how much power we’d have in 2008. He more or less nailed it. He spent a lot of pages justifying the idea that Moore’s Law could continue, but from our perspective that seems more or less prosaic.

    Moravec spent fewer pages than he did on Moore’s Law justifying his calculation that the supercomputers we would have in 2008 would be “human-equivalent brainpower”.

    Did Moravec nail that as well? Given the sad state of AI theory, we actually have no evidence against it. But personally, I suspect that he overshot; I suspect that one could build a mind of formidability roughly comparable to human on a modern-day desktop computer, or maybe even a desktop computer from 1996; because I now think that evolution wasn’t all that clever with our brain design, and that the 100Hz serial speed limit on our neurons has to be having all sorts of atrocious effects on algorithmic efficiency. If it was a superintelligence doing the design, you could probably have roughly-human-formidability on something substantially smaller.

    Just a very rough eyeball estimate, no real numbers behind it.

  • http://knol.google.com/k/james-miller/james-miller/1j9f9ffxxeue5/1# James Miller

    “James, I’m not following you. Larger centralized shipyards should contain more potential whistleblowers.”

    Let’s say you need 1/3 of the industrial capacity to build a secret ship. If there are three big yards you will need just one of them, so you just have to get the top managers of this plant to keep the secret. But if there are 3,000 ship yards (of equal size) then you would need to have the managers of 1,000 ship yards keep the secret, something that will be very difficult.

    Also, there will be some variance in the cultures of shipyards in terms of how good they are at keeping secrets. So the more shipyards that are in on the secret the more likely it is that you will have one that doesn’t succeed in keeping the secret.

    Let’s say a government designed weapon could either be built by one company or 1,000 independent scientists who collaborate. Wouldn’t it be easier to keep the weapon a secret if it were built by just the one company?

    If you just need one shipyard regardless of its size to build the ship then you would be right.

  • luzr

    “because I now think that evolution wasn’t all that clever with our brain design, and that the 100Hz serial speed limit on our neurons has to be having all sorts of atrocious effects on algorithmic efficiency.”

    I think you might be right. Also, brains were not invented to think, in the first place.

    To me, it all really seems to be a problem of software. We are searching for a “god’s algorithm”. My gut feeling is that it will be something relatively simple; I bet when somebody finally finds it, we will all wonder why it didn’t happen much sooner.

  • Latanius

    Isn’t the question “how to structure our plants efficiently” simply irrelevant given full scale nanotechnology? It’s like “where to put my only mp3 file so that the most people can listen to it”. Isn’t full scale about eliminating scarcity? And a fridge-size fab is probably too big… even evolution could build much smaller self-replicating machines.

    Non-full scale “nano” (which is not fully self-replicating) is not a new thing, see the big silicon fabs. (The self-replicating parts of today’s computers are called “software”.)

    And I wouldn’t worry about watching over big fabs just in case a superhuman AI wants to do something nasty… I’m sure it would be more creative than that.

  • michael vassar

    william newman: Nice analysis. I’m glad to have noticed your posts and would be happy to follow up if you email michael aruna at yahoo dot you guess. Anyway, I roughly agree with your sense that we are about 1-2 decades from supersaturation, but based on what I see from AI research I definitely don’t expect to see AGI on that time frame. I’d be interested in your reasoning WRT expectations, because nothing I’m seeing today seems all that much more impressive, on the software front, than what we had 20 years ago, and Laplace’s rule of succession tells me that this implies that I should give at least a 2/3 chance to nothing staggering on the software front in the next 20 years.

  • michael vassar

    To clarify the above: at a best guess, human-equivalent AGI is probably impossible with 10^4 FLOPS, humanly impossible with 10^10 FLOPS, fairly easy (e.g. a large but routine engineering project) with 10^22 FLOPS, and trivial with 10^28 FLOPS. When I say supersaturated I mean relative to the likely efficiency and power of AGI that might come out of revolutionary new understanding of cognition, for which I consider Moravec’s and Kurzweil’s estimates to be highly reasonable.
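
For a rough sense of scale (a back-of-envelope sketch, assuming a petaflop, 10^15 FLOPS, starting point and a doubling time of 18 months – both assumptions, not figures from the thread), the time to reach these thresholds under steady Moore’s-Law doubling is:

```python
import math

# Back-of-envelope: years for steady hardware doubling (assumed every
# 18 months) to carry a 10^15 FLOPS machine to a target FLOPS level.

def years_to_reach(target_flops, start_flops=1e15, doubling_years=1.5):
    doublings = math.log2(target_flops / start_flops)
    return doublings * doubling_years

print(round(years_to_reach(1e22)))  # 35  ("fairly easy" threshold)
print(round(years_to_reach(1e28)))  # 65  ("trivial" threshold)
```

The point of the sketch is only that the gap between today’s supercomputers and the “fairly easy” threshold is a few decades of doublings, if the doubling trend holds that long.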

  • Ben Jones

    Eliezer, it might take more than a few mail order proteins to take over the world.

    Viruses are proteins.

  • http://www.emenoh.com James Hatfield

    Large-scale manufacturers will always have an advantage with commodity products. What’s not clear about the future is what “large scale” means. Currently it means centralized manufacturing, but in the future we may see more advanced distributed manufacturing.

    Yes, we already distribute our manufacturing: parts are made in one location and shipped off to another for assembly, then drop-shipped or warehoused, then shipped off to local merchants. With nanotech capabilities and sufficient AI, or even without it, we should be able to make this process even more distributed.

    Rather than shipping parts to assembly houses we would be shipping raw nano-materials (the new ‘parts’) to local ‘distribution’ houses which would only need enough space for a few large production units. Due to the physical nature of their products these local distributors would still specialize in particular product lines – scaffolds for nano-materials still take up space. They may even do so because they are official distributors for a particular brand of products and would carry out the ‘manufacturing’, distribution and marketing of the products to the local populace – tailoring the specifications to that market as only a regional producer can.

    I can see a future where intellectual property does play a large role in this… big brands create blueprints (patented/copyrighted of course) for various products and then license out the rights to modify/customize that blueprint for said regional markets. Big brands may even provide the startup capital for their franchises in the same way they do now for their storefronts.

    A typical shopping experience would be like going to a high-end furniture store or auto dealership. You shop the floor models and pick out your base model, then select features (color/texture/material, optional sizes, style, etc.) and place your order. A large item would be delivered to your home; smaller items would be made while you wait. Obviously there would still be pre-fab’d units available for those in a hurry or who don’t care for customization – these would be cheaper and would be what the production units churn out when not producing a custom product. Overruns might go off to someplace like Costco where you can buy in bulk and where there is warehouse space available.

    So buying all kinds of products would be a much more personal experience, and variations on products which currently can’t be made, due to the need for consistent moulds and dies, would be possible.

  • frelkins

    @Vassar

    fairly easy (e.g. a large but routine engineering project) with 10^22 FLOPS

    Just to ground this discussion a bit, I’m sure everyone knows that currently only IBM’s Roadrunner has been clocked for the full petaflop (10^15), altho’ its Blue Gene should “soon” be upgraded/upgradeable to run at 3 petaflops. (I might also watch Yoyotech, your friendly neighborhood supercomputer geeks, for this in the relatively near future, say less than 5 years.)

    The full yottaflop (10^24) – despite what seems like an impossible situation (larger than an office building! it would take more power than the entire NYC FiDi! or whatever) – may however be practically conceivable, if those folks at Evolved Machines are really onto something.
