23 Comments

@Vassar

"fairly easy (e.g. a large but routine engineering project) with 10^22 FLOPS"

Just to ground this discussion a bit: I'm sure everyone knows that currently only IBM's Roadrunner has been clocked at a full petaflop (10^15 FLOPS), although its Blue Gene line should "soon" be upgradeable to run at 3 petaflops. (I might also watch Yoyotech, your friendly neighborhood supercomputer geeks, for this in the relatively near future, say less than 5 years.)

The full yottaflop (10^24 FLOPS), despite what seems like an impossible proposition (a machine larger than an office building, drawing more power than the entire NYC Financial District, or whatever), may nevertheless be practically conceivable, if those folks at Evolved Machines are really onto something.
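To make the gap concrete, here's a trivial back-of-the-envelope sketch in Python. The 1e15 figure is the petaflop quoted above; the 1e22 threshold is Vassar's "routine engineering project" estimate; the rest is arithmetic.

```python
# Back-of-the-envelope scale check (assumption: a Roadrunner-class
# machine sustains roughly 1 petaflop, i.e. 1e15 FLOPS).
roadrunner = 1e15        # FLOPS
routine_agi = 1e22       # the "routine engineering project" threshold
yottaflop = 1e24

print(f"Roadrunners needed for 1e22 FLOPS: {routine_agi / roadrunner:.0e}")
print(f"Roadrunners needed for a yottaflop: {yottaflop / roadrunner:.0e}")
```

So even the "fairly easy" threshold is ten million 2008-era petaflop machines, and the yottaflop is a billion of them.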

Large-scale manufacturers will always have an advantage with commodity products. What's not clear about the future is what "large scale" means. Currently it means centralized manufacturing, but in the future we may see more advanced distributed manufacturing.

Yes, we already distribute our manufacturing: parts are made in one location and shipped off to another for assembly, then drop-shipped, or warehoused and then shipped off to local merchants. With nanotech capabilities and sufficient AI - or even without it - we should be able to make this process even more distributed.

Rather than shipping parts to assembly houses we would be shipping raw nano-materials (the new 'parts') to local 'distribution' houses which would only need enough space for a few large production units. Due to the physical nature of their products these local distributors would still specialize in particular product lines - scaffolds for nano-materials still take up space. They may even do so because they are official distributors for a particular brand of products and would carry out the 'manufacturing', distribution and marketing of the products to the local populace - tailoring the specifications to that market as only a regional producer can.

I can see a future where intellectual property does play a large role in this... big brands create blueprints (patented/copyrighted of course) for various products and then license out the rights to modify/customize that blueprint for said regional markets. Big brands may even provide the startup capital for their franchises in the same way they do now for their storefronts.

A typical shopping experience would be like going to a high-end furniture store or auto dealership. You shop the floor models and pick out your base model, then select features (color/texture/material, optional sizes, style, etc.) and place your order. A large item would be delivered to your home; smaller items would be made while you wait. Obviously there would still be pre-fab'd units available for those in a hurry or who don't care for customization; these would be cheaper and would be what the production units churn out when not producing a custom product. Overruns might go off to someplace like Costco, where you can buy in bulk and where there is warehouse space available.

So buying all kinds of products would be a much more personal experience, and variations on products which currently can't be made, due to the need for consistent moulds and dies, would become possible.

"Eliezer, it might take more than a few mail order proteins to take over the world."

Viruses are little more than proteins (wrapped around a strand of nucleic acid).

To clarify the above: at a best guess, human-equivalent AGI is probably impossible with 10^4 FLOPS, humanly impossible with 10^10 FLOPS, fairly easy (e.g. a large but routine engineering project) with 10^22 FLOPS, and trivial with 10^28 FLOPS. When I say supersaturated I mean relative to the likely efficiency and power of AGI that might come out of a revolutionary new understanding of cognition, for which I consider Moravec's and Kurzweil's estimates to be highly reasonable.
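For a sense of timescale, here's a quick sketch of how long Moore's-law-style doubling would take to climb from a 2008-era petaflop machine to the upper two thresholds. The 18-month doubling period is an illustrative assumption, not a claim from the comment.

```python
import math

# Years of Moore's-law doubling from a ~1e15 FLOPS petaflop machine
# to each threshold (assumption: compute doubles every 18 months).
start_flops = 1e15
doubling_years = 1.5

for target in (1e22, 1e28):
    doublings = math.log2(target / start_flops)
    years = doublings * doubling_years
    print(f"to 1e{int(round(math.log10(target)))} FLOPS: ~{years:.0f} years")
```

Under that assumption, "fairly easy" is roughly 35 years out and "trivial" roughly 65, which is one way to read what "supersaturated" would mean in calendar terms.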

william newman: Nice analysis. I'm glad to have noticed your posts and would be happy to follow up if you email michael aruna at yahoo dot you guess. Anyway, I roughly agree with your sense that we are about 1-2 decades from supersaturation, but based on what I see from AI research I definitely don't expect to see AGI on that time frame. I'd be interested in your reasoning WRT expectations, because nothing I'm seeing today seems all that much more impressive, on the software front, than what we had 20 years ago, and Laplace's rule of succession tells me that this implies I should give at least a 2/3 chance to nothing staggering happening on the software front in the next 20 years.

Isn't the question "how to structure our plants efficiently" simply irrelevant given full scale nanotechnology? It's like "where to put my only mp3 file so that the most people can listen to it". Isn't full scale about eliminating scarcity? And a fridge-size fab is probably too big... even evolution could build much smaller self-replicating machines.

Non-full scale "nano" (which is not fully self-replicating) is not a new thing, see the big silicon fabs. (The self-replicating parts of today's computers are called "software".)

And I wouldn't worry about watching over big fabs just in case a superhuman AI wants to do something nasty... I'm sure it would be more creative than that.

"because I now think that evolution wasn't all that clever with our brain design, and that the 100Hz serial speed limit on our neurons has to be having all sorts of atrocious effects on algorithmic efficiency."

I think you might be right. Also, brains were not invented to think, in the first place.

To me, it all really seems to be a problem of software. We are searching for a "god's algorithm". My gut feeling is that it will be something relatively simple; I bet when somebody finally finds it, we will all wonder why it didn't happen much sooner.

"James, I'm not following you. Larger centralized shipyards should contain more potential whistleblowers."

Let's say you need 1/3 of the industrial capacity to build a secret ship. If there are three big yards you will need just one of them, so you only have to get the top managers of that one plant to keep the secret. But if there are 3,000 shipyards (of equal size), then you would need the managers of 1,000 shipyards to keep the secret, something that will be very difficult.

Also, there will be some variance in the cultures of shipyards in terms of how good they are at keeping secrets. So the more shipyards that are in on the secret the more likely it is that you will have one that doesn't succeed in keeping the secret.

Let's say a government designed weapon could either be built by one company or 1,000 independent scientists who collaborate. Wouldn't it be easier to keep the weapon a secret if it were built by just the one company?

If you just need one shipyard regardless of its size to build the ship then you would be right.
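The argument above can be sketched as a toy model. The assumptions (equal-sized yards, 1/3 of total capacity needed, each yard in on the secret keeping it independently with probability p) are from the comment or purely illustrative.

```python
# Toy model: probability the ship stays secret as a function of how
# many yards must be in on it (assumptions: equal-sized yards, you
# need 1/3 of total capacity, each yard independently keeps the
# secret with probability p).
def p_secret_kept(total_yards: int, p: float = 0.99) -> float:
    yards_in_on_it = total_yards // 3   # 1/3 of industry capacity
    return p ** yards_in_on_it

print(p_secret_kept(3))      # one yard in on it
print(p_secret_kept(3000))   # a thousand yards in on it
```

With three big yards the secret survives with probability 0.99; with 3,000 small ones it is virtually certain to leak.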

Ergh, just realized that I didn't do a post discussing the bogosity of "human-equivalent computing power" calculations. Well, here's a start in a quick comment - Moravec, in 1988, used Moore's Law to calculate how much power we'd have in 2008. He more or less nailed it. He spent a lot of pages justifying the idea that Moore's Law could continue, but from our perspective that seems more or less prosaic.

Moravec spent fewer pages than he did on Moore's Law justifying his calculation that the supercomputers we would have in 2008 would be "human-equivalent brainpower".

Did Moravec nail that as well? Given the sad state of AI theory, we actually have no evidence against it. But personally, I suspect that he overshot; I suspect that one could build a mind of formidability roughly comparable to human on a modern-day desktop computer, or maybe even a desktop computer from 1996; because I now think that evolution wasn't all that clever with our brain design, and that the 100Hz serial speed limit on our neurons has to be having all sorts of atrocious effects on algorithmic efficiency. If it was a superintelligence doing the design, you could probably have roughly-human-formidability on something substantially smaller.

Just a very rough eyeball estimate, no real numbers behind it.
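For what it's worth, the raw serial-speed gap behind that eyeball estimate is easy to compute. The figures here (~100 Hz firing rate, ~3 GHz desktop clock, ~1e11 neurons) are standard rough numbers, not from the comment itself.

```python
# Rough numbers behind the "100 Hz serial speed limit" point
# (assumptions: ~100 Hz neuron firing rate, ~3 GHz desktop clock,
#  ~1e11 neurons in a human brain).
neuron_hz = 100
cpu_hz = 3e9
neurons = 1e11

serial_gap = cpu_hz / neuron_hz   # CPU steps per neuron "tick"
print(f"serial-speed gap: {serial_gap:.0e}x")
print(f"parallelism the brain uses to compensate: {neurons:.0e} neurons")
```

A single core runs tens of millions of sequential steps in the time one neuron fires once; the brain makes up for that only with massive parallelism, which is the sense in which serial depth could be traded against hardware size.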

We have a fairly good idea about roughly what molecular manufacturing will look like initially - in terms of the spatial distribution of its outputs - since we have the similar examples of printing and rapid prototyping to work from.

Printers currently sit on our desks - but if we want a book or a magazine, we usually go to the shops.

Robin: Once you grant the premise of a computer so brilliant at protein design that it can crank out hundreds or thousands of protein designs that it wants synthesized, then I think you should grant that it might only need to mail order one protein. E.g., design and mail-order a gene which you splice into a bacteriophage, after which the bacteriophage synthesizes a plasmid base by base as you direct by toggling the color of ambient light and/or ambient temperature. Once you have such a phage, you should only need a few LEDs, heater filaments, and 1988-tech-level microcontrollers in order to cobble together more synthetic capability (with feedstock = simple nutrient broth) than you could possibly need for bootstrapping.

However, I don't think the premise of such an affordable technologically-brilliant AI leapfrogging the rest of the world is very likely. And I think before it becomes likely, we should have some warning. (Chiefly, growing wonder at how cheap it has become to buy compute servers much more powerful than the human brain, and growing frustration at still being completely creatively blocked on getting them to think.)

Given inspired AI design (probably helped by the AI itself), how much computer hardware oomph would be required to leapfrog current technology so thoroughly that a homebrew basement fab could go into double-every-two-weeks mode while the rest of the world slogs on doubling every twenty years? My guess (with cosmology-style error bars on the order of magnitude) is as much computation as 1000 Einstein or Edison brains running for a year. Right now that's a seriously expensive amount of hardware. For some years it will remain a pretty expensive amount of hardware. Moore's Law does seem to be on track to make it affordable, so if we lag enough in figuring out how to use the affordable hardware, then in a decade we might be edging into a supersaturated situation where someone's basement tinkering could go FOOM, and maybe in two decades the possibility will be constantly weighing on my mind. But my guess is that before we reach that kind of supersaturation, we'll have so much AI in so many places that a one-basement event won't have a chance to stand out, because the world will already be recognizably into singularity, with weirdness doubling on a timescale of weeks. (Or will have dodged the singularity by WWIII or something.)
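The size of the gap between those two doubling rates is worth spelling out, compounded over a single year (assumption: clean exponential growth, which is all the comparison needs):

```python
# "Doubling every two weeks" vs "doubling every twenty years",
# compounded over one year (assumption: clean exponential growth).
weeks_per_year = 52

foom_growth = 2 ** (weeks_per_year / 2)   # 26 doublings in a year
slog_growth = 2 ** (1 / 20)               # 1/20 of a doubling in a year

print(f"basement fab: {foom_growth:.1e}x per year")
print(f"rest of the world: {slog_growth:.3f}x per year")
```

The basement fab grows roughly sixty-million-fold in a year while the rest of the world grows about 3.5 percent, which is why such a regime would leapfrog everything else almost immediately.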

Note that one Buffett*5 brain year is probably worth many billions of dollars. Also, having a Bonaparte*5 brain on staff is probably a big force multiplier for military preparations which cost many, many billions of dollars. Thus, well before an under-the-radar organization can afford the computer power to revolutionize the world in their basement, huge organizations are likely to find it cost-effective to spend billions of dollars per genius-brain-year-equivalent for this class of computer.

It could also be cost-effective to pay that rate for scientific/engineering AI supergenius --- I can easily believe that an Edison*5 brain year could be worth many billions of dollars if targeted at the right low-hanging fruit. But I doubt the low-hanging fruit will end up turning industrial production tradeoffs inside out so that all of the next wave of technology emerges from a single basement somewhere.

I think if someone wants to worry about One Basement To Rule Them All, a more plausible risk is that the low-hanging fruit happen to be technologies that turn security tradeoffs upside down, so that the world security situation is suddenly very fragile. What if the lowest-hanging fruit is a novel way of repurposing artificial cochlea technology, rewiring people's pleasure centers (or whatever) such that a fifteen-minute outpatient procedure, less difficult than LASIK, completely rewrites anyone's loyalties? Or an insanely effective biological weapon --- uncountable varieties of anthrax on steroids, each with its corresponding cheap effective antiserum for use by your own troops and loyalists, and with various weaponization practicalities (dispersal as ultrafine aerosol, e.g.) solved by the organism itself? It is a bad thing if the obvious way to use a new development is to jump into total war against the rest of mankind, with a decent chance of winning. But that's not particularly a problem of AI, as far as I can see. We'd face very much the same problem if such destabilizing weaponizable technologies (and not their countermeasures) turned out to be the low-hanging fruit for ordinary no-AI-assistance technological advance.

Eliezer, I guess I'm just highlighting the extreme degree of intelligence postulated: that this week-old box, which has made no visible outside mark beyond mail-ordering a few proteins, knows enough to use those proteins to build a physically small manufacturing industry that is more powerful than the entire rest of the world.

James, I'm not following you. Larger centralized shipyards should contain more potential whistleblowers.

Robin, why does it realistically take more than a few mail order proteins to take over the world? Ribosomes are reasonably general molecular factories and quite capable of self-replication to boot.

Robin,

You are right if (a) the monitor is the government and (b) you need only one yard regardless of the yard's size. But what if (1) the primary chance of disclosure comes from the press or from an inside whistleblower and (2) you need a certain percentage of industry capacity? In this case detection is more likely with diffuse production.

Also, if it is the military trying to secretly build ships (and there were no monitors that had the legal right to inspect), then it would be easier if there were a small number of big yards.

The chance of a secret being kept decreases exponentially as the number of people in on the secret increases.
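That exponential decay is easy to make concrete: if each insider independently keeps the secret with probability p, the secret survives with probability p^n. The p values below are purely illustrative, not empirical figures.

```python
import math

# P(secret kept) = p ** n decays exponentially in the number of
# insiders n. How many insiders before a leak becomes more likely
# than not? (p values are purely illustrative)
for p in (0.999, 0.99, 0.9):
    n_half = math.log(0.5) / math.log(p)
    print(f"p = {p}: secret is <50% safe beyond ~{n_half:.0f} people")
```

Even with extremely discreet insiders (p = 0.999), the secret is more likely lost than kept once roughly 700 people are in on it; with merely careful ones (p = 0.9), seven people suffice.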

Eliezer, it might take more than a few mail order proteins to take over the world.

James, if there are only three shipyards capable of assembling state-of-the-art ships, then one need only monitor those three yards to detect new ships. If there are thousands of top shipyards, however, monitoring will be lots harder.

Remember what you wrote about administrative overheads? Many people who try to actually do things are often frustrated by the lousy quality and availability of the tools to do it with. I don't expect small nanofactories to become common for a very long time, but larger "home shop" type machines, possibly using some specialized nano components, are quite likely for custom work and prototyping once they become possible.

In fact, I agree with an article I read about 8 or 10 years ago: nanofactories will be just as specialized in the beginning as ordinary factories are. Fully general assemblers are quite a bit longer-term.
