
Surely what I am about to write is obvious, and probably old. During World War II, when physicists began to realize the destructive potential of nuclear weapons, Albert Einstein was chosen by his peers to approach President Roosevelt. Einstein was perhaps not the best informed of the group, but he was the best known, and was thought to be able to get Roosevelt's ear, as he did. In response, Roosevelt was able to convene the greatest Western minds in physics, mathematics, and engineering to work together on a rapid solution to the problem. Clearly, the importance of developing recursively self-improving super-human intelligence must be, almost by definition, greater than that of every other current problem, since it is the one project that would allow for the speedy solution of all the others. Is there no famous person or persons in the field, able to organize their peers, and with access to the government, such that an effort similar to the Manhattan Project could be mounted? The AI Institute has one research fellow and is looking for one more. It has a couple of fund-raisers, but most of the world is unaware of AI altogether. This won't get it done in a reasonable time-frame. Your competitors may well be backed by their governments.

While the eventual use of the Manhattan Project's discoveries is about as far from Friendly AI as imaginable, the power of super-human recursive AI is such that, no matter by whom or where it is developed, it will end up in the hands of a government, much like the most powerful Cray computers did. You might as well have the government's money and manpower right from the start, along with the ability to influence its proper use.

Can/will this be done?


As I mentioned, one point of disanalogy between the farming/industrial developments and AI is that farming didn't put any humans out of work, and the humans put out of work by industry had other places in the economy to go. AI, by contrast, effectively takes most of the economy out of human hands, perhaps leaving a few vacancies in the service industries.

Another disanalogy between the farming/industrial developments and AI is that it is hard to keep farming and industrial developments secret - they are typically too easy to reverse engineer. With AI, if you keep the code on your server, it is extremely difficult for anyone to reverse engineer it. It can even be deployed fairly securely in robots, if tamper-proof hardware is employed.

Both of these differences suggest that AI may be more effective at creating inequalities than either farming or industry was.

However, ultimately, whether groups of humans benefit differentially from AI or not probably makes little odds.

The bigger picture is that it represents the blossoming of the new replicators into physical minds and bodies - so there is a whole new population of non-human entities to consider, with computers for minds and databases for genomes.


Singularities don't seem that hard to model, in principle. Have people tried modeling how quickly an agent takes over a game (if it does at all) when it runs the same analytical algorithms as its opponents but at a quicker processing speed? Have they looked at how interdependency affects that? Robin, you have some interesting hypotheses that seem open to being tested in a variety of ways.
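As a rough illustration of the kind of experiment this suggests, here is a minimal sketch - entirely my own toy construction, with the grid game, the greedy claim-a-cell policy, and all parameters chosen purely for illustration: two agents run the identical policy, but one executes several actions per tick, and we measure how long it takes the faster agent to control a majority of the board.

```python
# Toy model (an illustrative assumption, not an established result): two
# agents run the SAME policy - claim a random unclaimed cell - but the
# "fast" agent performs speed_ratio actions per tick, the "slow" agent one.
import random

def run(size=50, speed_ratio=4, seed=0):
    rng = random.Random(seed)
    unclaimed = set(range(size * size))
    owned = {"fast": set(), "slow": set()}

    def grab(agent, n_moves):
        # Identical algorithm for both agents; only the move budget differs.
        for _ in range(n_moves):
            if not unclaimed:
                return
            cell = rng.choice(tuple(unclaimed))
            unclaimed.discard(cell)
            owned[agent].add(cell)

    majority = (size * size) // 2
    ticks = 0
    # Stop as soon as the fast agent holds a majority, or the board is full.
    while unclaimed and len(owned["fast"]) <= majority:
        ticks += 1
        grab("fast", speed_ratio)
        grab("slow", 1)
    return ticks, len(owned["fast"]) / (size * size)

for ratio in (1, 2, 4, 8):
    ticks, share = run(speed_ratio=ratio)
    print(f"speed ratio {ratio}: stopped after {ticks} ticks, "
          f"fast agent's share = {share:.2f}")
```

Interdependency could be probed in the same framework - e.g. by only letting an agent claim cells adjacent to ones it already owns, or by making a cell's value depend on what its neighbours hold.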


I did not say "throw this out of your dataset, it is not relevant".

We do have more data to go on than just the recent economic successes of our ancestors.


Tim, you say "the relevant important developments are really the previous genetic takeovers", a phrase I now understand better. But the right way to do multivariate analysis is not to first choose the "right" data. Instead one collects as much relevant data as possible and then sees what statistical inference says about which data can in fact be ignored without changing the results much. Saying "throw this out of your dataset, it is not relevant" is less useful than saying "you've missed this relevant data; your conclusions will change when you include it."


"Genetic takeover" is a concept from Genetic Takeover and the Mineral Origins of Life, A. G. Cairns-Smith, Cambridge University Press, 1982.

Here is a page by me on the topic:

http://originoflife.net/takeover/

The suggestion that we are witnessing a modern genetic takeover is not originally mine:

"machines could carry on our cultural evolution, including their own increasingly rapid self-improvement, without us, and without the genes that built us. It will be then that our DNA will be out of a job, having passed the torch, and lost the race, to a new kind of competition. The genetic information carrier, in the new scheme of things, will be exclusively knowledge, passed from mind to artificial mind."

Human Culture - A Genetic Takeover Underway - Moravec, 1987

"Millions of years later, another change is under way in how information passes from generation to generation. Humans evolved from organisms defined almost totally by their organic genes. We now rely additionally on a vast and rapidly growing corpus of cultural information generated and stored outside our genes - in our nervous systems, libraries, and, most recently, computers.

Our culture still depends utterly on biological human beings, but with each passing year our machines, a major product of the culture, assume a greater role in its maintenance and continued growth. Sooner or later our machines will become knowledgeable enough to handle their own maintenance, reproduction and self-improvement without help. When this happens the new genetic takeover will be complete. [...]"

- Moravec, 1988

"Cultural evolution is many orders of magnitude faster than DNA-based evolution, which sets one even more to thinking of the idea of 'takeover'. And if a new kind of replicator takeover is beginning, it is conceivable that it will take off so far as to leave its parent DNA (and its grandparent clay if Cairns-Smith is right) far behind. If so, we may be sure that computers will be in the van."

- Dawkins, 1982.


Perhaps you guys have addressed this elsewhere, but given that most evolution (technological, social, political...) seems to follow a punctuated-equilibria model of sudden jumps (cf. Thomas Kuhn, Joel Mokyr, McLuhan...), how do you separate the wheat from the chaff? The eternal behaviorist dilemma applies here.


Tim, virtually every innovation is an "idea." You seem to be saying the relevant category to use for an outside view is "genetic takeovers", but since you are using "genetic" metaphorically I find this category hard to understand. Please try to be more precise so we can evaluate your suggestion. It is true that per-capita wealth inequality across the world is at an all-time high, but this is mainly because the wealth peaks are at an all-time high, while the valleys remain at their lowest feasible level.


Re: "until there is a substantial space or deep Earth economy/ecology all transitions will spread "horizontally.""

The idea of horizontal transmission here was to illustrate that farming and industry were heritable *ideas*, and may well have practically wiped out the other *ideas* that they competed with.

AI is also an idea, and one that is capable of spreading rapidly - but unlike farming and industry, it is a replacement technology for an important DNA-based adaptation: brains. Rather than competing only with other ideas, it will compete more effectively with humans themselves - in conjunction with various associated developments in sensors and actuators, of course.

Re: "what the outside view doesn't favor". I see what you are saying - I just think it's nonsense. The idea of looking at previous important developments, and trying to use them to see into the future is a good one, but the relevant important developments are really the previous genetic takeovers. Agricultureand industry transitions throw only very limited light on AI. It's like tryingto predict the properties of neutron stars by looking at gold and lead.

The technology advances we have seen so far tend to increase inequalities - by allowing wealth and power to be concentrated. Inequalities are greater now than ever before - with celebrities earning billions of dollars while much of the world is on the bread line. Further technological progress seems extremely likely to widen this gap.


Tim, until there is a substantial space or deep Earth economy/ecology all transitions will spread "horizontally." Being "wiped out" is the sort of transition inequality I'm saying the outside view doesn't favor.

Steven, you are repeating the standard argument inside viewers give against outside views, that it neglects crucial info.


By comparing the origin of multicellularity, the origin of human brains, the origin of farming, and the origin of industry, we conclude that hypothetical first movers in these transitions gained progressively less from them?

It all seems pretty vague to me. Industry and farming spread horizontally, so you wouldn't /expect/ the DNA of their owners to benefit in the first place - rather, the associated ideas are what spread, at the expense of other ideas about how to live.

Anyway, the conclusion seems to be that the inventors of AI will enjoy few special benefits, and will not turn into future versions of Bill Gates. That seems fair enough: if they get incredibly rich and powerful, it probably won't last for long. They will soon enough get wiped out by vastly superior technology... along with all the rest of the ancient, crappy, unmodified humans.


"Grant, the question here is exactly what odds we should give to an AI transition allowing a small part to take over the world or destroy the human race. I'm saying an outside view gives low odds; you are apparently estimating high odds based on an inside view."

I wasn't giving any odds at all; I was just pointing out that a large number of people would fear AGI more than, say, irrigation or coal mines. It seems to me that fear would manifest itself somehow, and alter the way in which the singularity unfolds.

Suppose it takes one generation for a skeptical nation to overcome an irrational fear of AGI. One generation is nothing to agriculture, little to industry, and annoying to IT (creating a significant gap between older computer illiterates and the younger generation). What would it mean in the time-frame of AGI? It seems to me that unlike previous revolutions, AGI could itself advance faster than many societies could politically and culturally adopt it, meaning a few would be given huge advantages over many.


I don't accept the dichotomy of blind trend-extrapolation (outside view) vs making up detailed stories about how it might happen (inside view). Theoretical non-story arguments like the various human/computer differences seem to me to make a hard takeoff plausible, and trend-extrapolation seems to me to give only evidence that's 1) weak and 2) causally distant (uninformative conditional on more specific knowledge).

Robin's points 2 and 3 don't apply if a basement AI doesn't need to share information or depend on other thinkers.


Abraham Lincoln had a view on what would trigger the next singularity:

http://brokensymmetry.typep...


Robin: "An outside view thus suggests only a moderate amount of inequality in the next singularity - nothing like a basement AI taking over the world."

Steven: "For every reason you can give me why transhuman AI is special among historical events, I can give you a reason why a transhuman-AI-caused growth mode transition is special among growth mode transitions. By throwing away enough information you can half-prove anything."

There are a lot of superintelligence scenarios that would fit in with Robin's prediction: gradual improvement in cognitive enhancement technologies, and whole-brain emulation with copyable uploads, being the two most obvious. Very slow-takeoff, recursively self-improving AGI might also fit in with this - if an AGI gets smarter quite slowly, e.g. on a timescale of ~1 year to go from "average human" to "most intelligent human on the planet" intelligence level.

With no extra information to go on, one would have to conclude that Robin is probably right, with probability 3/4 (since three of the four proposed superintelligence pathways seem to follow the standard pattern).

However, I don't think that the last scenario - fast, recursively self-improving AGI - fits with any of the previous events or patterns. Two changes happening at the same time - a change of substrate (biology to silicon), and a shift from fixed to recursively improving intelligence - seem to me a more profound change than any of the other singularities.


Steve, the "outside view" does not specifically predict transhuman AI as the driver for the (next) singularity. It merely predicts that it must be something that can cause the economy to double in size every couple of weeks or so. And this would imply that it must be a "meta innovation", something that speeds up virtually everything about human society. Transhuman AI, or at least some form of super-intelligence, is merely the most plausible (or perhaps only) candidate on the drawing boards that might accomplish this.
