Category Archives: AI

Recursive Self-Improvement

Followup to: Life’s Story Continues, Surprised by Brains, Cascades, Cycles, Insight, Recursion, Magic, Engelbart: Insufficiently Recursive, Total Nano Domination

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability – "AI go FOOM".  Just to be clear on the claim, "fast" means on a timescale of weeks or hours rather than years or decades; and "FOOM" means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology (that it gets by e.g. ordering custom proteins over the Internet with 72-hour turnaround time).  Not, "ooh, it’s a little Einstein but it doesn’t have any robot hands, how cute".

Most people who object to this scenario, object to the "fast" part. Robin Hanson objected to the "local" part.  I’ll try to handle both, though not all in one shot today.

We are setting forth to analyze the developmental velocity of an Artificial Intelligence.  We’ll break down this velocity into optimization slope, optimization resources, and optimization efficiency.  We’ll need to understand cascades, cycles, insight and recursion; and we’ll stratify our recursive levels into the metacognitive, cognitive, metaknowledge, knowledge, and object level.

Quick review:

Continue reading "Recursive Self-Improvement" »


I Heart CYC

Eliezer Tuesday:

EURISKO may still be the most sophisticated self-improving AI ever built – in the 1980s, by Douglas Lenat before he started wasting his life on Cyc.  … EURISKO lacked what I called "insight" – that is, the type of abstract knowledge that lets humans fly through the search space. 

I commented:

You ignore that Lenat has his own theory which he gives as the reason he’s been pursuing CYC. You should at least explain why you think his theory wrong; I find his theory quite plausible.

Eliezer replied only:

Artificial Addition, The Nature of Logic, Truly Part of You, Words as Mental Paintbrush Handles, Detached Lever Fallacy

The main relevant points from these Eliezer posts seem to be: that AI researchers wasted time on messy ad hoc non-monotonic logics, while elegant, mathy Bayes-net approaches work much better; that it is much better to know how to generate specific knowledge from general principles than to just be told lots of specific knowledge; and that our minds have lots of hidden machinery behind the words we use, so words as "detached levers" won’t work.  But I doubt Lenat or the CYC folks disagree with any of these points.
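
For the second of those points, a toy contrast may help; this is not CycL or any real Cyc interface, just a minimal Python sketch (all names and facts invented) of being handed specific knowledge versus deriving it from general principles:

```python
# Toy illustration (not CycL or any real Cyc interface) of "being told specifics"
# versus "generating specifics from general principles". All facts are invented.

# Approach 1: hand-coded specific knowledge -- every fact must be entered explicitly.
specific_facts = {
    ("Fido", "can_breathe"): True,
    ("Rex", "can_breathe"): True,
    # ...one entry per animal per property, forever.
}

# Approach 2: general principles plus a little machinery that derives the specifics.
is_a = {"Fido": "dog", "Rex": "dog", "dog": "mammal", "mammal": "animal"}
general_rules = {"animal": {"can_breathe": True}}

def derive(entity: str, prop: str):
    """Walk up the is_a hierarchy until some ancestor's rule answers the question."""
    current = entity
    while current is not None:
        if prop in general_rules.get(current, {}):
            return general_rules[current][prop]
        current = is_a.get(current)
    return None  # not derivable from what we know

print(derive("Fido", "can_breathe"))  # True, derived rather than separately hand-coded
```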

The lesson Lenat took from EURISKO is that architecture is overrated; AIs learn slowly now mainly because they know so little.  So we need to explicitly code knowledge by hand until we have enough to build systems effective at asking questions, reading, and learning for themselves.  Prior AI researchers were too comfortable starting every project over from scratch; they needed to join forces to create larger integrated knowledge bases.  This still seems to me a reasonable view, and anyone who thinks Lenat created the best AI system ever should consider seriously the lesson he thinks he learned.

Continue reading "I Heart CYC" »


Stuck In Throat

Let me try again to summarize Eliezer’s position, as I understand it, and what about it seems hard to swallow.  I take Eliezer as saying: 

Sometime in the next few decades a human-level AI will probably be made by having a stupid AI make itself smarter.  Such a process starts very slow and quiet, but eventually "fooms" very fast and then loud. It is likely to go from much stupider to much smarter than humans in less than a week.  While stupid, it can be rather invisible to the world.  Once smart, it can suddenly and without warning take over the world. 

The reason an AI can foom so much faster than its society is that an AI can change its basic mental architecture, and humans can’t.  How long any one AI takes to do this depends crucially on its initial architecture.  Current architectures are so bad that an AI starting with them would take an eternity to foom.  Success will come from hard math-like (and Bayes-net-like) thinking that produces deep insights giving much better architectures.

A much smarter than human AI is basically impossible to contain or control; if it wants to it will take over the world, and then it will achieve whatever ends it has.  One should have little confidence that one knows what those ends are from its behavior as a much less than human AI (e.g., as part of some evolutionary competition).  Unless you have carefully proven that it wants what you think it wants, you have no idea what it wants. 

In such a situation, if one cannot prevent AI attempts by all others, then the only reasonable strategy is to try to be the first with a "friendly" AI, i.e., one where you really do know what it wants, and where what it wants is something carefully chosen to be as reasonable as possible. 

I don’t disagree with this last paragraph.  But I do have trouble swallowing the prior ones.  The hardest to believe, I think, is that the AI will get smart so very rapidly, with a growth rate (e.g., doubling in an hour) far out of proportion to prior growth rates, to what prior trends would suggest, and to what most other AI researchers I’ve talked to think.  The key issues come from this timescale being so much shorter than team lead times and reaction times.  This is the key point on which I await Eliezer’s more detailed arguments. 
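
To see how far "doubling in an hour" sits from ordinary growth rates, the compounding arithmetic is worth writing out; a minimal sketch (the one-hour figure is the hypothetical above, and the decades-scale benchmark is an assumed stand-in for historical growth):

```python
# Compounding at a one-hour doubling time versus a doubling time measured in decades.
# Purely illustrative arithmetic; the benchmark doubling time is an assumption.

def growth_factor(doubling_time_hours: float, elapsed_hours: float) -> float:
    """Multiplicative growth after elapsed_hours, given a fixed doubling time."""
    return 2 ** (elapsed_hours / doubling_time_hours)

hour, day, week = 1.0, 24.0, 7 * 24.0
decade_scale = 15 * 365 * 24.0  # ~15-year doubling time, an assumed benchmark

print(growth_factor(hour, day))          # ~1.7e7: ~17 million-fold in a single day
print(growth_factor(hour, week))         # ~3.7e50: astronomically large within a week
print(growth_factor(decade_scale, week)) # ~1.0009: under 0.1% growth over the same week
```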

Since I do accept that architectures can influence growth rates, I must also have trouble believing humans could find new AI architectures anytime soon that make this much difference.  Some other doubts: 

  • Does a single "smarts" parameter really summarize most of the capability of diverse AIs?
  • Could an AI’s creators see what it wants by slowing down its growth as it approaches human level?
  • Might faster brain emulations find it easier to track and manage an AI foom?

Total Tech Wars

Eliezer Thursday:

Suppose … the first state to develop working researchers-on-a-chip, only has a one-day lead time. …  If there’s already full-scale nanotechnology around when this happens … in an hour … the ems may be able to upgrade themselves to a hundred thousand times human speed, … and in another hour, …  get the factor up to a million times human speed, and start working on intelligence enhancement. … One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in.  But you’d have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed.

Carl Shulman Saturday and Monday:

I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. … It’s also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world’s dictatorships, solve collective action problems … [For] biological humans [to] retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough … But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.

Every new technology brings social disruption. While new techs (broadly conceived) tend to increase the total pie, some folks gain more than others, and some even lose overall.  The tech’s inventors may gain intellectual property, it may fit better with some forms of capital than others, and those who first foresee its implications may profit from compatible investments.  So any new tech can be framed as a conflict, between opponents in a race or war.

Every conflict can be framed as a total war. If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury.  All resources must be devoted to growing more resources and to fighting them in every possible way.

Continue reading "Total Tech Wars" »


Billion Dollar Bots

Robin presented a scenario in which whole brain emulations, or what he calls "bots," come into being.  Here is another:

Bots are created with hardware and software.  The higher the quality of one input, the less you need of the other.  Hardware, especially with cloud computing, can be quickly reallocated from one task to another.  So the first bot might run on hardware worth billions of dollars.

The first bot creators would receive tremendous prestige and a guaranteed place in the history books.  So once it becomes possible to create a bot many firms and rich individuals will be willing to create one even if doing so would cause them to suffer a large loss.

Imagine that some group has $300 million to spend on hardware and will use the money as soon as $300 million becomes enough to create a bot.  The best way to spend this money would not be to buy a $300 million computer but to rent $300 million worth of off-peak computing power.  If the group needed only 1,000 hours of computing power (which it need not buy all at once) to prove that it had created a bot, then the group could command roughly $3 billion of hardware for the needed 1,000 hours.
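
The factor of ten here is just an amortization assumption; a minimal sketch of the rent-versus-buy arithmetic (the ~10,000-hour useful life is my illustrative assumption, chosen to match the post's factor of ten):

```python
# Rent-versus-buy arithmetic for the $300 million bot-hardware budget.
# The ~10,000-hour amortization period is an illustrative assumption chosen to
# reproduce the post's "roughly $3 billion for 1,000 hours" figure.

budget = 300e6              # dollars available to the group
hours_needed = 1_000        # compute-hours required to demonstrate a bot
useful_life_hours = 10_000  # assumed hours over which purchased hardware is amortized

# Buying: you get exactly what the budget buys.
hardware_if_bought = budget

# Renting: paying only for the hours used lets the same budget command hardware
# worth roughly (useful_life / hours_needed) times as much, ignoring the
# renter's margin and off-peak discounts.
hardware_if_rented = budget * useful_life_hours / hours_needed

print(f"${hardware_if_bought:,.0f} of hardware if purchased outright")
print(f"~${hardware_if_rented:,.0f} of hardware, for {hours_needed} hours, if rented")
# -> $300,000,000 purchased, versus roughly $3,000,000,000 rented for the needed hours
```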

It’s likely that the  first bot would run very slowly.  Perhaps it would take the bot 10 real seconds to think as much as a human does in one second.

Under my scenario the first bot would be wildly expensive.  But because of Moore’s law, once the first bot was created, everyone would expect that the cost of bots would eventually fall low enough for bots to radically remake society.
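
How soon "eventually" arrives depends on the assumed rate of cost decline; a minimal sketch, assuming an 18-month price-performance halving time and illustrative dollar figures (neither is a claim from the post):

```python
# How long until bot hardware falls from "wildly expensive" to commodity prices,
# assuming a steady Moore's-law-style halving of cost every 18 months.
# All figures are illustrative assumptions, not numbers from the post.
import math

def years_until_affordable(initial_cost: float, target_cost: float,
                           doubling_time_years: float = 1.5) -> float:
    """Years for cost to fall from initial_cost to target_cost at the assumed halving rate."""
    halvings_needed = math.log2(initial_cost / target_cost)
    return halvings_needed * doubling_time_years

# From a ~$3 billion first bot to a ~$30,000 (roughly car-priced) bot:
print(years_until_affordable(3e9, 3e4))  # ~25 years under these assumptions
```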

Consequently, years before bots actually come to dominate the economy, many people will come to expect that bots will dominate the economy within their lifetimes.  Bot expectations will radically change the world.

I suspect that after it becomes obvious that we could eventually create cheap bots, world governments will devote trillions to bot Manhattan projects.  The expected benefits of winning the bot race will be so high that it will be in the self-interest of individual governments not to worry too much about bot friendliness.

The U.S. and Chinese militaries might fall into a bot prisoners’ dilemma: both would prefer an outcome in which everyone slowed down bot development to ensure friendliness, yet each nation would be individually better off (regardless of what the other did) taking huge chances on friendliness so as to increase its probability of winning the bot race.
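
The payoff structure being described is the standard prisoners' dilemma; a minimal sketch with invented ordinal payoffs (only their ordering matters):

```python
# A prisoners'-dilemma reading of the bot race, with invented ordinal payoffs.
# Keys are (us_move, china_move); values are (us_payoff, china_payoff).
# Higher is better for that country; only the ordering of payoffs matters.
payoffs = {
    ("slow", "slow"): (3, 3),  # both slow down for friendliness
    ("slow", "fast"): (1, 4),  # unilateral restraint loses the race
    ("fast", "slow"): (4, 1),
    ("fast", "fast"): (2, 2),  # both gamble on friendliness: worse for both than (slow, slow)
}

def us_best_response(china_move: str) -> str:
    """The U.S. move that maximizes the U.S. payoff, holding China's move fixed."""
    return max(("slow", "fast"), key=lambda us_move: payoffs[(us_move, china_move)][0])

for china_move in ("slow", "fast"):
    print(f"If China goes {china_move}, the U.S. best response is {us_best_response(china_move)}")
# Racing ("fast") is each side's dominant strategy, so both race,
# even though (slow, slow) would leave both better off -- the dilemma described above.
```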

My hope is that the U.S. will have such a tremendous advantage over China that the Chinese don’t try to win the race and the U.S. military thinks it can afford to go slow.  But given China’s relatively high growth rate, I doubt humanity will luck into this safe scenario.


Emulations Go Foom

Let me consider the AI-foom issue by painting a (looong) picture of the AI scenario I understand best, whole brain emulations, which I’ll call “bots.”  Here goes.

When investors anticipate that a bot may be feasible soon, they will estimate their chances of creating bots of different levels of quality and cost, as a function of the date, funding, and strategy of their project.  A bot more expensive than any (speedup-adjusted) human wage is of little direct value, but exclusive rights to make a bot costing below most human wages would be worth many trillions of dollars.
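
One way to read "speedup-adjusted" is that a bot running at k times human speed does the work of k humans, so the break-even point scales with that factor; a minimal sketch with invented figures:

```python
# "Speedup-adjusted wage": a bot running at `speed` times human speed does the work
# of `speed` humans, so it is only directly worth running if its yearly cost is
# below roughly speed * (the human wage for that work). All figures are invented.

def bot_is_competitive(bot_cost_per_year: float, human_wage: float, speed: float) -> bool:
    """True if running the bot is cheaper than hiring the humans it replaces."""
    return bot_cost_per_year < speed * human_wage

# A slow, expensive first-generation bot: 0.1x human speed, $5M/year to run,
# competing with $100k/year human labor.
print(bot_is_competitive(5e6, 1e5, 0.1))   # False -- little direct value
# A later bot whose running cost has fallen by ~1000x:
print(bot_is_competitive(5e3, 1e5, 0.1))   # True -- now below the speedup-adjusted wage
```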

It may well be socially cost-effective to start a bot-building project with a 1% chance of success when its cost falls to the trillion-dollar level.  But not only would successful investors probably gain only a small fraction of this net social value; it is also unlikely that any investor group able to direct a trillion dollars could be convinced the project was feasible – there are just too many smart-looking idiots making crazy claims around.

But when the cost to try a 1% project fell below a billion dollars, dozens of groups would no doubt take a shot.  Even if they expected the first feasible bots to be very expensive, they might hope to bring that cost down quickly.  Even if copycats would likely profit more than they, such an enormous prize would still be very tempting.

The first priority for a bot project would be to create as much emulation fidelity as affordable, to achieve a functioning emulation, i.e., one you could talk to and so on.  Few investments today are allowed a decade of red ink, and so most bot projects would fail within a decade, their corpses warning others about what not to try.  Eventually, however, a project would succeed in making an emulation that is clearly sane and cooperative.

Continue reading "Emulations Go Foom" »


AI Go Foom

It seems to me that it is up to [Eliezer] to show us how his analysis, using his abstractions, convinces him that, more likely than it might otherwise seem, hand-coded AI will come soon and in the form of a single suddenly super-powerful AI.

As this didn’t prod a response, I guess it is up to me to summarize Eliezer’s argument as best I can, so I can then respond.  Here goes:

A machine intelligence can directly rewrite its entire source code and redesign its entire physical hardware.  While human brains can in principle modify themselves arbitrarily, in practice our limited understanding of ourselves means we mainly change ourselves only by thinking new thoughts.  All else equal, this means that machine brains have an advantage in improving themselves. 

A mind without arbitrary capacity limits, that focuses on improving itself, can probably do so indefinitely.  The growth rate of its "intelligence" may be slow when it is dumb, but gets faster as it gets smarter.  This growth rate also depends on how many parts of itself it can usefully change.  So all else equal, the growth rate of a machine intelligence must be greater than the growth rate of a human brain. 

No matter what its initial disadvantage, a system with a faster growth rate eventually wins.  So if the growth rate advantage is large enough then yes a single computer could well go in a few days from less than human intelligence to so smart it could take over the world.  QED.
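
The last step is just compound-growth arithmetic: whatever the initial gap, the faster grower overtakes the slower one, and the size of the head start only enters logarithmically. A minimal sketch with invented rates:

```python
# Two systems growing exponentially at different rates: the faster grower overtakes
# the slower one regardless of its initial disadvantage. All numbers are invented.
import math

def time_to_overtake(initial_gap: float, fast_rate: float, slow_rate: float) -> float:
    """Time until the fast grower catches up, given it starts `initial_gap` times
    lower and each system grows as exp(rate * t)."""
    return math.log(initial_gap) / (fast_rate - slow_rate)

# Suppose the machine starts a million times "dumber" but improves 100x faster
# (rates in arbitrary per-unit-time units).
print(time_to_overtake(initial_gap=1e6, fast_rate=1.0, slow_rate=0.01))
# ~14 time units -- the head start only delays the crossover logarithmically.
```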

So Eliezer, is this close enough to be worth my response?  If not, could you suggest something closer?


Setting The Stage

As Eliezer and I begin to explore our differing views on the singularity, perhaps I should summarize my current state of mind.

We seem to agree that:

  1. Machine intelligence would be a development of almost unprecedented impact and risk, well worth considering now.
  2. Feasible approaches include direct hand-coding, based on a few big and lots of little insights, and emulations of real human brains. 
  3. Machine intelligence will more likely than not appear within a century, even if the progress rate to date does not strongly suggest it will arrive in the next few decades. 
  4. Many people say silly things here, and we do better to ignore them than to try to believe the opposite. 
  5. Math and deep insights (especially probability) can be powerful relative to trend-fitting and crude analogies. 
  6. Long term historical trends are suggestive of future events, but not strongly so.
  7. Some should be thinking about how to create "friendly" machine intelligences. 

We seem to disagree modestly about the relative chances of the emulation and direct-coding approaches; I think the first, and he thinks the second, is more likely to succeed first.  Our largest disagreement seems to be on the chance that a single hand-coded version will suddenly and without warning change from nearly powerless to overwhelmingly powerful; I’d put it at less than 1% and he seems to put it at over 10%. 

Continue reading "Setting The Stage" »


Failure By Affective Analogy

Previously in series: Failure By Analogy

Alchemy is a way of thinking that humans do not instinctively spot as stupid.  Otherwise alchemy would never have been popular, even in medieval days.  Turning lead into gold by mixing it with things that seemed similar to gold sounded every bit as reasonable, back in the day, as trying to build a flying machine with flapping wings.  (And yes, it was worth trying once, but you should notice if Reality keeps saying "So what?")

And the final and most dangerous form of failure by analogy is to say a lot of nice things about X, which is similar to Y, so we should expect nice things of Y. You may also say horrible things about Z, which is the polar opposite of Y, so if Z is bad, Y should be good.

Call this "failure by affective analogy".

Continue reading "Failure By Affective Analogy" »


Failure By Analogy

Previously in series: Logical or Connectionist AI?
Followup to: Surface Analogies and Deep Causes

"One of [the Middle Ages’] characteristics was that ‘reasoning by analogy’ was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together. A reason for mentioning this is to point out that, by developing a keen ear for unwarranted analogies, one can detect a lot of medieval thinking today."
        — Edsger W. Dijkstra

<geoff> neural nets are over-rated
<starglider> Their potential is overrated.
<geoff> their potential is us
        — #sl4

Wasn’t it in some sense reasonable to have high hopes of neural networks?  After all, they’re just like the human brain, which is also massively parallel, distributed, asynchronous, and –

Hold on.  Why not analogize to an earthworm’s brain, instead of a human’s?

Continue reading "Failure By Analogy" »
