Category Archives: AI

What Core Argument?

People keep asking me to return to the core of the argument, but, well, there's just not much there.  Let's review, again.  Eliezer suggests that someone may soon come up with a seed AI architecture allowing a single AI to grow, within roughly a week, from unimportant to strong enough to take over the world.  I'd guess we are talking about over 20 orders of magnitude of growth in its capability, or roughly 60 doublings.
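For scale, a back-of-the-envelope calculation (mine, not a figure from either side of the debate): 20 orders of magnitude is log2(10^20) = 20 * log2(10) ≈ 66 doublings, in line with the roughly 60 cited above, and packing that into one week (168 hours) would mean the AI's capability doubling about every 168 / 66 ≈ 2.5 hours, sustained for the entire week.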

This amazing growth rate sustained over such a large magnitude range is far beyond what the vast majority of AI researchers, growth economists, or most any other specialists would estimate.  It is also far beyond estimates suggested by the usual choices of historical analogs or trends.  Eliezer says the right reference set has two other elements, the origin of life and the origin of human minds, but why should we accept this reference?  He also has a math story to suggest this high average growth, but I've said:

I also find Eliezer's growth math unpersuasive. Usually dozens of relevant factors are co-evolving, with several loops of "all else equal, X growth speeds Y growth, which speeds Z growth," and so on. Yet usually it all adds up to exponential growth, with rare jumps to faster growth rates. Sure, if you pick two things that plausibly speed each other, and leave everything else out, including diminishing returns, your math can suggest accelerating growth to infinity; but for a real foom that loop needs to be really strong, much stronger than contrary muting effects.
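To make that last point concrete, here is a minimal toy simulation (my own illustrative sketch, with made-up parameters, not a model from either side of the debate): two quantities each speed the other's growth, and with nothing muting the loop the pair accelerates toward infinity, while adding even a mild diminishing-returns term pulls the trajectory back toward ordinary exponential-style growth.

```python
# Toy model: two co-evolving capabilities, x and y, each speeding the other's growth.
# With the bare loop (no diminishing returns) the pair accelerates toward infinity;
# a mild diminishing-returns factor mutes it back to roughly exponential growth.
# All parameter values are illustrative only.

def simulate(diminishing_returns, steps=2000, dt=0.01, a=1.0, b=1.0):
    x, y = 1.0, 1.0
    for step in range(steps):
        damp = 1.0 / (1.0 + x) if diminishing_returns else 1.0
        dx = a * x * y * damp * dt   # x's growth sped by y
        dy = b * x * y * damp * dt   # y's growth sped by x
        x, y = x + dx, y + dy
        if x > 1e12:
            return f"accelerated past 1e12 by step {step}"
    return f"reached only {x:.3g} after {steps} steps"

print(simulate(diminishing_returns=False))  # the bare loop: explosive, foom-like growth
print(simulate(diminishing_returns=True))   # the muted loop: fast but far tamer growth
```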

Eliezer has some story about how chimp vs. human brain sizes show that mind design doesn't suffer diminishing returns or low-hanging-fruit-first slowdowns, but I have yet to comprehend this argument.  Eliezer says it is a myth that chip developers need the latest chips to improve chips as fast as they do, so there aren't really diminishing returns there, but chip expert Jed Harris seems to disagree.

Continue reading "What Core Argument?" »

Two Visions Of Heritage

Eliezer and I seem to disagree on our heritage.

I see our main heritage from the past as all the innovations embodied in the design of biological cells/bodies, of human minds, and of the processes/habits of our hunting, farming, and industrial economies.  These innovations are mostly steadily accumulating modular "content" within our architectures, produced via competitive processes and implicitly containing both beliefs and values.  Architectures also change at times.

Since older heritage levels grow more slowly, we switch when possible to rely on newer heritage levels.  For example, we once replaced hunting processes with farming processes, and within the next century we may switch from bio to industrial mental hardware, becoming ems.  We would then rely far less on bio and hunting/farm heritages, though still lots on mind and industry heritages.  Later we could make AIs by transferring mind content to new mind architectures.  As our heritages continued to accumulate, our beliefs and values should continue to change. 

I see the heritage we will pass to the future as mostly avoiding disasters to preserve and add to these accumulated contents.  We might get lucky and pass on an architectural change or two as well.  As ems we can avoid our bio death heritage, allowing some of us to continue on as ancients living on the margins of far future worlds, personally becoming a heritage to the future.

Continue reading "Two Visions Of Heritage" »

Are AIs Homo Economicus?

Eliezer yesterday:

If I had to pinpoint a single thing that strikes me as “disagree-able” about the way Robin frames his analyses, it’s that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they’re less expensive to build/teach/run.  … The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind design space.

Lots of folks complain about economists; believers in peak oil, the gold standard, recycling, electric cars, rent control, minimum wages, tariffs, and bans on all sorts of things complain about contrary economic analyses.  Since, compared to most social scientists, economists use relatively stark mathy models, the usual complaint is that our models neglect relevant factors and make false assumptions.

But of course we must neglect most everything, and make false assumptions, to have tractable models; the question in each context is what neglected factors and false assumptions would most mislead us.

It is odd to hear complaints that economic models assume too much humanity; the usual complaint is the opposite.  Unless physicists have reasons to assume otherwise, they usually assume masses are at points, structures are rigid, surfaces are frictionless, and densities are uniform.  Similarly, unless economists have reasons to be more realistic in a context, they usually assume people are identical, risk-neutral, live forever, have selfish material stable desires, know everything, make no mental mistakes, and perfectly enforce every deal.  Products usually last one period or forever, are identical or infinitely varied, etc.

Continue reading "Are AIs Homo Economicus?" »

True Sources of Disagreement

Followup to: Is That Your True Rejection?

I expected from the beginning that the difficult part of two rationalists reconciling a persistent disagreement would be exposing the true sources of their beliefs.

One suspects that this will only work if each party takes responsibility for their own end; it’s very hard to see inside someone else’s head.  Yesterday I exhausted myself mentally while out on my daily walk, asking myself the Question "What do you think you know, and why do you think you know it?" with respect to "How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?"  Trying to either understand why my brain believed what it believed, or else force my brain to experience enough genuine doubt that I could reconsider the question and arrive at a real justification that way.  It’s hard to see how Robin Hanson could have done any of this work for me.

Presumably a symmetrical fact holds about my lack of access to the real reasons why Robin believes what he believes.  To understand the true source of a disagreement, you have to know why both sides believe what they believe – one reason why disagreements are hard to resolve.

Nonetheless, here’s my guess as to what this Disagreement is about:

Continue reading "True Sources of Disagreement" »

Wrapping Up

This Friendly AI discussion has taken more time than I planned or have.  So let me start to wrap up.

On small scales we humans evolved to cooperate via various pair and group bonding mechanisms.  But these mechanisms aren’t of much use on today’s evolutionarily-unprecedented large scales.  Yet we do in fact cooperate on the largest scales.  We do this because we are risk-averse, because our values conflict mainly over the use of resources that conflicts would destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.

I raise my kids because they share my values.  I teach other kids because I’m paid to.  Folks raise horses because others pay them for horses, expecting horses to cooperate as slaves.  You might expect your pit bulls to cooperate, but we should only let you raise pit bulls if you can pay sufficient damages should they hurt your neighbors.

In my preferred em (whole brain emulation) scenario, people would only authorize making em copies using borrowed or rented brains/bodies when they expected those copies to have lives worth living.  With property rights enforced, both sides would expect to benefit more when copying was allowed.  Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.

Similarly, we expect AI developers to plan to benefit from AI cooperation, via either direct control, indirect control such as via property rights institutions, or such creatures having cooperative values.  As with pit bulls, developers should have to show an ability, perhaps via insurance, to pay plausible hurt amounts if their creations hurt others.  To the extent they or their insurers feared such hurt, they would test for various hurt scenarios, slowing development as needed to support such testing.  To the extent they feared inequality from some developers succeeding first, they could exchange shares, or share certain kinds of info.  Naturally occurring info leaks and shared sources, both encouraged by shared standards, would limit this inequality.

Continue reading "Wrapping Up" »

Artificial Mysterious Intelligence

Previously in series: Failure By Affective Analogy

I once had a conversation that I still remember for its sheer, purified archetypicality.  This was a nontechnical guy, but pieces of this dialog have also appeared in conversations I’ve had with professional AI folk:

Him:  Oh, you’re working on AI!  Are you using neural networks?

Me:  I think emphatically not.

Him:  But neural networks are so wonderful!  They solve problems and we don’t have any idea how they do it!

Me:  If you are ignorant of a phenomenon, that is a fact about your state of mind, not a fact about the phenomenon itself.  Therefore your ignorance of how neural networks are solving a specific problem cannot be responsible for making them work better.

Him:  Huh?

Me:  If you don’t know how your AI works, that is not good.  It is bad.

Him:  Well, intelligence is much too difficult for us to understand, so we need to find some way to build AI without understanding how it works.

Continue reading "Artificial Mysterious Intelligence" »

Friendly Projects vs. Products

I’m a big board game fan, and my favorite these days is Imperial.   Imperial looks superficially like the classic strategy-intense war game Diplomacy, but with a crucial difference:  instead of playing a nation trying to win WWI, you play a banker trying to make money from that situation.  If a nation you control (by having loaned it the most) is threatened by another nation, you might indeed fight a war, but you might instead just buy control of that nation.  This is a great way to mute conflicts in a modern economy: have conflicting groups buy shares in each other.

For projects to create new creatures, such as ems or AIs, there are two distinct friendliness issues: 

Project Friendliness:  Will the race make winners and losers, and how will winners treat losers?  While any race might be treated as part of a total war on several sides, usually the inequality created by the race is moderate and tolerable.  For larger inequalities, projects can explicitly join together, agree to cooperate in weaker ways such as by sharing information, or buy shares in each other.  Naturally arising info leaks and shared standards may also reduce inequality even without intentional cooperation.  The main reason for failure here would seem to be the sorts of distrust that plague all human cooperation.

Product Friendliness:  Will the creatures cooperate with or rebel against their creators?  Folks running a project have reasonably strong incentives to avoid this problem.  Of course in the case of extremely destructive creatures, the project might internalize more of the gains from cooperative creatures than of the losses from rebellious ones.  So there might be some grounds for wider regulation.  But the main reason for failure here would seem to be poor judgment, thinking you had your creatures more surely under control than in fact you did.

It hasn’t been that clear to me which of these is the main concern re "friendly AI." 

Added:  Since Eliezer says product friendliness is his main concern, let me note that the main problem there is the tails of the distribution of bias among project leaders.  If all projects agreed the problem was very serious, they would take nearly appropriate caution to isolate their creatures, test creature values, and slow creature development enough to track progress sufficiently.  Designing and advertising a solution is one approach to reducing this bias, but it need not be the best approach; perhaps institutions like prediction markets that aggregate info and congeal a believable consensus would be more effective.

Sustained Strong Recursion

Followup to: Cascades, Cycles, Insight, Recursion, Magic

We seem to have a sticking point at the concept of "recursion", so I’ll zoom in.

You have a friend who, even though he makes plenty of money, just spends all that money every month.  You try to persuade your friend to invest a little – making valiant attempts to explain the wonders of compound interest by pointing to analogous processes in nature, like fission chain reactions.

"All right," says your friend, and buys a ten-year bond for $10,000, with an annual coupon of $500.  Then he sits back, satisfied.  "There!" he says.  "Now I’ll have an extra $500 to spend every year, without my needing to do any work!  And when the bond comes due, I’ll just roll it over, so this can go on indefinitely.  Surely, now I’m taking advantage of the power of recursion!"

"Um, no," you say.  "That’s not exactly what I had in mind when I talked about ‘recursion’."

"But I used some of my cumulative money earned, to increase my very earning rate," your friend points out, quite logically.  "If that’s not ‘recursion’, what is?  My earning power has been ‘folded in on itself’, just like you talked about!"

"Well," you say, "not exactly.  Before, you were earning $100,000 per year, so your cumulative earnings went as 100000 * t.  Now, your cumulative earnings are going as 100500 * t.  That’s not really much of a change.  What we want is for your cumulative earnings to go as B * e^At for some constants A and B – to grow exponentially."

"Exponentially!" says your friend, shocked.

"Yes," you say, "recursification has an amazing power to transform growth curves.  In this case, it can turn a linear process into an exponential one.  But to get that effect, you have to reinvest the coupon payments you get on your bonds – or at least reinvest some of them, instead of just spending them all.  And you must be able to do this over and over again.  Only then will you get the ‘folding in’ transformation, so that instead of your cumulative earnings going as y = F(t) = A*t, your earnings will go as the differential equation dy/dt = F(y) = A*y whose solution is y = e^(A*t)."

Continue reading "Sustained Strong Recursion" »

Permitted Possibilities, & Locality

Continuation of: Hard Takeoff

The analysis given in the last two days permits more than one possible AI trajectory:

  1. Programmers, smarter than evolution at finding tricks that work, but operating without fundamental insight or with only partial insight, create a mind that is dumber than the researchers but performs lower-quality operations much faster.  This mind reaches k > 1, cascades up to the level of a very smart human, itself achieves insight into intelligence, and undergoes the really fast part of the FOOM, to superintelligence.  This would be the major nightmare scenario for the origin of an unFriendly AI.
  2. Programmers operating with partial insight create a mind that performs a number of tasks very well, but can’t really handle self-modification, let alone AI theory.  A mind like this might progress with something like smoothness, pushed along by the researchers rather than by itself, even all the way up to average-human capability – not having the insight into its own workings to push itself any further.  We also suppose that the mind is either already using huge amounts of available hardware, or scales very poorly, so it cannot go FOOM just as a result of adding a hundred times as much hardware.  This scenario seems less likely to my eyes, but it is not ruled out by any effect I can see.
  3. Programmers operating with strong insight into intelligence directly create, along an efficient and planned pathway, a mind capable of modifying itself with deterministic precision – provably correct or provably noncatastrophic self-modifications.  This is the only way I can see to achieve narrow enough targeting to create a Friendly AI.  The "natural" trajectory of such an agent would be slowed by the requirements of precision, and sped up by the presence of insight; but because this is a Friendly AI, notions like "You can’t yet improve yourself this far, your goal system isn’t verified enough" would play a role.

So these are some things that I think are permitted to happen, though case 2 would count as a hit against me to some degree, because it does seem unlikely.

Here are some things that shouldn’t happen, on my analysis:

Continue reading "Permitted Possibilities, & Locality" »

Hard Takeoff

Continuation of: Recursive Self-Improvement

Constant natural selection pressure, operating on the genes of the hominid line, produced improvement in brains over time that seems to have been, roughly, linear or accelerating; the operation of constant human brains on a pool of knowledge seems to have produced returns that are, very roughly, exponential or superexponential.  (Robin proposes that human progress is well-characterized as a series of exponential modes with diminishing doubling times.)

Recursive self-improvement – an AI rewriting its own cognitive algorithms – identifies the object level of the AI with a force acting on the metacognitive level; it "closes the loop" or "folds the graph in on itself".  E.g. the difference between returns on a constant investment in a bond, and reinvesting the returns into purchasing further bonds, is the difference between the equations y = f(t) = m*t, and dy/dt = f(y) = m*y whose solution is the compound interest exponential, y = e^(m*t).

When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM.  An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely – far more unlikely than seeing such behavior in a system with a roughly-constant underlying optimizer, like evolution improving brains, or human brains improving technology.  Our present life is no good indicator of things to come.

Or to try and compress it down to a slogan that fits on a T-Shirt – not that I’m saying this is a good idea – "Moore’s Law is exponential now; it would be really odd if it stayed exponential with the improving computers doing the research."  I’m not saying you literally get dy/dt = e^y that goes to infinity after finite time – and hardware improvement is in some ways the least interesting factor here – but should we really see the same curve we do now?
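As a side note on that last equation (a standard separation-of-variables calculation, not something from this post): dy/dt = e^y separates to e^(-y) dy = dt; integrating gives -e^(-y) = t - C, so y(t) = -ln(C - t), with C = e^(-y(0)).  The solution really does run off to infinity as t approaches C, after only a finite time – unlike the ordinary compound-interest exponential y = e^(m*t), which merely grows fast forever.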

RSI is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a "hard takeoff" aka "AI go FOOM", but it’s nowhere near being the only such factor.  The advent of human intelligence was a discontinuity with the past even without RSI…

Continue reading "Hard Takeoff" »
