Monthly Archives: November 2008

“Evicting” brain emulations

Followup to: Brain Emulation and Hard Takeoff

Suppose that Robin’s Crack of a Future Dawn scenario occurs: whole brain emulations ("ems") are developed; diverse producers create ems of many different human brains, which are copied extensively until the marginal productivity of em labor approaches its marginal cost, i.e., Malthusian near-subsistence wages. Ems that hold capital could use it to increase their wealth by investing, e.g., by creating improved ems and collecting the fruits of their increased productivity, by investing in hardware to rent to ems, or otherwise. However, an em would not be able to earn higher returns on its capital than any other investor, and ems with no capital would not be able to earn more than subsistence (including rental or licensing payments). In Robin’s preferred scenario, free ems would borrow or rent bodies, devoting their wages to rental costs, and would be subject to "eviction" or "repossession" for nonpayment.

In this intensely competitive environment, even small differences in productivity between em templates will result in great differences in market share, as an em template with higher productivity can outbid less productive templates for scarce hardware resources in the rental market, resulting in their "eviction" until the new template fully supplants them in the labor market. Initially, the flow of more productive templates and competitive niche exclusion might be driven by the scanning of additional brains with varying skills, abilities, temperament, and values, but later on em education and changes in productive skill profiles would matter more.
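
To make that niche-exclusion dynamic concrete, here is a minimal sketch in which each template can bid at most its surplus over subsistence for scarce hardware; the productivity and subsistence figures are invented for illustration:

```python
# Toy model of rent-bidding niche exclusion among em templates.
# All numbers are hypothetical; only their ordering matters.

SUBSISTENCE = 1.00  # cost per em-hour to keep an em running (rent, power)

productivity = {
    "template_A": 1.10,  # output value per em-hour
    "template_B": 1.12,  # only ~2% more productive than A
}

# Each template can bid at most its surplus over subsistence for hardware.
max_bid = {name: p - SUBSISTENCE for name, p in productivity.items()}
winner = max(max_bid, key=max_bid.get)

print(max_bid)  # {'template_A': ~0.10, 'template_B': ~0.12}
print(f"{winner} outbids the rest on every unit of hardware; they are evicted")
```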

Continue reading "“Evicting” brain emulations" »


Are you dreaming?

Often when I’m dreaming I “feel” that I’m awake.  When I’m awake, however, I always “feel” that I’m awake and have no conscious doubt (except in the philosophical sense) that I’m not dreaming.

But logically when I “feel” awake I should believe there is a non-trivial chance that I’m dreaming.  This has implications for how I should behave.

For example, imagine I’m considering eating spinach or chocolate.  I like the taste of chocolate more than spinach, but recognize that spinach is healthier for me.  Let’s say that if the probability of my being awake were greater than 99%, then to maximize the expected overall quality of my life I should eat the spinach; otherwise I should pick the chocolate.

Rationally, I should probably figure that the chance of my being awake is less than 99%, so I should go with the chocolate.  Yet, like most other humans, I don’t take into account that I might be dreaming when I “feel” awake.
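
Here is a minimal expected-utility sketch of the choice. The 99% threshold comes from the example above; the utility numbers are hypothetical values chosen so that the decision flips exactly there:

```python
# Minimal expected-utility model of the spinach/chocolate choice.
# Utilities are hypothetical, chosen to put the flip exactly at P(awake)=99%.

def expected_utility(p_awake: float) -> dict:
    # If actually awake, choices have lasting consequences: health beats taste.
    u_spinach_awake, u_chocolate_awake = 1.0, 0.0
    # If dreaming, only the in-dream experience matters: chocolate wins big.
    u_spinach_dream, u_chocolate_dream = 0.0, 99.0
    return {
        "spinach": p_awake * u_spinach_awake + (1 - p_awake) * u_spinach_dream,
        "chocolate": p_awake * u_chocolate_awake
                     + (1 - p_awake) * u_chocolate_dream,
    }

for p in (0.999, 0.99, 0.98):
    eu = expected_utility(p)
    print(f"P(awake)={p}: eat {max(eu, key=eu.get)}")
# Above 99% confidence spinach maximizes expected utility; below it, chocolate.
```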

Over the long run, you would likely reduce your inclusive genetic fitness if, whenever you “feel” awake, you acted as if there were a less than 100% chance of your actually being awake.  For this reason I suspect we are “genetically programmed” never to doubt that we are awake when we “feel” awake, even though it would be rational to hold such a doubt.


Surprised by Brains

Followup to: Life’s Story Continues

Imagine two agents who’ve never seen an intelligence – including, somehow, themselves – but who’ve seen the rest of the universe up until now, arguing about what these newfangled "humans" with their "language" might be able to do…

Believer:  Previously, evolution has taken hundreds of thousands of years to create new complex adaptations with many working parts.  I believe that, thanks to brains and language, we may see a new era, an era of intelligent design. In this era, complex causal systems – with many interdependent parts that collectively serve a definite function – will be created by the cumulative work of many brains building upon each other’s efforts.

Skeptic:  I see – you think that brains might have something like a 50% speed advantage over natural selection?  So it might take a while for brains to catch up, but after another eight billion years, brains will be in the lead.  But this planet’s Sun will swell up by then, so –

Believer:  Fifty percent?  I was thinking more like three orders of magnitude. With thousands of brains working together and building on each other’s efforts, whole complex machines will be designed on the timescale of mere millennia – no, centuries!

Skeptic:  What?

Believer:  You heard me.

Continue reading "Surprised by Brains" »


Billion Dollar Bots

Robin presented a scenario in which whole brain emulations, or what he calls "bots," come into being.  Here is another:

Bots are created with hardware and software.  The higher the quality of one input, the less you need of the other.  Hardware, especially with cloud computing, can be quickly reallocated from one task to another.  So the first bot might run on hardware worth billions of dollars.

The first bot’s creators would receive tremendous prestige and a guaranteed place in the history books.  So once it becomes possible to create a bot, many firms and rich individuals will be willing to create one even if doing so would cause them to suffer a large loss.

Imagine that some group has $300 million to spend on hardware and will use the money as soon as $300 million becomes enough to create a bot.  The best way to spend this money would not be to buy a $300 million computer but to rent $300 million worth of off-peak computing power.  If the group needed only 1,000 hours of computing power (which it need not buy all at once) to prove that it had created a bot, then the group could have, roughly, $3 billion of hardware for the needed 1,000 hours.
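
A back-of-the-envelope sketch of that rent-versus-buy leverage, assuming an amortization period and rental markup that are mine rather than the post's:

```python
# Back-of-the-envelope check of the rent-versus-buy leverage.
# The amortization period and rental markup are assumed, not from the post.

budget = 300e6               # dollars available
hours_needed = 1_000         # hours of computation to demonstrate a bot
lifetime_hours = 3 * 8_760   # assume hardware amortized over ~3 years
markup = 2.0                 # assume renting costs 2x amortized owner cost

# Renting pays only for hours used (plus the owner's margin), so the same
# budget commands far more hardware for a short burst.
rentable_hardware_value = budget * lifetime_hours / (hours_needed * markup)

print(f"Buying outright: ${budget / 1e9:.1f}B of hardware")
print(f"Renting instead: ~${rentable_hardware_value / 1e9:.1f}B of hardware "
      f"for {hours_needed:,} hours")
# About $3.9B here, the same order as the post's rough $3 billion figure.
```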

It’s likely that the first bot would run very slowly.  Perhaps it would take the bot 10 real seconds to think as much as a human does in one second.

Under my scenario the first bot would be wildly expensive.  But because of Moore’s law, once the first bot was created, everyone would expect that the cost of bots would eventually become low enough for them to radically remake society.
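
As a rough illustration of that expectation, here is a quick Moore's-law projection, with an assumed halving time and target cost:

```python
# Quick Moore's-law projection of how a wildly expensive first bot becomes
# cheap. The halving time and target cost are assumptions, not the post's.

import math

first_bot_cost = 3e9     # dollars, per the rental figure above
target_cost = 30_000     # roughly a human annual wage (assumed)
halving_years = 2.0      # assumed cost-halving time for hardware

doublings = math.log2(first_bot_cost / target_cost)
print(f"~{doublings * halving_years:.0f} years until bots undercut wages")
```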

Consequently, years before bots actually dominate the economy, many people will come to expect that bots will do so within their lifetimes.  These expectations alone will radically change the world.

I suspect that after it becomes obvious that we could eventually create cheap bots, world governments will devote trillions to bot Manhattan Projects.  The expected benefits of winning the bot race will be so high that it will be in the self-interest of individual governments not to worry too much about bot friendliness.

The U.S. and Chinese militaries might fall into a bot prisoner’s dilemma: both would prefer an outcome in which everyone slowed bot development to ensure friendliness, yet each nation would be individually better off (regardless of what the other did) taking huge chances on friendliness so as to increase its probability of winning the bot race.
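
This can be sketched as a standard 2x2 game. The payoff numbers below are invented; all that matters is their ordering, which makes rushing each side's dominant strategy:

```python
# The bot race as a 2x2 prisoner's dilemma. Payoff numbers are invented;
# only their ordering matters: each side privately prefers to rush.

# payoffs[(us, china)] = (U.S. payoff, China payoff)
payoffs = {
    ("careful", "careful"): (3, 3),  # everyone slows down; safest world
    ("careful", "rush"):    (0, 4),  # the rusher likely wins the race
    ("rush",    "careful"): (4, 0),
    ("rush",    "rush"):    (1, 1),  # risky race, friendliness neglected
}

for china in ("careful", "rush"):
    best = max(("careful", "rush"), key=lambda us: payoffs[(us, china)][0])
    print(f"If China plays {china!r}, the U.S. does better playing {best!r}")
# "rush" strictly dominates for each side, so (rush, rush) is the equilibrium
# even though (careful, careful) would leave both nations better off.
```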

My hope is that the U.S. will have such a tremendous advantage over China that the Chinese don’t try to win the race and the U.S. military thinks it can afford to go slow.  But given China’s relatively high growth rate, I doubt humanity will luck into this safe scenario.


Brain Emulation and Hard Takeoff

The construction of a working brain emulation would require, aside from brain scanning equipment and computer hardware to test and run emulations on, highly intelligent and skilled scientists and engineers to develop and improve the emulation software. How many such researchers? A billion-dollar project might employ thousands, of widely varying quality and expertise, who would acquire additional expertise over the course of a successful project that results in a working prototype. Now, as Robin says:

They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane.  I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find.  While a few key insights would allow large gains, most gains would come from many small improvements.   

Some project would start selling bots when their bot cost fell substantially below the (speedup-adjusted) wages of a profession with humans available to scan.  Even if this risked more leaks, the vast revenue would likely be irresistible.   

To make further improvements they would need skilled workers up to speed on relevant fields and on the specific workings of the project’s design. But the project described above can now run an emulation at a cost substantially less than the wages that emulation can bring in. In other words, it is now cheaper for the project to run an instance of one of its own brain-emulation engineers than it is to hire outside staff or collaborate with competitors. This is especially so because an emulation can be run at high speed to catch up on areas it does not know well, faster than humans could be hired and brought up to speed, and can then be duplicated many times. The limiting resource for further advances is no longer the supply of expert humans, but simply computing hardware on which to run emulations.
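
A toy calculation, with assumed wage, hardware-cost, and speedup figures, shows why the supply of expert labor stops binding at that point:

```python
# Toy comparison (all figures assumed) of buying engineering effort by
# hiring humans versus running copies of the project's own emulated engineers.

wage_per_year = 300_000      # market wage for a skilled human engineer
em_cost_per_year = 100_000   # hardware cost to run one emulation for a year
speedup = 10                 # subjective work-years per wall-clock year

# Engineering output purchased per dollar, in engineer-years:
per_dollar_hiring = 1 / wage_per_year
per_dollar_ems = speedup / em_cost_per_year

print(f"hiring humans:   {per_dollar_hiring * 1e6:.1f} engineer-years per $1M")
print(f"running own ems: {per_dollar_ems * 1e6:.1f} engineer-years per $1M")
# 3.3 versus 100.0: each dollar of revenue recycled into hardware buys about
# 30x the engineering effort, so expert labor stops being the bottleneck.
```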

Continue reading "Brain Emulation and Hard Takeoff" »


Emulations Go Foom

Let me consider the AI-foom issue by painting a (looong) picture of the AI scenario I understand best, whole brain emulations, which I’ll call “bots.”  Here goes.

When investors anticipate that a bot may be feasible soon, they will estimate their chances of creating bots of different levels of quality and cost, as a function of the date, funding, and strategy of their project.  A bot more expensive than any (speedup-adjusted) human wage is of little direct value, but exclusive rights to make a bot costing below most human wages would be worth many trillions of dollars.

It may well be socially cost-effective to start a bot-building project with a 1% chance of success when its cost falls to the trillion dollar level.  But not only would successful investors probably gain only a small fraction of this net social value, it is also unlikely that any investor group able to direct a trillion dollars could be convinced the project was feasible – there are just too many smart-looking idiots making crazy claims around.

But when the cost to try a 1% project fell below a billion dollars, dozens of groups would no doubt take a shot.  Even if they expected the first feasible bots to be very expensive, they might hope to bring that cost down quickly.  Even if copycats would likely profit more than they would, such an enormous prize would still be very tempting.
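
The investment logic of the last two paragraphs can be checked with rough expected values; the specific dollar figures below are stand-ins of mine, not the post's:

```python
# Rough expected values for the two project price points above. The 1% odds
# and a prize "worth many trillions" are the post's; the dollar stand-ins
# below are my own assumptions.

p_success = 0.01
social_value = 200e12    # assumed total social value of cheap bot labor
private_prize = 10e12    # exclusive rights worth "many trillions"

social_ev = p_success * social_value     # $2T
private_ev = p_success * private_prize   # $100B

for cost in (1e12, 1e9):
    print(f"project cost ${cost:.0e}: "
          f"socially worth it: {social_ev > cost}, "
          f"privately worth it: {private_ev > cost}")
# At a $1T price the project passes the social test but no private group
# bites; once the cost falls to $1B, the private bet alone pays ~100x in EV.
```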

The first priority for a bot project would be to create as much emulation fidelity as affordable, to achieve a functioning emulation, i.e., one you could talk to and so on.  Few investments today are allowed a decade of red ink, and so most bot projects would fail within a decade, their corpses warning others about what not to try.  Eventually, however, a project would succeed in making an emulation that is clearly sane and cooperative.

Continue reading "Emulations Go Foom" »


Life’s Story Continues

Followup to: The First World Takeover

As last we looked at the planet, Life’s long search in organism-space had only just gotten started.

When I try to structure my understanding of the unfolding process of Life, it seems to me that, to understand the optimization velocity at any given point, I want to break down that velocity using the following abstractions (a toy sketch of how they combine follows the list):

  • The searchability of the neighborhood of the current location, and the availability of good/better alternatives in that rough region. Maybe call this the optimization slope.  Are the fruit low-hanging or high-hanging, and how large are the fruit?
  • The optimization resources, like the amount of computing power available to a fixed program, or the number of individuals in a population pool.
  • The optimization efficiency, a curve that gives the amount of searchpower generated by a given investiture of resources, which is presumably a function of the optimizer’s structure at that point in time.
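
As promised above, here is a toy sketch of how these three abstractions combine. It flattens the efficiency curve to a constant for simplicity, and every number is invented:

```python
# Toy decomposition of optimization velocity, per the three abstractions above.
# The efficiency curve is flattened to a constant; all numbers are invented.

def optimization_velocity(resources: float, efficiency: float,
                          slope: float) -> float:
    searchpower = efficiency * resources  # how much search you can generate
    return searchpower * slope            # how well that search pays off here

# Natural selection: enormous resources, tiny efficiency per unit.
print(optimization_velocity(resources=1e9, efficiency=1e-9, slope=1.0))  # 1.0
# Brains: far fewer "units", but vastly higher efficiency each.
print(optimization_velocity(resources=1e4, efficiency=1e-3, slope=1.0))  # 10.0
```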

Continue reading "Life’s Story Continues" »


Observing Optimization

Followup to: Optimization and the Singularity

In "Optimization and the Singularity" I pointed out that history since the first replicator, including human history to date, has mostly been a case of nonrecursive optimization – where you’ve got one thingy doing the optimizing, and another thingy getting optimized.  When evolution builds a better amoeba, that doesn’t change the structure of evolution – the mutate-reproduce-select cycle.

But there are exceptions to this rule, such as the invention of sex, which affected the structure of natural selection itself – transforming it to mutate-recombine-mate-reproduce-select.

I was surprised when Robin, in "Eliezer’s Meta-Level Determinism" took that idea and ran with it and said:

…his view does seem to make testable predictions about history.  It suggests the introduction of natural selection and of human culture coincided with the very largest capability growth rate increases.  It suggests that the next largest increases were much smaller and coincided in biology with the introduction of cells and sex, and in humans with the introduction of writing and science.  And it suggests other rate increases were substantially smaller.

It hadn’t occurred to me to try to derive that kind of testable prediction.  Why?  Well, partially because I’m not an economist.  (Don’t get me wrong, it was a virtuous step to try.)  But also because the whole issue looked to me like it was a lot more complicated than that, so it hadn’t occurred to me to try to directly extract predictions.

What is this "capability growth rate" of which you speak, Robin? There are old, old controversies in evolutionary biology involved here.

Continue reading "Observing Optimization" »


AI Go Foom

It seems to me that it is up to [Eliezer] to show us how his analysis, using his abstractions, convinces him that, more likely than it might otherwise seem, hand-coded AI will come soon and in the form of a single suddenly super-powerful AI.

As this didn’t prod a response, I guess it is up to me to summarize Eliezer’s argument as best I can, so I can then respond.  Here goes:

A machine intelligence can directly rewrite its entire source code and redesign its entire physical hardware.  While human brains can in principle modify themselves arbitrarily, in practice our limited understanding of ourselves means we mainly change ourselves by thinking new thoughts.  All else equal, this means that machine brains have an advantage in improving themselves.

A mind without arbitrary capacity limits that focuses on improving itself can probably do so indefinitely.  The growth rate of its "intelligence" may be slow when it is dumb, but gets faster as it gets smarter.  This growth rate also depends on how many parts of itself it can usefully change.  So, all else equal, the growth rate of a machine intelligence must be greater than the growth rate of a human brain.

No matter what its initial disadvantage, a system with a faster growth rate eventually wins.  So if the growth rate advantage is large enough, then yes, a single computer could well go in a few days from less-than-human intelligence to so smart it could take over the world.  QED.
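
A minimal numerical sketch, with arbitrary constants, of the core dynamic in this summary: an improver whose growth rate scales with its own capability eventually overtakes any fixed-rate improver, however large the head start:

```python
# Numerical sketch: a fixed-rate improver versus one whose rate scales with
# its own capability. Constants are arbitrary; only the shape matters.

dt = 0.01
human, machine = 100.0, 1.0   # the machine starts 100x behind
HUMAN_RATE = 0.01             # humans improve at a fixed 1% rate

t = 0.0
while machine < human:
    human += human * HUMAN_RATE * dt
    machine += machine * (0.01 * machine) * dt  # rate grows with capability
    t += dt

print(f"machine overtakes at t = {t:.1f}")  # slow start, then abrupt takeoff
```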

So Eliezer, is this close enough to be worth my response?  If not, could you suggest something closer?


Whence Your Abstractions?

Reply to: Abstraction, Not Analogy

Robin asks:

Eliezer, have I completely failed to communicate here?  You have previously said nothing is similar enough to this new event for analogy to be useful, so all we have is "causal modeling" (though you haven’t explained what you mean by this in this context).  This post is a reply saying, no, there are more ways using abstractions; analogy and causal modeling are two particular ways to reason via abstractions, but there are many other ways.

Well… it shouldn’t be surprising if you’ve communicated less than you thought.  Two people, both of whom know that disagreement is not allowed, have a persistent disagreement.  It doesn’t excuse anything, but – wouldn’t it be more surprising if their disagreement rested on intuitions that were easy to convey in words, and points readily dragged into the light?

I didn’t think from the beginning that I was succeeding in communicating.  Analogizing Doug Engelbart’s mouse to a self-improving AI is for me such a flabbergasting notion – indicating such completely different ways of thinking about the problem – that I am trying to step back and find the differing sources of our differing intuitions.

(Is that such an odd thing to do, if we’re really following down the path of not agreeing to disagree?)

"Abstraction", for me, is a word that means a partitioning of possibility – a boundary around possible things, events, patterns.  They are in no sense neutral; they act as signposts saying "lump these things together for predictive purposes".  To use the word "singularity" as ranging over human brains, farming, industry, and self-improving AI, is very nearly to finish your thesis right there.

Continue reading "Whence Your Abstractions?" »
