Tag Archives: Future

Growth Is Change. So Is Death.

The very readable book The Wizard and the Prophet tells the story of environmental prophet William Vogt investigating the apocalypse-level deaths of guano-making birds near Peru. When he traced the cause to the El Niño weather cycle, his policy recommendation was to do nothing to mitigate this natural cause; he instead railed against many much smaller human influences, demanding their reversal. A few years later his classic 1948 screed Road To Survival, which contained pretty much all the standard environmental advice and concepts used today, continued to warn against any but small human-caused changes to the environment, while remaining largely indifferent to even huge natural changes.

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after a yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not revert to a mean. They drift long distances over long times, in hard-to-predict ways that can be reasonably modeled as a basic trend plus a random walk.
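
To make the distinction concrete, here is a minimal sketch (my own illustration, not from the post) contrasting a mean-reverting parameter with one that follows a basic trend plus a random walk; the reversion strength, trend size, and noise scale are all arbitrary assumptions.

```python
# Toy comparison: a mean-reverting parameter vs. a trend-plus-random-walk parameter.
import random

def simulate(steps=1000, seed=1):
    random.seed(seed)
    reverting, drifting = 0.0, 0.0
    for _ in range(steps):
        shock = random.gauss(0, 1.0)
        reverting += -0.1 * reverting + shock  # pulled back toward its long-run mean of 0
        drifting += 0.05 + shock               # small trend plus an unbounded random walk
    return reverting, drifting

print(simulate())  # the reverting parameter stays near 0; the drifting one wanders far
```

For the reverting parameter, extreme values are rare and temporary, so extra wariness about extremes makes sense; for the drifting one, large cumulative change is the normal long-run outcome.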

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters, the larger the changes that would happen within that horizon. This perspective predicts that the people most wary of big future changes are those with the longest time horizons, and those who expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes, even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should become more okay with big long term change, seeing it as the legitimate result of the small short term changes you accept.

But of course few are very good at resolving their near versus far incoherences. And so the positions people take end up depending a lot on how they first framed the key issues, as in terms of short or long term changes.

On Value Drift

The outcomes within any space-time region can be seen as resulting from 1) preferences of various actors able to influence the universe in that region, 2) absolute and relative power and influence of those actors, and 3) constraints imposed by the universe. Changes in outcomes across regions result from changes in these factors.

While you might mostly approve of changes resulting from changing constraints, you might worry more about changes due to changing values and influence. That is, you likely prefer to see more influence by values closer to yours. Unfortunately, the consistent historical trend has been for values to drift over time, increasing the expected distance between future values and current values. As this trend looks like a random walk, we see no obvious limit to how far values can drift. So if the value you place on the values of others falls rapidly enough with the distance between values, you should expect long term future values to be very wrong.
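
As a rough illustration (my own toy model, with an assumed decay rate, not anything from the post): if values take a random-walk step each generation, their expected distance from today’s values grows without bound, and a valuation that falls off with that distance heads toward zero over long horizons.

```python
# Toy model: value drift as a random walk, with the value you place on the drifted
# values decaying in their distance from your current values (decay rate is assumed).
import math, random

def expected_future_valuation(generations, decay=0.5, trials=2000, seed=2):
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        distance = 0.0
        for _ in range(generations):
            distance += random.gauss(0, 1.0)       # one generation of value drift
        total += math.exp(-decay * abs(distance))  # valuation falls with value distance
    return total / trials

for generations in (1, 10, 100):
    print(generations, round(expected_future_valuation(generations), 3))
```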

What influences value change?
Inertia – The more existing values are tied to important entrenched systems, the less they change.
Growth – On average, over time civilization collects more total influence over most everything.
Competition – If some values consistently win key competitive contests, those values become more common.
Influence Drift – Many processes that change the world produce random drift in agent influence.
Internal Drift – Some creatures, e.g., humans, have values that drift internally in complex ways.
Culture Drift – Some creatures, e.g., humans, have values that change together in complex ways.
Context – Many of the above processes depend on other factors, such as technology, wealth, a stable sun, etc.

For many of the above processes, rates of change are roughly proportional to overall social rates of change. As these rates of change have increased over time, we should expect faster future change. Thus you should expect values to drift faster in the future than they did in the past, leading faster to wrong values. Also, people are living longer now than they did in the past. So even if past people didn’t live long enough to see changes big enough to greatly bother them, future people may live to see much more change.

Most increases in the rates of change have been concentrated in a few sudden large jumps (associated with the culture, farmer, and industry transitions). As a result, you should expect that rates of change may soon increase greatly. Value drift may continue at past rates until it suddenly goes much faster.

Perhaps you discount the future rapidly, or perhaps the value you place on other values falls slowly with value distance. In these cases value drift may not disturb you much. Otherwise, the situation described above may seem pretty dire. Even if previous generations had to accept the near inevitability of value drift, you might not accept it now. You may be willing to reach for difficult and dangerous changes that could remake the whole situation. Such as perhaps a world government. Personally I see that move as too hard and dangerous for now, but I could understand if you disagree.

The people today who seem most concerned about value drift also seem to be especially concerned about humans or ems being replaced by other forms of artificial intelligence. Many such people are also concerned about a “foom” scenario of a large and sudden influence drift: one initially small computer system suddenly becomes able to grow far faster than the rest of the world put together, allowing it to quickly take over the world.

To me, foom seems unlikely: it posits an innovation that is extremely lumpy compared to historical experience, and in addition posits an unusually high difficulty of copying or complementing this innovation. Historically, innovation value has been distributed with a long thin tail: most realized value comes from many small innovations, but we sometimes see lumpier innovations. (AlphaZero seems only weak evidence on the distribution of AI lumpiness.) The past history of growth rate increases suggests that within a few centuries we may see something, perhaps a very lumpy innovation, that causes a growth rate jump comparable in size to the largest jumps we’ve ever seen, such as at the origins of life, culture, farming, and industry. However, as over history the ease of copying and complementing such innovations has been increasing, it seems unlikely that copying and complementing will suddenly get much harder.
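
One way to picture what “lumpy” means here (a sketch under my own assumptions, not data from the post): draw many innovation values from a skewed distribution and ask what share of total value the single largest innovation contributes. With a thin tail that share is negligible; only with a much fatter tail does one innovation account for a large share.

```python
# Toy illustration of innovation "lumpiness": the share of total value contributed by
# the single largest innovation, under a thin-tailed vs. a much more skewed lognormal.
import random

def top_share(sigma, n=100_000, seed=3):
    random.seed(seed)
    values = [random.lognormvariate(0, sigma) for _ in range(n)]
    return max(values) / sum(values)

print(top_share(sigma=1.0))  # thin tail: the biggest innovation is a negligible share
print(top_share(sigma=4.0))  # lumpy tail: one innovation can be a large share of the total
```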

While foom seems unlikely, it does seem likely that within a few centuries we will develop machines that can outcompete biological humans for most all jobs. (Such machines might also outcompete ems for jobs, though that outcome is much less clear.) The ability to make such machines seems by itself sufficient to cause a growth rate increase comparable to the other largest historical jumps. Thus the next big jump in growth rates need not be associated with a very lumpy innovation. And in the most natural such scenarios, copying and complementing remain relatively easy.

However, while I expect machines that outcompete humans for jobs, I don’t see how that greatly increases the problem of value drift. Human cultural plasticity already ensures that humans are capable of expressing a very wide range of values. I see no obvious limits there. Genetic engineering will allow more changes to humans. Ems inherit human plasticity, and may add even more via direct brain modifications.

In principle, non-em-based artificial intelligence is capable of expressing the entire space of possible values. But in practice, in the shorter run, such AIs will take on social roles near humans, and roles that humans once occupied. This should force AIs to express pretty human-like values. As Steven Pinker says:

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.

If Pinker is right, the main AI risk mediated by AI values comes from AI value drift that happens after humans (or ems) no longer exercise such detailed frequent oversight.

It may be possible to create competitive AIs with protected values, i.e., so that parts where values are coded are small, modular, redundantly stored, and insulated from changes to the rest of the system. If so, such AIs may suffer much less from internal drift and cultural drift. Even so, the values of AIs with protected values should still drift due to influence drift and competition.

Thus I don’t see why people concerned with value drift should be especially focused on AI. Yes, AI may accompany faster change, and faster change can make value drift worse for people with intermediate discount rates. (Though it seems to me that altruistic discount rates should scale with actual rates of change, not with arbitrary external clocks.)

Yes, AI offers more prospects for protected values, and perhaps also for creating a world/universe government capable of preventing influence drift and competition. But in these cases if you are concerned about value drift, your real concerns are about rates of change and world government, not AI per se. Even the foom scenario just temporarily increases the rate of influence drift.

Your real problem is that you want long term stability in a universe that more naturally changes. Someday we may be able to coordinate to overrule the universe on this. But I doubt we are close enough to even consider that today. To quote a famous prayer:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

For now value drift seems one of those possibly lamentable facts of life that we cannot change.

Small Change Good, Big Change Bad?

Recently I posted on how many seek spiritual insight via cutting the tendency of their minds to wander, yet some like Scott Alexander fear ems with a reduced tendency to mind wandering because they’d have less moral value. On twitter Scott clarified that he doesn’t mind modest cuts in mind wandering; what he fears is extreme cuts. And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

On nature preserves, some fear eventually losing all of wild nature, but when arguing for any particular development others say we need new things and we still have plenty of nature. On military spending, some say the world is peaceful and we have many things we’d rather spend money on, while others say that societies who do not remain militarily vigilant are eventually conquered. On increasing inequality some say that high enough inequality must eventually result in inadequate human capital investments and destructive revolutions, while others say there’s little prospect of revolution now and inequality has historically only fallen much in big disasters such as famine, war, and state collapse. On value drift, some say it seems right to let each new generation choose its values, while others say a random walk in values across generations must eventually drift very far from current values.

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this can result in a net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.
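
A back-of-the-envelope calculation shows why long-term influence is cheap to buy; the 5% real return below is an assumed round number, not a figure from the post.

```python
# Present cost of delivering a given amount of resources at a future date,
# assuming a steady annual real rate of return (assumption: 5%).
def present_cost(future_amount, years, annual_return=0.05):
    return future_amount / (1 + annual_return) ** years

print(round(present_cost(1_000_000, 100)))  # roughly $7,600 today funds $1M a century out
```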

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

Third, our ability to foresee the future rapidly declines with time. The more other things that may happen between today and some future date, the harder it is to foresee what may happen at that future date. We should be increasingly careful about the inferences we draw about longer terms.

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

The One Ruler Obsession

I often teach undergraduate law & economics. Sometimes the first paper I assign is to suggest property rules to deal with conflicts regarding asteroids, orbits, and sunlight in the solar system, in the future when there’s substantial activity out there. This feels to students like a complex different situation, and in fact few understand even the basic issues.

Given just two pages to make their case, a large fraction of students (~1/3?) express fear that one person or organization will take over the entire solar system, unless property rules are designed to explicitly prevent that. And a similar fraction suggest the “property rule” of having a single government agency answer all questions. Whatever question or dispute you have, fill out a form, and the agency will decide.

Yet in my lectures I talk a lot about concepts and issues of property rights, but never mention government agency issues or scenarios, nor the scenario of one power taking over everything. And econ undergrads at my school are famous for being relatively libertarian.

I conclude that most people have a strong innate fear of power concentrations, and yet also see the creation of a single central power as an attractive general solution to complicated problems. I’ve seen the same sort of thing with a great many futuristic tech and policy issues. Whatever the question, if it seems complicated, most people are concerned about inequality, especially that it might be taken to the max, and yet they also like the idea of creating a central government-like power to deal with it.

I’ve certainly seen this in concerns about future rampaging robots (= “AI risk”). Many, perhaps most, people express concerns that one AI could take over everything, and many also like the “solution” of one good AI taking over everything.

I recently came across similar reasoning by Friedrich Engels back in 1844, in his Outlines of a Critique of Political Economy. Having seen the early industrial revolution, not understanding it well, but fearing where it might lead, Engels claims that the natural outcome is extreme concentration of power. And his solution is to create a different central power (e.g., communism). Of course while there was some increase in inequality and concentration, it wasn’t remotely as bad as Engels feared, except where his words helped to inspire the creation of such concentration. Here is Engels:

Thus, competition sets capital against capital, labour against labour, landed property against landed property; and likewise each of these elements against the other two. In the struggle the stronger wins; and in order to predict the outcome of the struggle, we shall have to investigate the strength of the contestants. First of all, labour is weaker than either landed property or capital, for the worker must work to live, whilst the landowner can live on his rent, and the capitalist on his interest, or, if the need arises, on his capital or on capitalised property in land. The result is that only the very barest necessities, the mere means of subsistence, fall to the lot of labour; whilst the largest part of the products is shared between capital and landed property. Moreover, the stronger worker drives the weaker out of the market, just as larger capital drives out smaller capital, and larger landed property drives out smaller landed property. Practice confirms this conclusion. The advantages which the larger manufacturer and merchant enjoy over the smaller, and the big landowner over the owner of a single acre, are well known. The result is that already under ordinary conditions, in accordance with the law of the stronger, large capital and large landed property swallow small capital and small landed property – i.e., centralisation of property. In crises of trade and agriculture, this centralisation proceeds much more rapidly.

In general large property increases much more rapidly than small property, since a much smaller portion is deducted from its proceeds as property-expenses. This law of the centralisation of private property is as immanent in private property as all the others. The middle classes must increasingly disappear until the world is divided into millionaires and paupers, into large landowners and poor farm labourers. All the laws, all the dividing of landed property, all the possible splitting-up of capital, are of no avail: this result must and will come, unless it is anticipated by a total transformation of social conditions, a fusion of opposed interests, an abolition of private property.

Free competition, the keyword of our present-day economists, is an impossibility. Monopoly at least intended to protect the consumer against fraud, even if it could not in fact do so. The abolition of monopoly, however, opens the door wide to fraud. You say that competition carries with it the remedy for fraud, since no one will buy bad articles. But that means that everyone has to be an expert in every article, which is impossible. Hence the necessity for monopoly, which many articles in fact reveal. Pharmacies, etc., must have a monopoly. And the most important article – money – requires a monopoly most of all. Whenever the circulating medium has ceased to be a state monopoly it has invariably produced a trade crisis; and the English economists, Dr. Wade among them, do concede in this case the necessity for monopoly. But monopoly is no protection against counterfeit money. One can take one’s stand on either side of the question: the one is as difficult as the other. Monopoly produces free competition, and the latter, in turn, produces monopoly. Therefore both must fall, and these difficulties must be resolved through the transcendence of the principle which gives rise to them. (more)

How Big Future Change?

The world has seen a lot of very big changes over the last few centuries. Many of these changes seem so large, in fact, that it is hard to see how changes over the next few centuries could be remotely as large. For example, many “big swing” parameters have moved from one extreme to the other, changing by more than half of the total range possible for that parameter. So the only way future changes could be as large in such a parameter is if it completely reversed direction to move back to the opposite extreme.

For example, once only a small percentage of people lived in cities; now more than half do. Once only a few nations were democratic, now more than half are. Once many people were slaves, now there are very few slaves. Once people worked nearly as many hours a week as possible, now they work less than half of their waking hours. Once nations were frequently at war, now war is rare. Once lifespans were near 30 years, now they are near 80, and some say 120 is the max possible. Once few people could read, now most can. Once genders and races were treated quite unequally, now treatment is more equal than unequal. Once engines and solar cells had low efficiency, now efficiency is half or more of the theoretical maximum. And so on.

If these big-swing parameters encompassed most of what we cared about in change, and if it is in fact implausible for such parameters to reverse back to their opposite extremes, then the conclusion seems inescapable: future change must be less than past change.

But pause to ask: how sure can we be that these big swing parameters encompass a large fraction of what matters within what can change? And notice a big selection effect: even when rates of change are constant overall, the particular parameters that happened to change the most in the recent past will in general not be the ones that change the most in the near future. So for those particular parameters, future change will be less than past change, even though overall rates of change stay steady. Maybe we spend so much time focusing on the parameters that have recently changed most, that we forget how many other parameters remain which are available to change in the future.
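
This selection effect is easy to see in a small simulation (my own sketch, with arbitrary step sizes): give many parameters identical random-walk dynamics, select the ones that moved most in a first period, and check how much they move in a second period. The past big movers show only average-sized change going forward, even though nothing about overall change rates has slowed.

```python
# Toy simulation of the selection effect: parameters with identical random-walk
# dynamics, compared on first-period vs. second-period absolute change.
import random

def selection_effect(n_params=1000, period=100, top_k=50, seed=4):
    random.seed(seed)
    first = [abs(sum(random.gauss(0, 1) for _ in range(period))) for _ in range(n_params)]
    second = [abs(sum(random.gauss(0, 1) for _ in range(period))) for _ in range(n_params)]
    top = sorted(range(n_params), key=lambda i: first[i], reverse=True)[:top_k]
    avg_top_first = sum(first[i] for i in top) / top_k
    avg_top_second = sum(second[i] for i in top) / top_k
    avg_all_second = sum(second) / n_params
    return avg_top_first, avg_top_second, avg_all_second

print(selection_effect())  # the top past movers change only a typical amount next period
```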

My book Age of Em might be taken as a demonstration that big future change remains possible. And we might also test this selection effect via a historical analysis. We might, for example, look at parameters that changed the most from the year 500 to the year 1000, at least as people in the year 1000 would have seen them, and then ask if those particular parameters changed more or less during the period from 1000 to 1500. Repeat for many different times and places.

Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two of his posts he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than organizations do today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure, it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is in most need. But you might well trust it to give you returns on your financial investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

[Total output] over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, that Christiano supports to some degree, is that future AIs will be smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.

An Outside View of AI Control

I’ve written much on my skepticism of local AI foom (= intelligence explosion). Recently I said that foom offers the main justification I understand for AI risk efforts now, as well as being the main choice of my Twitter followers in a survey. It was the main argument offered by Eliezer Yudkowsky in our debates here at this blog, by Nick Bostrom in his book Superintelligence, and by Max Tegmark in his recent book Life 3.0 (though he denied so in his reply here).

However, some privately complained to me that I haven’t addressed those with non-foom-based AI concerns. So in this post I’ll consider AI control in the context of a prototypical non-em non-foom mostly-peaceful outside-view AI scenario. In a future post, I’ll try to connect this to specific posts by others on AI risk.

An AI scenario is where software does most all jobs; humans may work for fun, but they add little value. In a non-em scenario, ems are never feasible. As foom scenarios are driven by AI innovations that are very lumpy in time and organization, in non-foom scenarios innovation lumpiness is distributed more like it is in our world. In a mostly-peaceful scenario, peaceful technologies of production matter much more than do technologies of war and theft. And as an outside view guesses that future events are like similar past events, I’ll relate future AI control problems to similar past problems.

Humans Cells In Multicellular Future Minds?

In general, adaptive systems vary along an axis from general to specific. A more general system works better (either directly or after further adaptation) in a wider range of environments, and also with a wider range of other adapting systems. It does this in part via having more useful modularity and abstraction. In contrast, a more specific system adapts to a narrower range of specific environments and other subsystems.

Systems that we humans consciously design tend to be more general, i.e., less context dependent, relative to the “organic” systems that they often replace. For example, compare grid-like city street plans to locally evolved city streets, national retail outlets to locally arising stores and restaurants, traditional to permaculture farms, hotel rooms to private homes, big formal firms to small informal teams, uniforms to individually-chosen clothes, and refactored to un-refactored software. The first entity in each pair tends to more easily scale and to match more environments, while the second in each pair tends to be adapted in more detail to particular local conditions.

Tegmark’s Book of Foom

Max Tegmark says his new book, Life 3.0, is about what happens when life can design not just its software, as humans have done in Life 2.0, but also its hardware:

Life 1.0 (biological stage) evolves its hardware and software
Life 2.0 (cultural stage) evolves its hardware, designs much of its software
Life 3.0 (technological stage): designs its hardware and software ..
Many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book. (29-30)

Actually, it’s not. The book says little about redesigning hardware. While it says interesting things on many topics, its core is on a future “singularity” where AI systems quickly redesign their own software. (A scenario sometimes called “foom”.)

The book starts out with a 19-page fictional “scenario where humans use superintelligence to take over the world.” A small team, apparently seen as unthreatening by the world, somehow knows how to “launch” a “recursive self-improvement” in a system focused on “one particular task: programming AI Systems.” While initially “subhuman”, within five hours it redesigns its software four times and becomes superhuman at its core task, and so “could also teach itself all other human skills.”

After five more hours and redesigns it can make money by doing half of the tasks at Amazon Mechanical Turk acceptably well. And it does this without having access to vast amounts of hardware or to large datasets of previous performance on such tasks. Within three days it can read and write like humans, and create world class animated movies to make more money. Over the next few months it goes on to take over the news media, education, world opinion, and then the world. It could have taken over much faster, except that its human controllers were careful to maintain control. During this time, no other team on Earth is remotely close to being able to do this.


Can Human-Like Software Win?

Many, perhaps most, think it obvious that computer-like systems will eventually be more productive than human-like systems in most all jobs. So they focus on how humans might maintain control, even after this transition. But this eventuality is less obvious than it seems, depending on what exactly one means by “human-like” or “computer-like” systems. Let me explain.

Today the software that sits in human brains is stuck in human brain hardware, while the other kinds of software that we write (or train) sit in the artificial hardware that we make. And this artificial hardware has been improving far more rapidly than has human brain hardware. Partly as a result of this, systems of artificial software and hardware have been improving rapidly compared to human brain systems.

But eventually we will find a way to transfer the software from human brains into artificial hardware. Ems are one way to do this, as a relatively direct port. But other transfer mechanics may be developed.

Once human brain software is in the same sort of artificial computing hardware as all the other software, then the relative productivity of different software categories comes down to a question of quality: which categories of software tend to be more productive on which tasks?

Of course there will be many different variations available within each category, to match to different problems. And the overall productivity of each category will depend both on previous efforts to develop and improve software in that category, and also on previous investments in other systems to match and complement that software. For example, familiar artificial software will gain because we have spent longer working to match it to familiar artificial hardware, while human software will gain from being well matched to complex existing social systems, such as language, firms, law, and government.

People give many arguments for why they expect human-like software to mostly lose this future competition, even when it has access to the same hardware. For example, they say that other software could lack human biases and also scale better, have more reliable memory, communicate better over wider scopes, be easier to understand, have easier meta-control and self-modification, and be based more directly on formal abstract theories of learning, decision, computation, and organization.

Now consider two informal polls I recently gave my twitter followers:

Surprisingly, at least to me, the main reason that people expect human-like software to lose is that they mostly expect whole new categories of software to appear, categories quite different from both the software in the human brain and also all the many kinds of software with which we are now familiar. If it comes down to a contest between human-like and familiar software categories, only a quarter of them expect human-like to lose big.

The reason I find this surprising is that all of the reasons that I’ve seen given for why human-like software could be at a disadvantage seem to apply just as well to familiar categories of software. In addition, a new category must start with the disadvantages of having less previous investment in that category and in matching other systems to it. That is, none of these are reasons to expect imagined new categories of software to beat familiar artificial software, and yet people offer them as reasons to think whole new much more powerful categories will appear and win.

I conclude that people don’t mostly use specific reasons to conclude that human-like software will lose, once it can be moved to artificial hardware. Instead they just have a general belief that the space of possible software is huge and contains many new categories to discover. This just seems to be the generic belief that competition and innovation will eventually produce a lot of change. It’s not that human-like software has any overall competitive disadvantage compared to concrete known competitors; it is at least as likely to have winning descendants as any such competitors. It’s just that our descendants are likely to change a lot as they evolve over time. Which seems to me a very different story than the humans-are-sure-to-lose story we usually hear.
