Hutter on Singularity

Back in July I posted my response to Chalmers’ singularity essay, to appear in the Journal of Consciousness Studies (JCS), the same journal that published his paper. A paper copy of a JCS issue with thirteen responses recently showed up in my mail, though no JCS electronic copy is yet available. [Added 4Mar: it is now here.] Reading through the responses, the best (besides mine) was by Marcus Hutter.

I didn’t learn much new, but compared to the rest, Hutter is relatively savvy on social issues. He isn’t sure if it is possible to be much more intelligent than a human (as opposed to just thinking faster), but he is sure there is lots of room for improvement overall:

The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. …

When building AIs or tinkering with our virtual selves, we could try out a lot of different goals. … But ultimately we will lose control, and the AGIs themselves will build further AGIs. … Some aspects of this might be independent of the initial goal structure and predictable. Probably this initial vorld is a society of cooperating and competing agents. There will be competition over limited (computational) resources, and those virtuals who have the goal to acquire them will naturally be more successful. … The successful virtuals will spread (in various ways), the others perish, and soon their society will consist mainly of virtuals whose goal is to compete over resources, where hostility will only be limited if this is in the virtuals’ best interest. For instance, current society has replaced war mostly by economic competition. … This world will likely neither be heaven nor hell for the virtuals. They will “like” to fight over resources, and the winners will “enjoy” it, while the losers will “hate” it. …

In the human world, local conflicts and global war are increasingly replaced by economic competition, which might itself be replaced by even more constructive global collaboration, as long as violators can quickly and effectively (and non-violently?) be eliminated. It is possible that this requires a powerful single (virtual) world government, to give up individual privacy, and to severely limit individual freedom (cf. ant hills or bee hives).

Hutter noted (as have I) that cheap life is valued less:

Unless a global copy protection mechanism is deliberately installed, … copying virtual structures should be as cheap and effortless as it is for software and data today. The only cost is developing the structures in the first place, and the memory to store and the comp to run them. … One consequence … [is] life becoming much more diverse. …

Another consequence should be that life becomes less valuable. … Cheap machines decreased the value of physical labor. … In games, we value our own life and that of our opponents less than real life, … because games can be reset and one can be resurrected. … Why not participate in a dangerous fun activity. … It may be ethically acceptable to freeze, duplicate, slow-down, modify (brain experiments), or even kill (oneself or other) AIs at will, if they are abundant and/or backups are available, just what we are used to doing with software. So laws preventing experimentation with intelligences for moral reasons may not emerge.

Hutter also tried to imagine what such a society would look like from outside:

Imagine an inward explosion, where a fixed amount of matter is transformed into increasingly efficient computers until it becomes computronium. The virtual society like a well-functioning real society will likely evolve and progress, or at least change. Soon the speed of their affairs will make them beyond comprehension for the outsiders. … After a brief period, intelligent interaction between insiders and outsiders becomes impossible. …

Let us now consider outward explosion, where an increasing amount of matter is transformed into computers of fixed efficiency. … Outsiders will soon get into resource competition with the expanding computer world, and being inferior to the virtual intelligences, probably only have the option to flee. This might work for a while, but soon … escape becomes impossible, ending or converting the outsiders’ existence.

When foragers lived outside farmer societies, or farmers outside industrial cities, change was faster on the inside, and the faster change got, the harder it was for outsiders to understand. But there was no sharp boundary at which understanding became “impossible.” While farmers were greedy for more land, and displaced foragers on farmable (or herdable) land quickly as measured in farming doubling times, industry has been much less expansionary. While industry might eventually displace all farming, farming modes of production can continue to use land for many industry doubling times into an industrial revolution.

Similarly, a new faster economic growth mode might well continue to let old farming and industrial modes of production continue for a great many doubling times of the new mode. If land area is not central to the new mode of production, why expect old land uses to be quickly displaced?

  • Michael Vassar

    I would tend to expect energy collection and dissipation and matter reorganization to be central to new modes of production. Not land per-se, but the physics that we summarize as ‘land’.

  • Carl Shulman

    a new faster economic growth mode might well continue to let old farming and industrial modes of production continue for a great many doubling times of the new mode.

    This claim seems bizarre. If doubling times of the new mode are a matter of weeks, as you claim, or less, then this is a very short period in absolute time. If a human is “displaced” or killed 52 weeks after the creation of efficient brain emulations in your scenario, should she be comforted that the time seemed longer to the emulations, as measured in economic doublings?


    • billswift

      Not necessarily, if the new mode uses different, or at least mostly different, resources. Agriculture, for example, uses a lot of land area; industry also uses land, but much less relative to its productivity, and industrial use of area mostly falls as its efficiency and productivity go up.

  • Preferred Anonymous

    In short, I agree with Hanson. It’s plainly evident that there is no good reason that robots will PHYSICALLY displace humans, except in our own minds. Military-created AI is the largest threat, because those are intended to kill, but not much else. They do not displace anything (other than other military weapons).

    • lemmy caution

      “It’s plainly evident that there is no good reason that robots will PHYSICALLY displace humans”

      If the AI doesn’t need humans and can physically displace humans, it will physically displace humans. Assuming that AIs use far fewer resources and are far more productive than humans, they can do this by pure economics alone.

      • Preferred Anonymous

        You seem to be getting can and will mixed up.

        They are not the same.

        (Also, to be able to physically displace something, you must be able to physically manifest, something that is only possible if you are physical. Computer AIs are not physical; they are very nearly metaphysical.)

        As for economics…no.

        A) Farms haven’t been replaced by cities (this should tell you something)

        B) The conclusion that economics dictates physical presence does not follow. It is frankly absurd.

      • lemmy caution

        Farms have not been replaced by factories because factory workers and owners like to eat food. As the value of land for factories goes up, the value of land for farming will at least stay at the level where sufficient food is produced for the factory workers and owners.

        AI does not eat food. The rise of AI will decouple the value of food from the value created by the vast majority of thinking agents in society. This can easily lead to the cost of food rising to astronomical levels and humans starving. Think of it from the AIs’ perspective: if the same resources could support either a million AIs or one human, wouldn’t it be immoral to give the resources to the human?

  • Brandon Reinhart

    I came here to post what Vassar essentially did: that land use as outsiders consider it will not be the thing of value, but the surface area of the planet – or the intervening space above – will be valuable for free energy collection. Or the configuration of the planet, etc, presumably with externalities harmful to the outsiders.

  • Robin Hanson

    Michael, lemmy, and Brandon, the question is at what level of development does paving the Earth for energy collection become attractive. It seems to me that the economy must grow by many orders of magnitude before that would make much sense.

    Carl, the number of economic doublings between now and some future time seems to me a good proxy for the difficulty of controlling such future times from our current time. The more such doublings, the harder such a future is to control. You shouldn’t expect to be able to control the future forever from today.

    • Carl Shulman

      That’s nonresponsive to the Hutter quote, which is talking about “soon” from the perspective of the humans outside, for whom a year is not a long time before they are overrun.

      When you say:

      why expect old land uses to be quickly displaced?

      following the Hutter quote you convey the impression that you are actually contesting Hutter’s claim, rather than changing the subject.

    • Robin Hanson

      Hutter’s “soon” is ambiguous. So rather than argue about what he meant, it makes more sense to say what we can about better-defined timescales. Once we can speed up or slow down human lives by many orders of magnitude, it makes less sense to focus on an ordinary human lifetime as the main timescale worth considering.

      • Carl Shulman

        There are sensible purposes for which, when talking about members of group A being “ended or converted,” it makes sense to look at the timescale of their lifetimes; e.g., current people may be concerned for various reasons about how much time they have left, given human-level AI, or want to know how dangerous it will be to them personally.

        For what purpose is the “inside” timescale a useful measure for the remaining lifetime of the terrestrial “outside”?

    • billswift

      For beings not tied to the biosphere, making solar power satellites out of lunar resources makes much more economic sense than “paving the Earth”. Eventually, all of the planets will probably be broken up for “computronium”, but that is the distant future and will likely not be determined by how AI begins.

      • daedalus2u

        Except there are economies of scale and minimum costs to achieve lunar derived power satellites. If property is still controlled by individuals, those individuals will maximize their economic gain via monopoly control of access to what their property produces.

        An individual AI might not be able to afford a power satellite, but that AI might be able to afford 0.1 hectare of land and to cover it with photovoltaics. That would produce ~250,000 kWh per year.

        Wheat yield is ~2,243 kg/hectare. A kg of wheat has ~15,000 kJ, and a 2,000-calorie-a-day diet is ~8,400 kJ per day, so a year of food is roughly 0.1 hectare’s worth of wheat. Much of the US receives an annual average of 4-5 kWh/m² per day of sunlight, and solar photovoltaics are ~15% efficient. Since 0.1 hectare is 1,000 m², a landowner can grow one year’s worth of food for a human, or produce ~250,000 kWh.

        If AIs can bid up the price of power such that 250,000 kWh of electricity is worth more than ~225 kg of wheat, then the Earth will be paved over to supply that market.

        Electric power is ~$25/MWh. Wheat is ~$0.26 per kg. So 0.1 hectare of land can produce over $6,000 of electric power per year, or ~$60 worth of wheat.

        Free market economics dictates that the Earth will be paved over.
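        The land-value comparison above is easy to check; here is a minimal sketch, using the insolation, efficiency, yield, and price figures assumed in the comment (none of them measured data):

```python
# Sanity-check of the land-for-solar vs. land-for-wheat comparison.
# All input figures are the comment's assumptions, not measured data.

AREA_M2 = 1_000                # 0.1 hectare
INSOLATION_KWH_M2_DAY = 4.5    # assumed US annual average (comment gives 4-5)
PV_EFFICIENCY = 0.15           # assumed photovoltaic efficiency
PRICE_PER_MWH = 25.0           # assumed electricity price, dollars
WHEAT_YIELD_KG_PER_HA = 2_243  # assumed wheat yield
WHEAT_PRICE_PER_KG = 0.26      # assumed wheat price, dollars

# Annual electricity from 0.1 hectare of photovoltaics
kwh_per_year = AREA_M2 * INSOLATION_KWH_M2_DAY * PV_EFFICIENCY * 365
solar_revenue = kwh_per_year / 1_000 * PRICE_PER_MWH

# Annual wheat output and revenue from the same 0.1 hectare
wheat_kg = WHEAT_YIELD_KG_PER_HA * 0.1
wheat_revenue = wheat_kg * WHEAT_PRICE_PER_KG

print(f"electricity: {kwh_per_year:,.0f} kWh/yr -> ${solar_revenue:,.0f}")
print(f"wheat:       {wheat_kg:,.0f} kg/yr -> ${wheat_revenue:,.0f}")
```

        With these inputs the electricity is worth roughly a hundred times the wheat, which is the point of the comparison.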

      • John

        Daedalus, photovoltaics can be built on land with soil unsuited for wheat. Such land is, in fact, considerably cheaper per unit area, at current market rates. Furthermore, global climate change will likely destroy the agricultural productivity of a lot of present day farmland, driving up prices on the remainder as surviving meat-persons engage in desperate subterfuge, genocide, etc. AIs might just leave the good farmland alone as if it were a nature preserve.

      • daedalus2u

        John, at current market rates, yes. But once AIs are cheaper to produce and operate than humans, AIs will increase in number at a faster rate than humans do. Eventually AIs will dominate the economy and the market place will serve AI wants, not human wants.

        If AIs have a doubling time of a year, then 50 years after the first one there are ~100,000 times more AIs than humans (2^50 ≈ 10^15 AIs, versus ~10^10 humans).

        Why would AIs want nature reserves? They would probably want to cause a global ice age because at low temperatures electronics have longer lives and are more efficient. It is more cost effective to sequence plants, animals and bacteria and just store the DNA sequences. Then the surface area can be used to generate electricity.

        At some point they would want to get rid of O2 in the atmosphere because it corrodes metal and can result in fires. AIs need to use the surface for cooling. To maximize cooling they would want to remove all greenhouse gases from the atmosphere. Get the temperature down and the atmosphere becomes very clear due to the reduction in water vapor. Take the CO2 out and the temperature goes down. That improves solar cell efficiency, increases land area by lowering sea level. Maybe they would use solar satellites to lower surface temperatures still more.

        Maybe they would give humans a few decades notice, maybe not. Humans don’t care about climate change. Why would AIs care when humans don’t?

    • Richard Hollerith

      >the number of economic doublings between now and some future time seems to me a good proxy for the difficulty of controlling such future times from our current time.

      That sounds plausible at first, but becomes less plausible when one considers that in some endeavors it is routine to be around 99% confident of the outcome of a causal chain some 10^16 cause-and-effect steps long: in particular, suppose you run a computer operating at one GHz on a computation that lasts 6 months. If the purpose of the computation was to factor a big number, and if the people programming and operating the computer are experienced in such computations, it is possible to be 99% certain that the outputs at the end of the computation 6 months from now really will be factors of the number the computation started with (which in this case is the aspect or property of the outcome over which control was desired).
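      The 10^16 figure is just clock rate times run time, as a quick check shows:

```python
# Order-of-magnitude check: cycles executed by a 1 GHz machine over six months.
cycles_per_second = 1e9
seconds_in_six_months = 6 * 30 * 24 * 3600  # ~1.6e7 seconds
total_cycles = cycles_per_second * seconds_in_six_months
print(f"{total_cycles:.2e}")  # on the order of 10^16
```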

  • Paul Christiano

    The discussion of continued human existence on the periphery seems beside the point from the perspective of most who are concerned with AI risk. Even if machine intelligences and humans peacefully coexist, the machine intelligences inherit most of the universe in this account. If their interaction doesn’t tend to guide development in ways we like, most of the universe’s potential has been squandered.

    Hutter’s vision of a singularity seems impressively reasonable and coherent, overall.

    • Carl Shulman

      The discussion of continued human existence on the periphery seems beside the point from the perspective of most who are concerned with AI risk.

      From a total utilitarian point of view, the fate of humans at the periphery is dwarfed by the risk of lost utility in a Hansonian competitive scenario (wasted resources like the burnt cosmic commons, reallocation of resources from conscious experience to defense and offense, selection for low-value entities, etc.).

      But the empirical claim that the total utilitarian view is the sole perspective of most concerned with AI risk is false. That is a major consideration for many, but even those who take it seriously are rarely blasé about their own survival and that of all their intimates, associates, and contemporaries.

  • Tim Tyler

    Hutter calls Chalmers a “consciousness expert” and says he basically agrees with him. That seems very strange. Most of the other people in the field I have encountered side with Hofstadter and Dennett – and think that Chalmers has been barking up the wrong tree.

    I’m disappointed that Hutter endorsed the daft “singularity” terminology – despite the phenomenon failing to conform to his own definition of the term.

  • Carl Shulman

    Chalmers’ consciousness claims in that article are functionalist ones: that AIs functionally similar to humans will also be conscious, and that uploading can be a “good enough” form of survival or continuity of identity. One can agree with all that from a physicalist perspective, without committing oneself to the dualism that Chalmers advocates elsewhere.

    • Tim Tyler

      Saying that Chalmers “has extensively written about it in general” and “I essentially agree with his assessments” is suggestive of a broader agreement. I would council exercising caution if broadly agreeing with Chalmers. The “singularity” article might not have been too bad – but the reams of previous material about zombies and materialism were pretty bonkers – and are fairly widely derided by those involved with machine intelligence.

      • Carl Shulman

        Sure. Also, the word is “counsel,” not “council.”

  • Brooks

    Now this blog is called Overcoming Bias, and I feel that there is one piece of bias at work here that needs to be overcome: it’s easy to make competition the most important aspect of evolution when one lives in a society that has an almost pathological obsession with competition. But competition isn’t the only force at work in evolution. Symbiosis and cooperation are equally important. What a technological singularity would look like is, by its very definition, unpredictable. AIs may just as well turn into electronic Buddhas as they might turn into economic resource warlords.

    • Poelmo


      If humans are stupid enough to create AIs programmed to desire cutthroat capitalism, then it is the humans, not the AIs, who are responsible for the destruction of worlds. I’m hoping AIs will be so smart they’ll quickly figure out that “he who dies with the most toys, wins” is merely a self-defeating illusion created by idiotic humans who have the accumulation of wealth in order to attract females hardwired into their DNA.

      • Wonks Anonymous

        Eliezer Yudkowsky has already written about how hoping that AIs are “smart enough to realize my ethics are correct” is foolish. AIs will do what they are programmed to do.

        This blog has also discussed co-operation (Hanson tends to use the term “coordination”) in evolution and for ems. And it seems foolish to attribute a focus on competition to being American. The most prominent popular text with that focus is Richard Dawkins’ “The Selfish Gene”, though he remarked that it could have just as easily been called “The Cooperative Gene”.