Back in July I posted my response to Chalmers’ singularity essay, published in the Journal of Consciousness Studies (JCS), the same journal where his paper appeared. A paper copy of a JCS issue with thirteen responses recently showed up in my mail, though no JCS electronic copy is yet available. [Added 4Mar: it is now here.] Reading through the responses, the best (besides mine) was by Marcus Hutter.
I didn’t learn much new, but compared to the rest, Hutter is relatively savvy on social issues. He isn’t sure if it is possible to be much more intelligent than a human (as opposed to just thinking faster), but he is sure there is lots of room for improvement overall:
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. …
When building AIs or tinkering with our virtual selves, we could try out a lot of different goals. … But ultimately we will lose control, and the AGIs themselves will build further AGIs. … Some aspects of this might be independent of the initial goal structure and predictable. Probably this initial vorld is a society of cooperating and competing agents. There will be competition over limited (computational) resources, and those virtuals who have the goal to acquire them will naturally be more successful. … The successful virtuals will spread (in various ways), the others perish, and soon their society will consist mainly of virtuals whose goal is to compete over resources, where hostility will only be limited if this is in the virtuals’ best interest. For instance, current society has replaced war mostly by economic competition. … This world will likely neither be heaven nor hell for the virtuals. They will “like” to fight over resources, and the winners will “enjoy” it, while the losers will “hate” it. …
In the human world, local conflicts and global war are increasingly replaced by economic competition, which might itself be replaced by even more constructive global collaboration, as long as violators can quickly and effectively (and non-violently?) be eliminated. It is possible that this requires a powerful single (virtual) world government, to give up individual privacy, and to severely limit individual freedom (cf. ant hills or bee hives).
Hutter noted (as have I) that cheap life is valued less:
Unless a global copy protection mechanism is deliberately installed, … copying virtual structures should be as cheap and effortless as it is for software and data today. The only cost is developing the structures in the first place, and the memory to store and the comp to run them. … One consequence … [is] life becoming much more diverse. …
Another consequence should be that life becomes less valuable. … Cheap machines decreased the value of physical labor. … In games, we value our own life and that of our opponents less than real life, … because games can be reset and one can be resurrected. … Why not participate in a dangerous fun activity. … It may be ethically acceptable to freeze, duplicate, slow-down, modify (brain experiments), or even kill (oneself or other) AIs at will, if they are abundant and/or backups are available, just what we are used to doing with software. So laws preventing experimentation with intelligences for moral reasons may not emerge.
Hutter also tried to imagine what such a society would look like from outside:
Imagine an inward explosion, where a fixed amount of matter is transformed into increasingly efficient computers until it becomes computronium. The virtual society, like a well-functioning real society, will likely evolve and progress, or at least change. Soon the speed of their affairs will put them beyond the comprehension of outsiders. … After a brief period, intelligent interaction between insiders and outsiders becomes impossible. …
Let us now consider outward explosion, where an increasing amount of matter is transformed into computers of fixed efficiency. … Outsiders will soon get into resource competition with the expanding computer world, and being inferior to the virtual intelligences, probably only have the option to flee. This might work for a while, but soon … escape becomes impossible, ending or converting the outsiders’ existence.
When foragers were outside of farmer societies, or farmers outside of industrial cities, change was faster on the inside, and the faster change got, the harder it was for outsiders to understand. But there was no sharp boundary past which understanding became “impossible.” While farmers were greedy for more land, and quickly displaced foragers on farmable (or herdable) land, as measured in farming doubling times, industry has been much less expansionary. While industry might eventually displace all farming, farming modes of production can continue to use land for many industry doubling times into an industrial revolution.
Similarly, a new faster economic growth mode might well let old farming and industrial modes of production persist for a great many doubling times of the new mode. If land area is not central to the new mode of production, why expect old land uses to be quickly displaced?
John, at current market rates, yes. But once AIs are cheaper to produce and operate than humans, AIs will increase in number at a faster rate than humans do. Eventually AIs will dominate the economy, and the marketplace will serve AI wants, not human wants.
If AIs have a doubling time of a year, then 50 years after the first one there are ~100,000 times more AIs than humans.
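That figure checks out as a quick sketch, assuming growth from a single initial AI and a human population of roughly seven billion (both assumptions mine, not the commenter’s):

    # Rough check of the "~100,000x" figure: one initial AI,
    # a one-year doubling time, and ~7 billion humans assumed.
    ais = 2 ** 50        # about 1.1e15 AIs after 50 doublings
    humans = 7e9         # rough world population
    print(ais / humans)  # about 160,000 -- on the order of 100,000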
Why would AIs want nature reserves? They would probably want to cause a global ice age, because at low temperatures electronics last longer and run more efficiently. It is more cost-effective to sequence plants, animals, and bacteria and just store the DNA sequences. Then the surface area can be used to generate electricity.
At some point they would want to get rid of the O2 in the atmosphere, because it corrodes metal and can cause fires. AIs need to use the surface for cooling, and to maximize cooling they would want to remove all greenhouse gases from the atmosphere. Take the CO2 out and the temperature goes down; get the temperature down and the atmosphere becomes very clear, due to the reduction in water vapor. That improves solar cell efficiency and increases land area by lowering sea level. Maybe they would use solar satellites to lower surface temperatures still more.
Maybe they would give humans a few decades’ notice, maybe not. Humans don’t care about climate change; why would AIs care when humans don’t?
Daedalus, photovoltaics can be built on land with soil unsuited for wheat. Such land is, in fact, considerably cheaper per unit area at current market rates. Furthermore, global climate change will likely destroy the agricultural productivity of a lot of present-day farmland, driving up prices on the remainder as surviving meat-persons engage in desperate subterfuge, genocide, etc. AIs might just leave the good farmland alone, as if it were a nature preserve.