Hutter on Singularity
Back in July I posted my response to Chalmers’ singularity essay, published in the Journal of Consciousness Studies (JCS), the same journal where his paper appeared. A paper copy of a JCS issue with thirteen responses recently showed up in my mail, though no JCS electronic copy is yet available. [Added 4Mar: it is now here.] Reading through the responses, the best (besides mine) was by Marcus Hutter.
I didn’t learn much new, but compared to the rest, Hutter is relatively savvy on social issues. He isn’t sure if it is possible to be much more intelligent than a human (as opposed to just thinking faster), but he is sure there is lots of room for improvement overall:
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. …
When building AIs or tinkering with our virtual selves, we could try out a lot of different goals. … But ultimately we will lose control, and the AGIs themselves will build further AGIs. … Some aspects of this might be independent of the initial goal structure and predictable. Probably this initial vorld is a society of cooperating and competing agents. There will be competition over limited (computational) resources, and those virtuals who have the goal to acquire them will naturally be more successful. … The successful virtuals will spread (in various ways), the others perish, and soon their society will consist mainly of virtuals whose goal is to compete over resources, where hostility will only be limited if this is in the virtuals’ best interest. For instance, current society has replaced war mostly by economic competition. … This world will likely neither be heaven nor hell for the virtuals. They will “like” to fight over resources, and the winners will “enjoy” it, while the losers will “hate” it. …
In the human world, local conflicts and global war are increasingly replaced by economic competition, which might itself be replaced by even more constructive global collaboration, as long as violators can quickly and effectively (and non-violently?) be eliminated. It is possible that this requires a powerful single (virtual) world government, to give up individual privacy, and to severely limit individual freedom (cf. ant hills or bee hives).
Hutter noted (as have I) that cheap life is valued less:
Unless a global copy protection mechanism is deliberately installed, … copying virtual structures should be as cheap and effortless as it is for software and data today. The only cost is developing the structures in the first place, and the memory to store and the comp to run them. … One consequence … [is] life becoming much more diverse. …
Another consequence should be that life becomes less valuable. … Cheap machines decreased the value of physical labor. … In games, we value our own life and that of our opponents less than real life, … because games can be reset and one can be resurrected. … Why not participate in a dangerous fun activity? … It may be ethically acceptable to freeze, duplicate, slow down, modify (brain experiments), or even kill (oneself or other) AIs at will, if they are abundant and/or backups are available, just as we are used to doing with software. So laws preventing experimentation with intelligences for moral reasons may not emerge.
Hutter also tried to imagine what such a society would look like from outside:
Imagine an inward explosion, where a fixed amount of matter is transformed into increasingly efficient computers until it becomes computronium. The virtual society, like a well-functioning real society, will likely evolve and progress, or at least change. Soon the speed of their affairs will put them beyond the comprehension of outsiders. … After a brief period, intelligent interaction between insiders and outsiders becomes impossible. …
Let us now consider outward explosion, where an increasing amount of matter is transformed into computers of fixed efficiency. … Outsiders will soon get into resource competition with the expanding computer world, and being inferior to the virtual intelligences, probably only have the option to flee. This might work for a while, but soon … escape becomes impossible, ending or converting the outsiders’ existence.
When foragers lived outside of farmer societies, or farmers outside of industrial cities, change was faster on the inside, and the faster change got, the harder it was for outsiders to understand. But there was no sharp boundary past which understanding became “impossible.” While farmers were greedy for more land, and displaced foragers from farmable (or herdable) land quickly in farming-doubling-time terms, industry has been much less expansionary. While industry might eventually displace all farming, farming modes of production can continue to use land for many industry doubling times into an industrial revolution.
Similarly, a new faster economic growth mode might well let old farming and industrial modes of production persist for a great many doubling times of the new mode. If land area is not central to the new mode of production, why expect old land uses to be quickly displaced?
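To put rough numbers on that claim, here is a minimal sketch. The figures are my illustrative assumptions, not from the post or Hutter's paper: an industrial doubling time of roughly fifteen years, and roughly two centuries of overlap during which farming remained a major land use after industrialization began.

```python
# Rough illustration (assumed numbers, not from the post): how many doublings
# of a new, faster growth mode elapse while an older mode still operates
# alongside it.

def doublings_elapsed(overlap_years: float, doubling_time_years: float) -> float:
    """How many doubling times of a growth mode fit into an overlap period."""
    return overlap_years / doubling_time_years

# Assumed figures, loosely in the spirit of growth-mode doubling-time estimates:
# - industrial economy doubling time: ~15 years
# - farming stayed a major land use for ~200 years after industrialization began
industry_doubling_years = 15.0
farming_overlap_years = 200.0

n = doublings_elapsed(farming_overlap_years, industry_doubling_years)
print(f"Industrial doublings while farming persisted: {n:.0f}")
# -> ~13 doublings: the old mode coexists with the new one for many of the
#    new mode's doubling times, as the text argues.
```

Under these (debatable) inputs the old mode survives over a dozen new-mode doublings, which is the sense in which a faster growth mode need not quickly displace older land uses.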