Back in July I posted my response to Chalmers' singularity essay, published in the Journal of Consciousness Studies (JCS), the same journal where his paper appeared. A paper copy of a JCS issue with thirteen responses recently showed up in my mail, though no JCS electronic copy is yet available.
John, at current market rates, yes. But once AIs are cheaper to produce and operate than humans, AIs will increase in number at a faster rate than humans do. Eventually AIs will dominate the economy, and the marketplace will serve AI wants, not human wants.
If AIs have a doubling time of a year, then 50 years after the first one there are ~100,000 times more AIs than humans.
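A quick check of that claim, under the assumptions of one AI at the start and a human population of roughly 7 billion (the AI_START and HUMANS figures below are illustrative, not from the comment itself):

```python
# Exponential growth: a single AI doubling once per year for 50 years.
AI_START = 1
HUMANS = 7_000_000_000  # rough world population (assumption)

ais_after_50_years = AI_START * 2 ** 50
ratio = ais_after_50_years / HUMANS
print(f"{ais_after_50_years:.2e} AIs, about {ratio:,.0f}x the human population")
```

With those inputs, 2^50 is about 1.1 × 10^15, roughly 100,000 times a 7-billion human population, matching the comment's estimate.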
Why would AIs want nature reserves? They would probably want to cause a global ice age, because at low temperatures electronics have longer lives and are more efficient. It is more cost-effective to sequence plants, animals, and bacteria and just store the DNA sequences. Then the surface area can be used to generate electricity.
At some point they would want to get rid of the O2 in the atmosphere, because it corrodes metal and can cause fires. AIs need to use the surface for cooling. To maximize cooling they would want to remove all greenhouse gases from the atmosphere. Get the temperature down and the atmosphere becomes very clear due to the reduction in water vapor; take the CO2 out and the temperature goes down further. That improves solar cell efficiency and increases land area by lowering sea level. Maybe they would use solar satellites to lower surface temperatures still more.
Maybe they would give humans a few decades notice, maybe not. Humans don't care about climate change. Why would AIs care when humans don't?
Daedalus, photovoltaics can be built on land with soil unsuited for wheat. Such land is, in fact, considerably cheaper per unit area, at current market rates. Furthermore, global climate change will likely destroy the agricultural productivity of a lot of present day farmland, driving up prices on the remainder as surviving meat-persons engage in desperate subterfuge, genocide, etc. AIs might just leave the good farmland alone as if it were a nature preserve.
>the number of economic doublings between now and some future time seems to me a good proxy for the difficulty of controlling such future times from our current time.
That sounds plausible at first, but becomes less plausible when one considers that in some endeavors it is routine to be around 99% confident of the outcome of a causal chain some 10^16 cause-and-effect steps long. In particular, suppose you run a computer operating at 1 GHz on a computation that lasts 6 months. If the purpose of the computation is to factor a big number, and if the people programming and operating the computer are experienced in such computations, it is possible to be 99% certain that the outputs at the end of the computation 6 months from now really will be factors of the number the computation started with (which in this case is the aspect of the outcome over which control was desired).
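The 10^16 figure follows directly from the stated rate and duration (a sketch, assuming one operation per clock cycle and 30-day months):

```python
# One operation per clock cycle at 1 GHz, sustained for 6 months.
OPS_PER_SECOND = 1_000_000_000   # 1 GHz
SECONDS = 6 * 30 * 24 * 3600     # ~6 months of 30-day months

total_ops = OPS_PER_SECOND * SECONDS
print(f"{total_ops:.2e} cause-and-effect steps")  # ~1.6e16
```

So the chain is about 1.6 × 10^16 steps long, on the order claimed.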
Eliezer Yudkowsky has already written about why hoping that AIs are "smart enough to realize my ethics are correct" is foolish. AIs will do what they are programmed to do.
This blog has also discussed co-operation (Hanson tends to use the term "coordination") in evolution and for ems. And it seems foolish to attribute a focus on competition to being American. The most prominent popular text with that focus is Richard Dawkins' "The Selfish Gene", though he remarked that it could have just as easily been called "The Cooperative Gene".
If humans are stupid enough to create AIs programmed to desire cutthroat capitalism, then it is the humans, not the AIs, who are responsible for the destruction of worlds. I'm hoping AIs will be so smart that they'll quickly figure out that "he who dies with the most toys, wins" is merely a self-defeating illusion, created by idiotic humans who have the accumulation of wealth as a way to attract females hardwired into their DNA.
Now this blog is called Overcoming Bias, and I feel that there is one piece of bias at work here that needs to be overcome: it's easy to make competition the most important aspect of evolution when one lives in a society with an almost pathological obsession with competition. But competition isn't the only force at work in evolution; symbiosis and cooperation are equally important. What a technological singularity would look like is, by its very definition, unpredictable. AIs may just as well turn into electronic Buddhas as into economic resource warlords.
LW discussion of the papers in that issue: http://lesswrong.com/lw/aif...
Farms have not been replaced by factories because factory workers and owners like to eat food. As the value of land for factories goes up, the value of land for farming will stay at least high enough that sufficient food is produced for the factory workers and owners.
AIs do not eat food. The rise of AI will decouple the value of food from the value created by the vast majority of thinking agents in society. This can easily lead to the cost of food rising to astronomical levels and humans starving. Think of it from the AIs' perspective: if the same resources could support either a million AIs or one human, wouldn't it be immoral to give the resources to the human?
Sure. Also, the word is "counsel," not "council."
Saying that Chalmers "has extensively written about it in general" and "I essentially agree with his assessments" is suggestive of a broader agreement. I would council exercising caution if broadly agreeing with Chalmers. The "singularity" article might not have been too bad, but the reams of previous material about zombies and materialism were pretty bonkers, and are fairly widely derided by those involved with machine intelligence.
Except there are economies of scale and minimum costs to achieving lunar-derived power satellites. If property is still controlled by individuals, those individuals will maximize their economic gain via monopoly control of access to what their property produces.
An individual AI might not be able to afford a power satellite, but that AI might be able to afford 0.1 hectare of land and to cover it with photovoltaics. That would produce roughly 250,000 kWh per year.
Wheat yield is ~2,243 kg/hectare, so 0.1 hectare yields ~225 kg. A kg of wheat has ~15,000 kJ, and a 2,000-calorie-a-day diet is ~8,400 kJ/day, or about 3.1 million kJ per year, roughly 205 kg of wheat. Much of the US receives an annual average of 4-5 kWh/m² per day of sunlight, and solar photovoltaics are ~15% efficient; 0.1 hectare is 1,000 m². So a landowner can grow about one year's worth of food for a human, or can produce ~250,000 kWh.
If AIs can bid up the price of power such that 250,000 kWh of electricity is worth more than ~225 kg of wheat, then the Earth will be paved over to supply that market.
Electric power is ~$25/MWh and wheat is ~$0.26 per kg, so 0.1 hectare of land can produce over $6,000 of electric power per year, or about $60 worth of wheat.
Free market economics dictates that the Earth will be paved over.
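As a sanity check, the comparison can be recomputed from its stated inputs (a sketch; a mid-range insolation of 4.5 kWh/m²/day is assumed):

```python
# Recompute the wheat-vs-photovoltaics comparison from the quoted inputs.
AREA_M2 = 1_000              # 0.1 hectare
INSOLATION = 4.5             # kWh per m^2 per day (US average, mid-range)
PV_EFFICIENCY = 0.15         # typical photovoltaic efficiency
WHEAT_YIELD_KG_PER_HA = 2_243
WHEAT_PRICE_PER_KG = 0.26    # USD
POWER_PRICE_PER_MWH = 25.0   # USD

# Electricity: area * daily insolation * efficiency, summed over a year.
kwh_per_year = AREA_M2 * INSOLATION * PV_EFFICIENCY * 365
power_revenue = kwh_per_year / 1_000 * POWER_PRICE_PER_MWH

# Wheat: 0.1 hectare's share of the per-hectare yield.
wheat_kg = WHEAT_YIELD_KG_PER_HA * 0.1
wheat_revenue = wheat_kg * WHEAT_PRICE_PER_KG

print(f"electricity: {kwh_per_year:,.0f} kWh/yr -> ${power_revenue:,.0f}")
print(f"wheat: {wheat_kg:.0f} kg/yr -> ${wheat_revenue:.0f}")
```

On these inputs electricity comes out to roughly 246,000 kWh and $6,160 per year against roughly 224 kg and $58 of wheat, a revenue gap of about 100 to 1 in favor of paving.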
Chalmers' consciousness claims in that article are functionalist ones: that AIs functionally similar to humans will also be conscious, and that uploading can be a "good enough" form of survival or continuity of identity. One can agree with all of that from a physicalist perspective, without committing oneself to the dualism that Chalmers advocates elsewhere.
Hutter calls Chalmers a "consciousness expert" and says he basically agrees with him. That seems very strange. Most of the other people in the field I have encountered side with Hofstadter and Dennett - and think that Chalmers has been barking up the wrong tree.
I'm disappointed that Hutter endorsed the daft "singularity" terminology - despite the phenomenon failing to conform to his own definition of the term.
For beings not tied to the biosphere, making solar power satellites out of lunar resources makes much more economic sense than "paving the Earth". Eventually, all of the planets will probably be broken up for "computronium", but that is the distant future and will likely not be determined by how AI begins.
Not necessarily, if the new mode uses different, or at least mostly different, resources. Agriculture, for example, uses a lot of land area; industry also uses land, but much less per unit of productivity, and industrial use of land mostly goes down as its efficiency and productivity go up.
The discussion of continued human existence on the periphery seems beside the point from the perspective of most who are concerned with AI risk.
From a total utilitarian point of view, the fate of humans at the periphery is dwarfed by the risk of lost utility in a Hansonian competitive scenario (wasted resources like the burnt cosmic commons, reallocation of resources from conscious experience to defense and offense, selection for low-value entities, etc.).
But the empirical claim that the total utilitarian view is the sole perspective of most concerned with AI risk is false. It is a major consideration for many, but even those who take it seriously are rarely blasé about their own survival and that of all their intimates, associates, and contemporaries.