Here Be Dragons

In his new book Here Be Dragons: Science, Technology and the Future of Humanity, Olle Haggstrom mostly discusses abstract and philosophical issues. But at one point in the book he engages with the more specific forecasts I discuss in my upcoming book. So let me quote him and offer a few responses:

Once successful whole-brain emulation has been accomplished, it might not be long before it becomes widely available and widely used. This brings us to question (4) – what will society be like when uploading is widely available? Most advocates of an uploaded posthuman existence, such as Kurzweil and Goertzel, point at the unlimited possibilities for an unimaginably (to us) rich and wonderful life in ditto virtual realities. One researcher who stands out from the rest, in actually applying economic theory and social science to attempt to sketch how a society of uploads will turn out, is the American economist Robin Hanson, beginning in a 1994 paper, continuing with a series of posts on his extraordinary blog Overcoming Bias, and summarizing his findings (so far) in a chapter in Intelligence Unbound and in an upcoming book.

Two basic assumptions for Hanson’s social theory of uploads are
(i) that whole-brain emulation is achieved mostly by brute force, with relatively little scientific understanding of how thoughts and other high-level phenomena supervene on the lower-level processes that are simulated, and
(ii) that current trends of hardware costs decreasing at a fast exponential rate will continue (if not indefinitely then at least far into the era he describes).

Actually, I just need to assume that at some point the hardware cost is low enough to make uploads substantially cheaper than human workers. I don’t need to make assumptions about rates at which hardware costs fall.

Assumption (i) prevents us from boosting the emulated minds to superhuman intelligence levels, other than in terms of speed, by transferring them to faster hardware. Assumption (ii) opens up the possibility of quickly populating the world with billions and then trillions of uploaded minds, which is in fact what Hanson predicts will happen. ..

Actually, population increases quickly mainly because factories can crank out an amount of hardware equal to their own economic value in a short time – months or less.
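To make that growth arithmetic concrete, here is a minimal sketch. The payback time and horizon below are illustrative assumptions, not figures from the post or book: if a factory can produce hardware worth its own economic value every few months, and that output is reinvested, the hardware stock compounds at roughly that pace.

```python
def doublings(payback_months: float, horizon_months: float) -> float:
    """Doublings of the hardware stock over the horizon, assuming all
    factory output is reinvested and the payback time stays fixed."""
    return horizon_months / payback_months

# Illustrative numbers: a 3-month payback compounds to 2**8 = 256x
# growth in the hardware stock over two years.
growth = 2 ** doublings(payback_months=3, horizon_months=24)
print(growth)  # 256.0
```

This is of course a toy model: it ignores construction lags and any rise in input prices, but it shows why payback times of months imply population growth far faster than anything in human history.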

Decreases in hardware costs will push down wages. .. This will send society into the classical Malthusian trap, in which population will grow until it is hit by starvation (uploaded minds will not need food, of course, but they will need things like energy, CPU time and disk space). ..

There are many exotica in Hanson’s future. One is that uploads can run on different hardware and thus at different speeds, depending on circumstances. .. Even more exotic is the idea that most work will be done by short-lived so-called spurs, copied from a template upload to work for, say, a few hours and then be terminated (i.e., die). .. Will they not revolt? The question has been asked, but Hanson maintains that “when life is cheap, death is cheap.”

First, spurs could retire to a much slower speed instead of ending. Second, just before an em considers whether to split off a spur copy for a task, that em can ask itself if it would be willing to do the assigned task were it to find itself, a few seconds later, to be the spur. Ems should quickly learn to reliably estimate their own willingness, so they just won’t split off spurs when they estimate a high chance that the spur would become troublesome. Maybe today we find it hard to estimate such things, but they’d know their world well, so it would be an easy question for them. So I just can’t see spur rebellion as a big practical problem, any more than we have a big problem with planning to go to work for the day and then suddenly going to the movies instead.

The future outlined in Hanson’s theory of uploaded minds may seem dystopian .. but Hanson does not accept this label, and his main retorts seem to be twofold. First, population numbers will be huge, which is good if we accept that the value of a future should be measured .. by the total amount of well-being, which in a huge population can be very large even if each individual has only a modest positive level of well-being. Second, the trillions of short-lived uploaded minds working hard for their subsistence right near starvation level can be made to enjoy themselves, e.g., by cheap artificial stimulation of their pleasure center.

I don’t think I’ve ever talked about “cheap artificial stimulation of their pleasure center.” I instead say that most ems work and leisure in virtual worlds of spectacular quality, and that ems need never experience hunger, disease, or intense pain, nor ever see, hear, feel, or taste grime or anything ugly or disgusting. Yes, they’d work most of the time, but their jobs would be mentally challenging, they’d be selected for being very good at their jobs, and people can find deep fulfillment in such modes. We are very culturally plastic, and em culture would promote finding value and fulfillment in typical em lives. In addition, I estimate that most humans who have ever lived have had lives worth living, in part because of this cultural plasticity.

Then there’s the issue of whether and to what extent we should view Hanson’s analysis as a trustworthy prediction of what will actually happen. A healthy load of skepticism seems appropriate. .. It also seems that he works so far outside of the comfort zones of where economic theory has been tested empirically, and uses so many explicit and implicit assumptions that are open to questioning, that his scenarios need to be taken with a grain of salt (or a full bushel).

You could say this about any theoretical analysis of anything not yet seen. All theory requires you to make assumptions, and all assumptions are open to questioning. Perhaps my case is worse than others, but the above certainly doesn’t show that to be the case.

One obvious issue to consider is whether society following a breakthrough in the technology will be better or worse than society without such a breakthrough. The utopias hinted at by, e.g., Kurzweil and Goertzel seem pretty good, whereas Hanson’s Malthusian scenario looks rather less appealing.

But Kurzweil and Goertzel offer inspiring visions, not hard-headed social science analysis. Of course that will sound better.

  • lump1

    I’m not optimistic about Haggstrom’s book based on these passages. Rather than engaging with the ideas he glosses, he just seems to blurt out some kneejerk reactions. A college freshman doing a report on the topic would likely say the same: Kurzweil is optimistic; Hanson’s thing seems dystopian. Sucks to be a spur, omg, why don’t they revolt? Anyway, this is all totally speculative, because none of this has actually happened, so who’s to say?!? BTW, I totally wrote this the morning it was due.

  • You could say this about any theoretical analysis of anything not yet seen.

    Mightn’t there be reasons for rejecting a theory based on “weak clues”? [I think of a legal analogy: evidence is excluded if it is excessively weak.]

    [I’m not completely sure I have the correct phrase. Is it weak clues? Doesn’t quite sound right.]

    • You could argue that pro arguments are few and weak. But then you should say which ones you think are weak and why.

  • Fleshy506

    So your scenario depends on the assumption that ems as smart as the finest human scientists could spend the equivalent of millions, billions, trillions of person-hours on the problem of finding more cost-effective ways of using the hardware they’re running on, and still not come up with anything radically different from just using it to run lots and lots of ems?

    • lump1

      In the scenario, there isn’t some tyrannical CPU allocator that dictates what the world’s computers are to run. Each computer presumably has an owner, and that owner runs whatever brings the highest returns. Hosting ems that pay “rent” will probably be lucrative, since they have earning power and a will to live (consume cycles). Other routines might be lucrative too – like physics simulations, game servers, etc.

      I imagine the computer owner will have decisions to make that are similar to those of a land owner. Do I use my land for agriculture, industry, human housing, or recreation? It depends on which demands are unmet at the time. For computers, the demand for cycles from ems would not reach a saturation point: No matter how many computers there happen to be, on any *new* computer that’s “vacant”, one could instantaneously create ems (by copying files) that would pay well to fill that vacancy – probably well enough to justify the cost of building and maintaining the computer. That’s how I imagine that global computing would be dominated by running ems.

      • Fleshy506

        Right. I’m just thinking that entities that own large quantities of computing power would want to devote some of it to figuring out how to use the rest of it most profitably. Presumably, as soon as someone discovers a more efficient way of doing artificial general intelligence than running ems, market forces would cause that alternative approach to displace ems as the dominant way of using computers to do knowledge work.

        This could happen in an incremental fashion, with AGI systems getting less and less similar to human minds over time. The question is, How long could the em economy develop along the lines Hanson predicts before ems are mostly displaced by something completely different? That would depend on how hard it is to fill in the gap between “scientific knowledge necessary to build ems cheaply” and “scientific knowledge necessary to build altogether different and better AGI technology.”

    • I assume only that there will be a substantial era I can describe before research that develops radically different things makes my forecasts useless. It requires a quite radical increase in innovation rates for this not to happen.

      • Fleshy506

        Okay. I guess that’s conceivable. It seems implausible, but I don’t have any relevant expertise with which to back up my gut reaction.

      • tio

        Do you in the book narrow down “substantial era”? Is it a matter of days? Weeks? Months? Years?

      • Less than two years of real clock time, but subjectively that could be millennia for typical speed ems.

      • Joshua Brulé

        If I’m (un?)lucky enough to still be alive in meatspace at that point, that’s going to be a couple of very weird years…

      • Sam

        We could ban ems and AGI. A global treaty is doable.

  • It seems a ritual among academicians to lavishly praise a writer as prelude to dismissing him.

  • zarzuelazen

    It’s always interesting and worthwhile to see what analysis about the future an extraordinarily smart and knowledgeable person such as Hanson can produce.

    I don’t think it’ll happen, since I come down on the side of the FOOMers and the grand theory of intelligence. I now believe AGI will win the race with Ems by a wide margin…so my prediction is that AGI comes first and the Em scenario isn’t going to happen.

    I admit I don’t have much technical skill, but there’s not a man alive today that has my philosophic skill 😉 Personally, I am in no doubt whatsoever that AGI has been solved at the in-principle level.

    The problem of logical omniscience in Bayesian epistemology is solved by defining a measure of ‘coherence’ to deal with the meta-uncertainty of logical reasoning. So ‘coherence’ is more fundamental than ‘probability’. Similarly, the correct measure of value is a ‘complexity’ measure, not ‘utility’.

    My prediction: AGI will not take more than 5 years from the date of this post.

    • Care to offer betting odds on your five year prediction?

      • zarzuelazen

        Ok, I must admit to a few slight doubts 😉
        As a believer in FOOM, though, it wouldn’t make sense for me to bet, since extreme rewards or penalties would quickly accrue to me if I were correct.
        For the 5-year time frame I’m saying AGI is a 50% chance (1:1 odds); for a 10-year time frame I’m saying it’s a 90% chance (1:9 odds).
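The odds arithmetic above can be made explicit with a quick conversion (a hypothetical helper written for illustration, not anything from the thread): a probability p of an event corresponds to odds of (1 − p) : p against it.

```python
from fractions import Fraction

def odds_against(p: float) -> tuple[int, int]:
    """Convert a probability p of an event into 'against:for' betting odds."""
    f = Fraction(p).limit_denominator(100)  # e.g. 0.9 -> 9/10
    return (f.denominator - f.numerator, f.numerator)

print(odds_against(0.5))  # (1, 1): a 50% chance is even odds
print(odds_against(0.9))  # (1, 9): a 90% chance is 1:9 against
```

So the commenter's stated odds do match his stated probabilities.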

      • zarzuelazen

        Further evidence for AGI-soon; the views of Jurgen Schmidhuber, the world’s leading AI researcher today and the father of the most widely used types of ‘deep learning’ and neural networks:

        “Many think that intelligence is this awesome, infinitely complex thing. I think it is just the product of a few principles that will be considered very simple in hindsight, so simple that even kids will be able to understand and build intelligent, continually learning, more and more general problem solvers. Partial justification of this belief: (a) there already exist blueprints of universal problem solvers developed in my lab, in the new millennium, which are theoretically optimal in some abstract sense although they consist of just a few formulas, (b) the principles of our less universal, but still rather general, very practical, program-learning recurrent neural networks can also be described by just a few lines of pseudo-code.”

        (Source: ‘I am Jurgen Schmidhuber, AMA’, 2015)

        The reason you see so many really complicated ideas posted in places like ‘Less Wrong’ is because those people are ‘signaling smartness’, not solving the AGI problems. All that complicated stuff is just covering up the fact that they don’t have a clue 😀

      • zarzuelazen

        The recent breakthrough of Google’s Go-playing AI, which has beaten the European Go champion, is further evidence for AGI soon.
        Although the machine relies on some Go-specific tricks (for planning), there are also general-purpose techniques being deployed that would be relevant to AGI.
        First, the architecture consisted of a ‘3-level split’ in terms of inference at 3 different levels of abstraction. This confirms to me that just 3 levels of abstraction in inference are all that’s needed for AGI!
        1st level: Domain-specific knowledge (short-term tactics). Used a database of 30 million Go moves to train a ‘value network’ of best moves. General-purpose deep learning.
        2nd level: Learning (medium-term tactics). A ‘policy network’ trained to select moves. General-purpose deep learning.
        3rd level: Planning (long-term strategy). Used a Go-specific trick here (Monte Carlo Tree Search) to select strategy. Not general-purpose, so it’s the weak link in the architecture (but still very effective for Go playing).
        See the 3 levels? It confirms that, in general terms, all AGIs will operate with a hierarchical architecture that uses these 3 levels:
        1st level: Domain-knowledge unit (values)
        2nd level: Pattern-recognition unit (policies)
        3rd level: Planning/Concept-learning unit (signals)
        Neural networks (deep learning) can be said to have ‘solved’ levels 1 and 2 (at least for problems where there’s lots of training data).
        So we are really only awaiting a solution to the 3rd level (the planning/concept-learning unit), for which no general solution has yet been found.
        Everything comes in 3s all the way down. The key to everything is threeness!

      • zarzuelazen

        I can see a direct match between the 3 levels of AlphaGo, 3 types of inference and 3 general properties of the universe! 😀

        Mathematics >>> Heuristic search
        Physics >>> Policy network
        Mind >>> Value network

        Mathematics can be considered the ‘heuristic search’ mechanism of reality; it is the means through which reality explores all branches of ‘possible worlds’.

        Physics can be considered a ‘policy network’ which decides what possible worlds should become ‘real’ (i.e. have their pathways explored in depth).

        Finally, mind can be considered a ‘value network’; it is the ‘evaluation function’ of reality, which prunes the tree of possible worlds to the more limited set of ‘real worlds’ – it ‘terminates’ the search function.

        Direct match to 3 optimal types of inference for each of the 3 functions!

        Search function >>> Categorization >>> Coherence values (complex numbers)

        Policy network >>> Pattern recognition >>> Bayesian probability values (0-1)

        Value network >>> Symbolic reasoning >>> Boolean algebra (T/F values)

  • Robin, I apologize for the misrepresentations of your arguments. They were unintentional but sloppy on a level way below the standard I aspire to. The statement about assuming exponential decay of hardware costs is an embarrassing mistake. Regarding artificial stimulation of pleasure centers, I must simply have cited from my own mutated memory.

    • Ely Spears

      Don’t be so hard on yourself. What you’ve written is an impressively concise summary of a point of view that is very hard to grasp even for very intelligent people. There are very few people who wouldn’t have made vastly more serious mistakes. And by writing it, even with some imperfections, you’ve further stimulated follow-on discussion allowing people to be more in alignment about these topics, what has been written about them, who has said what, and so on.

    • Books are long things – I will not criticize a book for a few errors until I can do a book with much fewer errors.

      • Hopefully, when you “do a book”, you won’t include language like “much fewer”.

  • These discussions are so funny. We don’t know the difference between genius brains and brains in comas, or sane brains and insane brains, but one sure way to get the latter is to separate the brain from sensory inputs, or deprive it of a nurturing human upbringing. You folks ought to read Pinker’s “The Blank Slate”, but it would spoil all the fun.

  • Pingback: The Word From The Dark Side, January 22nd, 2016 | SovietMen

  • Christopher Galias

    Is it worth reading in general?