Singularity Economics

The June IEEE Spectrum singularity issue includes my “Economics of the Singularity”, which they subtitled “Stuffed into skyscrapers by the billion, brainy bugbots will be the knowledge workers of the future.” It starts boldly (emphasis added):

Our global economy would stupefy a Roman merchant as much as the Roman economy would have confounded a caveman.  But we would be similarly amazed to see the economy that awaits our grandchildren, for I expect it to follow a societal discontinuity more dramatic than those brought on by the agricultural and industrial revolutions.

A bit too boldly actually; the last draft I sent them was more modest (and shorter):

But we might be similarly amazed to see the economy that awaits our grandchildren, for it may follow a societal discontinuity just as dramatic as the agricultural and industrial revolutions.

About my article, Vernor Vinge says:

In his essay, Hanson focuses on the economics of the singularity. As a result, he produces spectacular insights while avoiding much of the distracting weirdness. And yet weirdness necessarily leaks into the latter part of his discussion. 

The editor’s introduction says:

Robin Hanson, an economist, describes a future in which capitalist imperatives and technological capabilities drive each other toward a society that the word weird doesn’t even begin to describe.

Reading the entire issue saddens me. Opponents rarely connect to clarify or dissect their disagreements – only Vinge directly responds to others. Each side can tell itself the others haven’t understood its main claims and arguments. What to do? I can offer to engage others more directly, but I fear I am too low status to be worth the bother.

Added 16Jun: John Tierney blogs the paper here.

  • steven

    I was disappointed to see you describe the industrial revolution and the transition to agriculture as “singularities”. This seems like a radical break from, for example, Vinge’s or Yudkowsky’s sense of the word, and likely to confuse the issue further.

  • http://hanson.gmu.edu Robin Hanson

    Steven, Vinge endorses my usage:

    In human history, there have been a number of radical technological changes: the invention of fire, the development of agriculture, the Industrial Revolution. One might reasonably apply the term singularity to these changes. Each has profoundly transformed our world, with consequences that were largely unimagined beforehand.

  • Rob

    Too low status Robin!? Fishing for compliments? You’ll always be my homeboy.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    I waded painfully through much of this Spectrum issue. One thing I learned – Brooks is another Kevin Warwick:

    “We need not fear our machines because we, as human-machines, will always be a step ahead of them, the machine-machines, because we will adopt the new technologies used to build those machines right into our own heads and bodies.”

    - http://www.spectrum.ieee.org/jun08/6307/3

    I’ve always found the idea that humans will somehow be able to save their primitive, evolved wet bits from going up against the wall by fusing with machines to be very weird:

    “I am skeptical of the claim that we will become cyborgs in any very interesting sense.

    I think that most of the benefits of being a cyborg could be obtained by having the same technology outside the body – so instead of having a chip implanted in your brain you could have a chip outside your brain and then you could access it – just like you do today with your computer – which saves a lot of cost, and you don’t have to have surgery, and you can upgrade it more easily, and so forth.

    The exception would be people with various disabilities: if you are deaf you might benefit from a chip that enables you to hear, but for healthy people I don’t really see the benefit, except in the longer run when technologies become very good and you could do this easily, but by that time I think it will be an even greater benefit to shed your biology altogether and to perhaps upload yourself.

    So this intermediary stage where we will be having partly biological components in our brains and partly chip implants: I’m just not sure there will be such an interval.

    I think it might be too costly and difficult and risky first, and then it will just become easier to go all the way and become an upload.”

    - Nick Bostrom

  • http://360.yahoo.com/ronzobot Ron Fischer

    Robin, your status isn’t too low, it’s that your style of delivery isn’t aligned with the “magazine’s” motivation to generate an emotional response in its readers. Cautious, moderated statements in the delivery of BIG NEWS aren’t of interest. But don’t give that up. A willingness to moderate claims and maintain doubt – isn’t that part of how we stay open to new ideas?

  • http://acceleratingfuture.com Michael Anissimov

    I dread reading this.

  • http://acceleratingfuture.com Michael Anissimov

    I also agree with Steven that using the term “singularity” to describe industrial revolutions, etc., is a radical break from prior use, and makes matters more confusing. I think Vinge is being overly diplomatic in the way he lets anyone redefine his coinage in so many different ways. Part of the problem is that he didn’t define the word all that clearly to begin with.

    As I see it, the Singularity is defined as smarter-than-human intelligence. This is just a disagreement on labels, but it’s still significant.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    FWIW, Alfred Nordmann’s article was the one that struck me as the biggest waste of time.

  • http://hanson.gmu.edu Robin Hanson

    Tim, I agree with Nick about cyborgs.

    Michael, Vinge has been pretty consistent in using the term to describe a horizon of perception, and in his being open to many possible causes of such a horizon. The problem is that many want to appropriate the term and use it to describe just one particular possible cause of such a horizon.

  • Seth Dickinson

    @Michael Anissimov: the one time I met Vinge he struck me as an extraordinarily kind person, so I wouldn’t be at all surprised if he was ‘overly diplomatic’ about things.

  • poke

    I enjoyed the article. I think the model you use is the most plausible scenario for your needs. I’m always surprised that people in the transhumanist and singularitarian community have such a harsh negative reaction to brain simulation. We’re building biologically plausible simulations of single neurons, networks of neurons, and larger-scale brain structures every day. People are investigating whole brain simulations. Imaging techniques are improving. This area of research is well-defined, well-funded, and making clear and measurable progress. None of this is true of ab initio functionalist approaches to AI.

    I doubt actual whole human brain simulations will play a role in future robotics, but software systems will likely be based on biologically plausible models. The idea that we know so little about neurobiology that only a complete simulation will be useful goes too far in the opposite direction (but, again, taking whole brain simulations as necessary is probably a plausible approximation for your economic model). We know quite a bit about neurobiology that’s unfortunately glossed over by people’s expectations.

    The current situation is as if Newton had developed his laws of motion and everyone ignored them because he hadn’t yet explained impetus and they didn’t fit our geocentric intuitions. We shouldn’t expect explanations of “consciousness” and “mental processes” or other concepts in folk psychology and, importantly, shouldn’t use them as yardsticks to measure progress in neurobiology. If you look at biological models of motor control, sensory perception, memory, etc., they’re highly developed and we’re making good progress.

  • http://hanson.gmu.edu Robin Hanson

    Poke, yes I think my conclusions would apply robustly to dominant systems that are a mixture of designed and simulated aspects, as long as design isn’t too large a fraction of the mixture.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Bizarrely, Robin’s article features lots of insect-scale AIs. I think it is more realistic to think that many early AIs will be around human scale – so they fit in with our society – or much larger – Google-scale AIs. Later AIs will probably be larger still – with mobile robots on the scale of the “Transformers” movie, and eventually, static brains the size of planets.

    This difference looks like a side effect of our difference over the role of the human brain and uploads. IMO, AI is extremely unlikely to be based closely on human brains. Sure, there will be some beetle-size brains – but they won’t be the focus of society, any more than beetles are today.

  • http://hanson.gmu.edu Robin Hanson

    Tim, differing from humans in physical size and in mental size would both cause troubles fitting into standard human roles and relations. But if we must choose one or the other I’m guessing it will be easier to have a human mental size and a non-human physical size. It would be easier to manage “sectors” where all physical things are scaled by a certain factor, than to have sectors where all mental things are scaled by some factor.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    So, you conclude that tiny, feeble-minded creatures will dominate!?! IMHO, that’s not remotely what has happened so far in the oceans, or on land. I see no basis for such an expectation.

    Meteor strikes aside, biology seems to me to favour entities with large brains – and it doesn’t look to me as though there is an upper limit in sight.

  • billswift

    @Tyler “biology seems to me to favour entities with large brains”

    There are more bacteria in a person’s mouth than the total number of people who have ever lived. “Bacteria make up some ten percent of the dry weight of mammals.” A paraphrase and a quote from Margulis & Schwartz, Five Kingdoms, 3rd ed, p.44. I also remember reading somewhere, though I cannot find the quote, that the total mass of bacteria on the earth is greater than all other living things.

    Also, consider Hall’s Utility Fog and the gray goo catastrophe: in each scenario there are tiny, unintelligent agents with substantial effectiveness. While I think intelligence is important in itself, it is even more important to prevent or work around potential threats that may make up for a lack of intelligence with other characteristics. Looking at it biologically (evolution-wise), the greatest threat could be rapid reproduction; more likely it will be human greed/power-seeking combined with the ability to rapidly create many instrumentalities (e.g. Brin’s tiny winged cameras coupled with an ability to assassinate anyone their controllers choose).

    The greatest threat I see in the future is for anyone to develop a substantial lead in IA and/or nanotechnology, since enough of a lead would basically place everyone else at their mercy. Contra Eliezer, I think substantial advances in intelligence amplification will lead up to GAI, but will pose many of the same risks.

  • http://profile.typekey.com/SightedWatchmaker/ SightedWatchmaker

    I wonder if extremely intelligent people have a harder time envisioning the power of smarter than human intelligence because they have relatively little experience interacting with people smarter than themselves.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Re: “bacterial biomass” – there may well be many tiny AIs. But what I am saying is that they probably won’t be the dominant lifeform. A gray goo catastrophe wiping out the dominant organisms is an unlikely outcome – since it depends on civilisation (including AIs) totally screwing up. An advanced virus may someday rip through all undefended primitive lifeforms – but life will have moved on by then.

    Yes, sure, inequalities will be bigger in the future. We have huge inequalities today – between companies. Such inequalities usually threaten the smaller entities – not the whole of civilisation. Sufficiently large inequalities may actually turn out to be a blessing – so that we get a walkover, rather than a potentially bloody battle.

    The “all other living things” claim is from “Life’s Grandeur”.

  • Will Pearson

    I found the article plausible in most details, though I think the AIs will be sufficiently understood (even if directly copied from humans) that there will be significant design pressure towards their being completely ego-less. So much so that we won’t consider them individual intentional actors, but will understand them only as parts of a larger intentional agent – one including humans, at least to start with.

  • http://users.ox.ac.uk/~ball2568/ Pablo Stafforini

    Robin, you write: “In the roughly 2 million years our ancestors lived as hunters and gatherers, the population rose from about 10,000 protohumans to about 4 million modern humans. If, as we believe, the growth pattern during this era was fairly steady, then the population must have doubled about every quarter million years, on average.”

    These claims are contradicted by the evidence now available. A recent study in the American Journal of Human Genetics estimated that the population of humans numbered fewer than 2,000 individuals as recently as 70,000 years ago.
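
    For reference, here is a quick check of the arithmetic in the quoted passage – a minimal sketch in Python, assuming the steady exponential growth the quote itself posits (all figures are taken from the quote, not independent data):

    ```python
    # Sanity check of the doubling time implied by the quoted figures,
    # under the quote's own assumption of steady exponential growth.
    import math

    start_pop = 10_000      # protohumans at the start of the era
    end_pop = 4_000_000     # modern humans at the end
    era_years = 2_000_000   # rough length of the hunter-gatherer era

    doublings = math.log2(end_pop / start_pop)  # ~8.6 doublings
    doubling_time = era_years / doublings       # ~231,000 years

    print(f"{doublings:.1f} doublings, one every {doubling_time:,.0f} years")
    # -> 8.6 doublings, one every 231,379 years: roughly a quarter million
    ```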

  • http://hanson.gmu.edu Robin Hanson

    Pablo, when I write on this at more length, I say I’m talking about the size of the ecological niche humans had carved out. Within that niche some groups often take over against other groups, and so the number of our ancestors at a given time may be much less than the size of the niche at that time.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Re: “2,000 individuals as recently as 70,000 years ago” – that was misreporting:

    http://news.slashdot.org/comments.pl?sid=533790&cid=23192160

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    The version I heard was that the genetic bottleneck was probably due to the Toba supereruption in ~73,000 BCE. This was told to me by Milan Cirkovic. Searching for a reference produced:

    Ambrose, S.H. 1998. Late Pleistocene human population bottlenecks, volcanic winter, and differentiation of modern humans. Journal of Human Evolution 34:623-651.

  • http://users.ox.ac.uk/~ball2568/ Pablo Stafforini

    Tim, you are right. It wasn’t CNN’s fault, though: the original National Geographic press release itself mentioned the 2,000 figure. Here’s an instructive post by John Hawks on the issue.

  • http://shagbark.livejournal.com Phil Goetz

    I am puzzled that the flattest spot on DeLong’s growth chart covers, almost exactly, the golden age of Greece through the end of the Roman empire. I checked it versus estimates of world population at http://www.census.gov/ipc/www/worldhis.html, since I expected that graph to track population. It does – except for the period 500 BC to 500 AD, which is a 1000-year population-doubling period, like many before it.
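
    For scale, a minimal sketch of the average annual growth rate implied by a 1000-year doubling, assuming smooth exponential growth (a simplification for illustration):

    ```python
    # Average annual growth rate implied by a 1000-year population doubling.
    doubling_years = 1000
    annual_rate = 2 ** (1 / doubling_years) - 1
    print(f"{annual_rate:.4%} per year")  # -> 0.0693% per year
    ```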

  • Mark Crosby

    Robin Hanson’s “Economics of the Singularity”, just published in the IEEE’s SPECTRUM ONLINE, suggests that “Wages would fall so far that most humans could not live on them”. This model seems to predict the extinction of humans-as-we-know-them and does absolutely nothing to overcome bias (unless it leads you to decide that singularities are unlikely in a continuum ;)

    Consider my own niche: 10 years ago I was the only one in the world who knew how to grease the gears and keep this particular econometric subsystem running. Then, our organization embarked on an effort to clone my skills into a more automated system, with a more relational database, with ‘friendly’, object-oriented use cases – so that the process was understandable to AND relevant to a larger set of ‘consumers’ (in terms of a bureaucracy that could make their living off of this process ;) But, while the new system certainly improves the functionality, that improvement is hardly equal to the increase in labor needed to coordinate the more complex system! Each efficiency advantage seems to create new problems that take more effort to resolve.

    Biosemiotician Stan Salthe talks about extropy being achieved through the refinement of paths for harvesting the energy gradient. But you have not shown that copying brains into machines will increase productivity until you deal with the synchronization issues. These, I’ve found in automating the system I work on, are decidedly non-trivial.

    Actually, this probably supports your case: each input-constrained brain copy is not necessarily free to go off and innovate on whatever it likes, nor does it necessarily have the variety of stimulatory modes available to an ideally autonomous biological being ;) So, bottom line speculation, each machine-mind may require a correlated biological being. In short, wetter & wetter meta-minds may be a necessary counter-balance to what some ‘radical philosophers’ refer to as the “Noise, Pestilence & Darkness” of the increasingly abstract machine. In which case, this Singularity is not necessarily the eugenicist’s wet dream. One of my assumptions is that there is an affective ‘brain’, alongside the rational ‘brain’, in biological beings. Gerd Gigerenzer’s GUT FEELINGS: THE INTELLIGENCE OF THE UNCONSCIOUS may be useful here.

    I’m afraid my idea of superrationality is constrained by the image of Greg Egan’s superintelligence that loses itself in a metaphysical CAVE akin to a fractal replication of obsessive-compulsive calculation of prime numbers (an apt metaphor for hypercapitalism / obvious-advantages-of increased productivity ?) In short, if I may babble some half-baked econ-101, you have not shown how exponential increases in production functions would be matched by corresponding increases in demand functions. Over the long run, I have no doubt that this will occur – it’s the short-run cost to conscious beings and their stratified entropic niches that I worry about (the anti-eugenicist’s nightmare ;) THIS is the ‘bias’ (or danger) that must, somehow, be overcome (at least for the manufacturing of convincing arguments prior to the Singularity ;)

    - Mark (suspecting that “life as a robot” still raises issues of Bucky Fuller’s “pirates of the high seize” with which I previously taunted you (and maybe even perturbed the undaunted wave-front of the 15-year-old Eliezer ;))

  • http://hanson.gmu.edu Robin Hanson

    Phil, historians disagree about population estimates for that period.

  • http://shagbark.livejournal.com Phil Goetz

    Robin, my surprise is because that period – and most especially 500BC-300BC – is generally regarded as the greatest period of growth, in terms of knowledge and civilization, of all history, up until perhaps the 17th century. It should show a steep rise, not a flattening out, unless economic growth is negatively correlated with cultural and technological growth. Something is deeply wrong with either that part of the chart, or with our understanding of how economics and civilization interact.

  • Douglas Knight

    Phil, that it is regarded as the greatest period of growth is rather Euro-centric, while the population estimates are not. Not that looking just at Greece or the Mediterranean will solve that problem.

  • michael vassar

    My impression is that this is not a Euro-centric version of history. Maybe stretch it to 600 BC, but that’s pretty much the maximum period of change everywhere prior to 1700. Maybe it’s Euro-centric not to note that globally, though not in Europe, there were other periods of less rapid but still substantial change, between say 850 and 1150, and between say 2100 BC and 1600 BC?

    In any event, I am pretty strongly suspicious of all pre-modern population numbers. We don’t even have confident estimates of the population of contemporary Afghanistan to within a factor of two! Native American population uncertainty is more like a factor of 30. Historical populations look to me largely like compromises between people who want to guess by summing up the land areas we know were heavily cultivated, casually estimating the efficacy of the agricultural techniques available, and people who simply want to assert long-term progress and extrapolate backwards, globally, simple trends that work in Europe from 1500 to 1700.

  • Tim Tyler

    Ben G – on the bizarre idea of insect AIs:

    “This, I guess, is one of the oddest things about the digital minds in “Diaspora”. After all those centuries, it’s still optimal to have computer memory partitioned off into minds roughly the size of an individual human mind? How come entities with the memory & brain-power of 50,000 humans weren’t experimented with, and didn’t become dominant?”

    - http://www.sl4.org/archive/0101/0481.html

    The idea may make sense if you are crafting a novel for 20th century human readers – so they can identify with the characters – but I can’t see how or why anyone would take it seriously as futurism.

    God may love beetles – but he also made whales – and Google.

  • http://hanson.gmu.edu Robin Hanson

    Tim, the issue is timescale. Diaspora is set centuries later, while my forecasts are for the early period after uploads become possible. Yes, it is unlikely that the ultimate optimal mind size is human, but minds would start out human-sized, with coordination gains to interacting with minds of a similar size.

  • Tim Tyler

    Disagree. The first AIs worthy of the name will most likely be built by Google/NSA/DARPA or similar – and they will probably be huge entities which play on a global scale.

    Uploading is irrelevant. AIs will come a long time before that becomes possible – and once you have AI, uploading becomes a pretty pointless exercise. There are easier ways to simulate a human, if you really want to do that for some reason.

    Similarly, practically nobody builds mechanical birds to fly items about. We have aeroplanes and helicopters for that. The funding fell out of the drive to make mechanical birds a long time ago.

    The only reason to discuss uploading is as a proof-of-concept of the idea of AI not being too far away these days. The idea of uploading as an implementation plan is way out there: surely nobody in their right mind would deliberately create such an unmaintainable, incomprehensible mess for any practical purpose.

  • Tim Tyler

    Writing is a possible cause of any 4,000-5,000 BC growth spurt.

    The timing is not really in favour of farming – since that arose more like 10,000 – 15,000 years ago. Also, logically, the ability to transmit ideas reliably across generations is really the more significant evolutionary development.

    http://en.wikipedia.org/wiki/Tărtăria_tablets

  • Ajay

    I finally got around to finishing this piece from the Spectrum and I have to say the third page, with all the predictions, is incredibly stupid. I cannot imagine a larger collection of stupid statements about the robotic future from a smart guy. Just to name a few examples: what possible gain would AIs have from inhabiting mm-size bodies, rather than solely existing online? Why would “copying… make robot immortality feasible in principle, [but] few robots would be able to afford it” when copying is already so cheap today? The next sentence, about how “few robots would be able to afford robot versions of human children”, I cannot even parse. The future will be a highly complex interaction of so many effects that it’s extremely hard to have any idea today how it will all play out. However, the predictions that Robin makes are so dumb and slipshod that it’s easy to see that it will not play out that way. These predictions are best viewed as insight into Robin’s haphazard understanding of economics and technology more than anything else.

  • Pingback: Overcoming Bias : Key Disputed Values

  • cesium62

    Superhuman intelligence has been experimented with. The moon shot in the ’60s is a fabulous example of superhuman intelligence. Hundreds of thousands to millions of people organized and collaborated to achieve a goal that none of them could come close to alone.

    The need to organize that complexity did not swamp the benefits of the increased complexity.

    To Tim’s point, arguably Google has built the first AI worthy of that name. Last year, on Super Bowl Sunday, while wondering what time the game started, I started to ask Google “what…” and it figured out that the question I probably wanted to ask was “what time does the superbowl start”.

    Up to this point, understanding language has really been what AI is all about. So now we will probably redefine AI so that computers actually have to show creativity in order to be considered intelligent.

  • Peter Scott

    The reason Google was able to predict that you wanted to know when the Super Bowl started was not some generally applicable intelligence; it was that Google noticed that a lot of the people who started out typing “what” on that day went on to type “time does the superbowl start”. That trained a hidden Markov model somewhere, and when you came along, it had a pretty good prediction going. The math here is a lot easier than training a computer to play chess, or most of the other classic AI feats that looked more intelligent than they were.

    No understanding of language was necessary for this.
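
    A minimal sketch of the frequency-based completion described above; the query log here is invented for illustration, and Google’s real system is certainly far more elaborate (recency weighting, personalization, and so on):

    ```python
    # Toy prefix-completion over a logged set of queries.
    from collections import Counter

    query_log = [
        "what time does the superbowl start",
        "what time does the superbowl start",
        "what time is it",
        "what is the singularity",
    ]

    def complete(prefix, log):
        """Return the most frequent logged query starting with `prefix`."""
        matches = Counter(q for q in log if q.startswith(prefix))
        return matches.most_common(1)[0][0] if matches else None

    print(complete("what", query_log))
    # -> 'what time does the superbowl start'
    ```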

    • cesium62

      If it walks like a duck and quacks like a duck, we normally call it a duck. It doesn’t matter if it’s hidden Markov models or a man in a Chinese room. Predicting the future correctly is a sign of intelligence no matter how it’s implemented. Communicating with and understanding people is a sign of intelligence no matter how it’s implemented.

  • Pingback: Overcoming Bias : Robot Econ Primer

  • Alexander Gabriel

    I think this is impressive. But I am starting to question the locations of different “singularities.” So, for example, maybe we view the human species as some extreme environmentalists do – as just one very successful animal species. Then maybe there are no singularities, because humanity and our GDP increases are no more significant than dinosaur population increases.

    Or, maybe more likely, we acknowledge humans are special but deny that any change involving “the end of the human era”, as Vinge says, is predictable based on human GDP, so that the evolution of humanity is the only singularity. This might make sense in that human evolution was the one of your singularities that totally changed the optimization process: formerly DNA exchange, afterwards the exchange of thoughts using words. AI might cause another such leap.