Trillions At War

The most breathtaking example of colony allegiance in the ant world is that of the Linepithema humile ant. Though native to Argentina, it has spread to many other parts of the world by hitching rides in human cargo. In California the biggest of these “supercolonies” ranges from San Francisco to the Mexican border and may contain a trillion individuals, united throughout by the same “national” identity. Each month millions of Argentine ants die along battlefronts that extend for miles around San Diego, where clashes occur with three other colonies in wars that may have been going on since the species arrived in the state a century ago. The Lanchester square law [of combat] applies with a vengeance in these battles. Cheap, tiny and constantly being replaced by an inexhaustible supply of reinforcements as they fall, Argentine workers reach densities of a few million in the average suburban yard. By vastly outnumbering whatever native species they encounter, the supercolonies control absolute territories, killing every competitor they contact. (more)
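The Lanchester square law mentioned in the quote can be sketched numerically. This is a minimal illustration, not from the article; the effectiveness parameters `alpha` and `beta` and the simple Euler integration are my own assumptions:

```python
# Sketch of Lanchester's square law: dA/dt = -beta*B, dB/dt = -alpha*A.
# With equal per-soldier effectiveness (alpha = beta), the quantity
# A^2 - B^2 is conserved, so a 2x numerical edge yields roughly a 4x
# advantage in fighting strength -- hence "applies with a vengeance"
# for a species that can flood a front with cheap reinforcements.

def lanchester(a0, b0, alpha=1.0, beta=1.0, dt=0.001):
    """Euler-integrate the square-law ODEs until one side is annihilated."""
    a, b = float(a0), float(b0)
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# Two armies, one twice the size: survivors of the larger side
# approach sqrt(A0^2 - B0^2) = sqrt(400 - 100), i.e. about 17.3,
# so the smaller force is wiped out at modest cost to the larger.
survivors_a, survivors_b = lanchester(20, 10)
```

The square-law intuition is exactly why overwhelming numbers, not individual quality, decide these ant battles.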

Shades of our future, as someday we will hopefully have quadrillions of descendants, and alas they will likely sometimes go to war.

  • Jeffrey Soreff

    The key word in the quote is “cheap”.
    The key to preventing a nightmare like that is to extend and strengthen
    the demographic transition that has been reducing birthrates. It keeps
    the value of human life high, and reduces the number of us who wind
    up as cannon fodder.

    • Anonymous

      Think cloning or artificial drones. Of course a future interstellar civilization mostly won’t contain homo sapiens in their current biological form. It’s an ill-suited phenotype for either space colonization or warfare.

      • Jeffrey Soreff

This is kind of a reply to your Nov 22 post, but I’m putting it here
        to distinguish it from the discussion of something more human.

        You may be right, in the sense that the dominant (in some sense,
        the bulk of the mass? energy? computation?) phenotype of a
        long-future civilization is unlikely to be biological homo sapiens.
        Hanson’s original post talked of “descendants”, and it is not at
        all clear that such structures are descendants in any obvious way.

        I’m not saying this out of a purely speciesist bias against
        uploads/ems. Consider even a very broad view of human that
        takes any chunk of computation which is approximately human in
        scale and approximately human in autonomy to be human.
        Even _that_ might not exist in an equilibrium evolutionary mix
        (with no global coordination). Consider our existing communications
        technology: Optical fibers have been pushed to terabaud rates.
        A chunk of computing hardware with the equivalent of the
        storage and computation parameters of a human brain is likely
        to be packaged together with enough communications bandwidth
to act more like a lobe of something larger than like an autonomous
        upload/em.

        If that happens, the question of whether “people” are
        being used as cannon fodder seems unanswerable. Nothing
        left in the mix looks (computationally) enough like a person for
        any of our intuitions about quality of life to be meaningful.

      • Anonymous

        Maybe an entity doesn’t need to be a person in order to have a “quality of life”. If it has even very rough equivalents of human pleasures and distress, that can count at least for utilitarian reasoning.

As for whether or not such beings would be our “descendants”, I personally feel inclined to call them that metaphorically, if they are products of what human civilization later becomes or creates. If I’m not mistaken, Robin Hanson would probably consider ems to be our descendants. If it gets very alien and Borg-like, the psychological identification factors obviously decrease, but I personally don’t identify with most of humanity terribly much anyway.

      • Jeffrey Soreff

        Unfortunately, there aren’t really existing instances of structures
        that we can point to with human-sized chunks of computation
        but much larger communications bandwidths to use as examples
to set our intuition. The closest examples are probably pieces of
        our brains: What does it mean to talk about the quality of life
        of our visual cortex?

        By comparison, an em/upload is an as-close-to-isomorphic map
        of a person as technology will allow (if they ever get constructed -
        but that is a whole separate discussion). I have no problem calling
        an unmodified em/upload a descendant – but as Hanson himself
        has said, if ems/uploads are set up so that they can modify,
        replicate, and compete, and therefore evolve, they will rapidly
        move away from the current human norm. At that point, even just
        picking out what _scale_ of chunk of the computational ecology
        to identify with as “a” descendant becomes very arbitrary…
        What sized chunks would one even look within for analogs of
        pleasures and distress?

      • Anonymous

        What does it mean to talk about the quality of life
        of our visual cortex?

        I think it’s very colorful. ;)

        What sized chunks would one even look within for analogs of pleasures and distress?

        I’m not sure to what degree it is about size, I think it might be more fruitful to better study, in great detail, how distress and pleasures are implemented in the human brain and then create a formalism to look for analogues in other systems.

      • Jeffrey Soreff

        I think it might be more fruitful to better study, in great detail, how distress and pleasures are implemented in the human brain and then create a formalism to look for analogues in other systems.

        That seems reasonable. At least it gets away from the problem
        where every negative feedback system looks like it has an
        implicit “goal”, and a thermostat with a low setting in a hot room
        gets classified as being frustrated…

    • Konkvistador

      Don’t be silly. The demographic transition is a temporary affair, even in a purely classical non-em Homo Sapiens population.

      Evolution finds a way and under modern conditions is very rapid.

      • Jeffrey Soreff

        Show me your examples where demographic transitions have
        been reversed. There have been Malthusian arguments for
centuries now, and they’ve been consistently wrong. It’s a bad
        thing to be confident about. The data are against you.

      • http://www.gwern.net gwern

You want data, Soreff? The general point about evolution is lost on you, you have never heard of highly fertile subgroups like Orthodox Jews or the Amish who have massively increased their numbers even in countries considered to have undergone the demographic transition, and you demand higher-level data than that? Well, fine, here you go: http://www.nature.com/nature/journal/v460/n7256/abs/nature08230.html

      • Jeffrey Soreff

Touché, Gwern. I am quite aware of the highly fertile subgroups.
        And yes, I am aware of the general point about evolution making
        the fertile subgroups dominate. Nonetheless:

        a) Humans aren’t completely at the mercy of differential reproduction
        rates. Coordination may be hard, but it isn’t _impossible_. We can
        build institutions to detect and react to rapidly growing groups.
        The examples of Japan, South Korea, and Canada in the paper
        suggest that there is some choice of institutions that continues
        to avoid exponential growth, even in the high HDI regime where
        the authors see evidence of fertility increases.

        b) The same argument about rapidly growing groups could have
        been made in Malthus’s time – and some of the groups have
        growth rates a factor of two per generation above the general
        population. There have been enough generations since then for
        this to have had a major impact if it were _just_ a matter of
        differential growth rates. Nonetheless, the population as a whole,
        including highly fertile subgroups, _hasn’t_ bred itself back to
        third world subsistence levels. There are pieces missing from this
        picture.

        c) Yes, I thank you for the link. Note that, except for the single
        outlier at TFI~3 and HDI ~0.91, the data could just as well be
        interpreted as saying that the fertility rate goes approximately
        flat at around a TFI of 1.5 above an HDI of ~0.8. Except for the
        one outlier, the TFI is barely above 2.0 for any of the high
        development data points. This isn’t an explosion. There is
        enough time to look for countermeasures before drowning in
        a tidal wave of human flesh.
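The compounding claim in point (b) can be made concrete with back-of-the-envelope arithmetic. The dates and the 25-year generation length below are my own illustrative assumptions, not figures from the thread:

```python
# Point (b), sketched: a subgroup whose growth rate is a factor of two
# per generation above the general population compounds its relative
# share geometrically. Malthus wrote in 1798; assuming ~25 years per
# generation, roughly 8 generations have elapsed by the time of this
# discussion (c. 2011).
generations = (2011 - 1798) // 25        # about 8 generations
relative_growth = 2 ** generations       # 2^8 = 256x relative increase
```

A 256-fold relative increase would have been a major demographic impact if differential growth rates were the whole story, which is Soreff's point that pieces are missing from that picture.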

      • Anonymous

        Jeffrey Soreff and gwern, aside from the general relevance of your discussion, the OP was focused on a time when trillions go to war, which will of course be a time that sees AI war drones and rapid artificial reproduction of immediately functional intelligent beings with memories and experiences fully intact. If you talk about a time when both sides in conflict have supersoldier factories, your discussion is not applicable to the original point.

    • roystgnr

      Underpopulation keeps the value of human life high in the same sense that a nuclear holocaust would keep the value of fresh water high. If you think that inherent human dignity or unadulterated water is underappreciated, the best solution is increased demand, not reduced supply.

  • Alexander Kruel

    What is preferable, an empty universe or a universe with trillions at war? My gut feeling says war. There are other boring alternatives though.

    I might regret that though…

    It were much better that a sentient being should never have existed, than that it should have existed only to endure unmitigated misery. — Percy Bysshe Shelley

    In any case, the end of the universe already looks worse than war.

    After an unimaginable war over resources all this beauty will face its inevitable annihilation as the universe approaches absolute zero temperature.

    Imagine how many more entities of so much greater consciousness and intellect will be alive in 10^20 years.

    The end will be a slow torture to death, a torture that spans a possible period from 10^20 years up to the Dark Era from 10^100 years and beyond. This might be a period of war, suffering and suicide. It might be the Era of Death and it might be the lion’s share of the future.

Possibly trillions of God-like entities will be slowly disabled due to an increasing lack of resources. This is comparable to suffering from Alzheimer’s, just much worse, much longer and without any hope.

    To exemplify this let’s assume there were 100 entities. At a certain point the universe will cease to provide enough resources to sustain 100 entities. So either they are going to kill each other or reduce their mental capabilities. This will continue until all of them are either killed or reduced to a shadow of their former self. This is a horrible process that will take a long time. I think you could call this torture until the end of the universe.

So what if it is more likely that maximizing utility not only fails, but that overall utility is actively minimized, i.e. the relative amount of suffering increases? What if the ultimate payoff is notably negative? If it is our moral responsibility to minimize suffering, and if we are unable to minimize suffering by actively shaping the universe, but rather risk increasing it, what should we do about it? Might it be better to believe that winning is impossible than that it’s likely, if the actual probability is very low?

    • Anonymous

You seem to disregard the possibility of well-negotiated resource distribution and ultimately suicide by the god-like entities. There’s also no real need to call something torture if those who experience it have the ability to switch off or tune down their own perception of suffering. A person with Alzheimer’s does suffer, but mostly from factors that intelligent benevolent high-tech intervention could prevent. And of course, if we actually “allowed” people to die rather than treating them like slaves, no one would actually have to go through a net-negative experience phase. These options will seem trivial by the standards of any intelligent entity. It’s mostly religious people who fail to meet such standards.

The only danger I could see is Darwinian selection forcing perception modes that can never accept self-destruction, or that leave suffering involuntary. Maybe those modes are so adaptive that universal Darwinism will never allow their phasing out, even by superintelligent standards. But that seems overly pessimistic.

      • http://theviewfromhell.blogspot.com Sister Y

The only danger I could see is Darwinian selection forcing perception modes that can never accept self-destruction, or that leave suffering involuntary. Maybe those modes are so adaptive that universal Darwinism will never allow their phasing out, even by superintelligent standards. But that seems overly pessimistic.

        Seems merely accurate to me, and explains our current situation very well. It may even provide a sort of biological upper limit on reflective intelligence.

      • Anonymous

        But we don’t currently have very effective mind-design tools at our hands. This will almost certainly change in the future; an organizational form that is capable of enabling quadrillions of entities can almost certainly alter motivational structures and perception modules with finesse. All sensory input will be there because it is deliberately chosen to be there.

        For instance, entities engaging in physical types of combat will have their motivational structure streamlined to reflect that. They will have no fear of pain or distress, they will not have our reptile brains that override the volitional parts of the frontal lobe because of tissue damage, they won’t stop to think how horrible their existence is because the thought will never even occur to them.

Imagine if today, we had the ability to re-design the minds of factory-farmed animals at will. Would we keep them in a state of suffering? No, we would turn them into willing meat drones to be harvested, every single behavior would be adapted to factory-farming. The suffering stems from a mismatch between modern purpose and legacy mind design.

    • Michael Wengler

We have an extremely hard time predicting what will happen if we put a few thousand off-the-shelf electrical components together in a circuit. What makes you think we have ANY idea what the universe will look like 10^20 years from now? The end of the universe is even wackier to predict than the funner outcomes of global warming. Will it be an infinite expansion to 0 K? Will it be a maximum expansion at which point the universe will fall back in on itself and explode again? Blah blah blah.

At the deepest level, you are putting the model before the reality. There isn’t a reason in the world to think that the physics we have developed is any more than rules-of-thumb that apply over the 10 or 20 orders of magnitude we have accessible to us. Thinking to the point of depression that one possible abstraction of some subset of those rules of thumb will apply to our actual universe at some (in my opinion literally) unforeseeable future time is just a fascinating human bias, that the model is the reality.

  • Anonymous

    >trillions at war
    >shades of our future
    >hopefully

  • Anonymous

I wonder if the future contains drives toward soldiers that can cheaply be created and that never suffer. Imagine a personality type that is fearless and can’t perceive pain other than as a neutral informative error signal. These drones could be completely “happy” with their state as cannon fodder.

    In this case, the – currently probably accurate – adage “war is hell” could be downgraded to “war is a waste of useful resources”.

    I also wonder if future civilizations will recognize this and design superior conflict resolution options that prevent this mutual resource waste.

    • http://www.gwern.net gwern

      One wonders how much pain soldier-caste ants can feel, even relative to other ants.

      • Robert Koslover

I don’t wonder about that. I simply presume it is zero, based on their minuscule brain sizes. Do you have a reason to think otherwise?

      • http://entitledtoanopinion.wordpress.com TGGP

        Sounds like a question for Alan Dawrst.

      • http://silasx.blogspot.com Silas Barta

        Individual ants are more like human cells than humans. If there is a level at which the concept of “pain” applies, it is probably at the colony level, or perhaps in just the queen.

      • Michael Wengler

There is, I think, a pretty good chance they can’t feel any. The purpose of pain, in evolutionary terms, is to keep an organism alive by concentrating its attention on the pain, the source of the pain, and removing the pain. It seems fairly clear the solution adopted by ants in evolution is “lots of cheap disposable units” rather than “complex, effective, robust, repairable units.” To the extent that this analysis is right, there would be no evolutionary reason for pain mechanisms associated with being dismembered to have been evolved, or to have been maintained in all the evolution that has obviously gone on in the development of the marauder ants.

      • http://theviewfromhell.blogspot.com Sister Y

        Everything depends on whether pain is cheap, in evolutionary terms. If by being vulnerable to extreme pain an organism gets a tiny fitness advantage, it will develop that advantage. Evolution doesn’t care if you’re in pain; it only cares how many copies of you get made. If suffering has little fitness cost or any fitness advantage, then evolution won’t economize on pain.

        The tendency to think it will is the tendency to imagine that the world is just.

      • Anonymous

Pain isn’t evolutionarily cheap, even if its implementation is. It has behavioral costs: it prevents otherwise goal-seeking behavior and replaces it with stimulus avoidance. An organism that gives up seeking mates or food because most experiences associated with the search hurt it too much is a dysfunctional phenotype.

  • http://omniorthogonal.blogspot.com mtraven

Trillions of ants is a lot, but it’s not like trillions of individuals as we think of them, so it’s probably a misleading way to think of it. There are about 100 trillion microbes in a single human body; they too are at war all the time.

    • Anonymous

      That doesn’t mean it’s an implausible prediction for a large future though.

  • Albert Ling

But if an advanced grouping of beings wants to avoid war, don’t they just have to accelerate to near the speed of light and then just drift away from the crowd?

The problem with the current world is that everyone is stuck close to everyone else and can’t get away even if they want to.

    • Anonymous

      Energy costs?

    • Michael Wengler

      All the good technology and production comes from concentrations of organisms. When you run away from the population centers you leave the problems AND the benefits. And based on where we see people and other social animals now, it is pretty clear the benefits are much more important than the problems.

  • Rob W

    If trillions of creatures are fighting total wars with current and future tech, won’t they annihilate one another pretty quickly?

    • Mitchell Porter

      Perhaps you underestimate the possibilities. You blast your enemy’s solar system into atoms? But structures can exist in plasma; they might have planned to reconstitute themselves. You plan to chase them down and delete their control systems? But the universe is large; will you pursue them into intergalactic space? I can see conflicts going on for a very long time.