On Berserkers

Adrian Kent is getting a little publicity for posting his ’05 paper on the berserker hypothesis, “that evolution has very significantly suppressed cosmic conspicuity”, i.e., that many aliens are out there, but hiding from each other. He advocates taking the hypothesis seriously, but doesn’t actually argue for the coherence of any particular imagined scenario. Kent’s excuse:

It would be very difficult to produce a model that convincingly predicts the likelihoods and spatial distributions of the various strategies, since the answer surely depends on many unknowns.

He instead just claims:

The hypothesis is certainly not logically inconsistent and it seems to me not entirely implausible.

So what then is Kent’s contribution? Apparently it is a bunch of strategy fragments, i.e., strategy issues that aliens might consider in various related situations. It is not clear that these are much of a contribution, at least relative to the many contained in related science fiction novels. But, well, here they are:

Even granted an exemplarily stealthy attack and takeover, the mere fact that the previously conspicuous species B is no longer so gives a clue to observers elsewhere that some other species A, with its own potentially interesting resources, may now be in occupation — and hence that it may also perhaps be worth exploring the neighbourhood for other habitats that species A occupies. …

A really cautious predator might perhaps try to take over species B’s habitat while giving the impression that species B had self-destructed. This might or might not be believed: however good the cover story, it would presumably lose credibility if a number of independent species on different habitats in a given region appeared to self-destruct within a statistically implausibly short time interval. If B’s takeover is detected or inferred by species C, they might be tempted to jump in. But so might species D, E, and so on. …

Species may be induced to predate on conspicuous near neighbours even if their general strategy is to remain inconspicuous and avoid predation. Noisy neighbours are liable to attract unwelcome attention to the neighbourhood. One could perhaps run as far away as possible, but this requires finding another unoccupied and inconspicuous habitat. … There is the added danger that one risks becoming conspicuous to predators during the search. …

Assuming there is currently no dominant predator, any predators which attempted dominance in the past must have come to grief. (Perhaps this seems unlikely: if it was defeated by another predator, why would that predator not have come to dominate? And is it really plausible that a very powerful but reticent stay-at-home could, when threatened, have taken out a predator with galactic ambitions?)

Kent seems to neglect the value of constructing any remotely plausible self-consistent equilibrium. We might gain great insight from such models, even if they are far from accurate on “likelihoods and spatial distributions of the various strategies.” Kent also seems to overestimate the resource value of inhabited places, relative to uninhabited places. His key assumption:

One imagines that an inhabited planet, together with the ecosystem it supports, constitutes a resource that would be valuable to (some significant subset of the) species originating on other planets.

Inhabited places might be a bit more valuable, but mainly they’d be of interest as potential competitors for all the other resources around.

I’d be interested in working with (math or sim) competent folks to more formally model berserker scenarios.
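
To give a sense of how little machinery is needed to get started, here is a minimal sketch, with all payoffs and detection probabilities invented purely for illustration: civilizations choose between loud expansion, quiet expansion, and predation on whatever they detect, and the population shares of the three strategies evolve by simple replicator dynamics.

```python
# Toy replicator-dynamics sketch of a "berserker" strategy game.
# All payoffs and detection probabilities are invented for illustration only.
import random

STRATEGIES = ["loud", "quiet", "predator"]

def payoff(strategy, shares, rng):
    """Rough one-generation payoff for a civilization playing `strategy`."""
    p_pred = shares["predator"]
    if strategy == "loud":
        # Loud expansion pays well, unless a predator notices you.
        return -10.0 if rng.random() < 0.9 * p_pred else 3.0
    if strategy == "quiet":
        # Quiet expansion pays less, but is rarely noticed.
        return -10.0 if rng.random() < 0.05 * p_pred else 1.0
    # Predators profit from loud prey, a little from quiet prey,
    # and pay a crowding cost when other predators are common.
    return 2.0 * shares["loud"] + 0.2 * shares["quiet"] - 1.0 * p_pred

def step(shares, rng, samples=2000, rate=0.1):
    """One generation of discrete replicator dynamics."""
    fitness = {s: sum(payoff(s, shares, rng) for _ in range(samples)) / samples
               for s in STRATEGIES}
    mean = sum(shares[s] * fitness[s] for s in STRATEGIES)
    unnormalized = {s: max(1e-6, shares[s] * (1.0 + rate * (fitness[s] - mean)))
                    for s in STRATEGIES}
    total = sum(unnormalized.values())
    return {s: unnormalized[s] / total for s in STRATEGIES}

rng = random.Random(0)
shares = {"loud": 0.6, "quiet": 0.3, "predator": 0.1}
for _ in range(200):
    shares = step(shares, rng)
print({s: round(shares[s], 3) for s in STRATEGIES})
```

Even a toy like this makes the interesting questions concrete: how common do predators have to be before loudness stops paying, and does “quiet” ever strictly dominate, or only once predators are already widespread?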

  • David

    I agree that there is little value for an advanced civilization in taking over inhabited planets. Such civilizations will be able to convert common inorganic matter into whatever they value (my guess: computers, and power generation systems to run them, plus hardware that supports further interstellar expansion).

    I find it impossible to coherently imagine an advanced civilization that judges itself to have “enough” computing power. To run artificial minds and other software, these civilizations will be churning out drones that autonomously mine mineral resources and build solar panels (or other power-conversion hardware), as well as more computers and copies of themselves. This autonomous operation could easily spread to other solar systems without any implausible propulsion system. Even if they used primitive hydrogen bombs for propulsion, the colonization of nearby solar systems would be well underway within mere centuries. So I definitely think that there is overwhelming motive and opportunity for a civilization to trigger a sphere of interstellar colonization whose outer edge expands at 0.1%c or more.

    If two such colonization spheres encounter each other, I think that war is basically inevitable. From a moral point of view, such a massive war would literally be the worst thing that could happen in the galaxy.

    From this, the conclusion is inescapable: It becomes a moral imperative that any emerging civilizations must be sterilized before they get a chance to colonize beyond their solar system. Doing this is less evil than waiting for the inevitable war of interstellar colonizing powers, on a pure utilitarian calculus. So every moral civilization in our galaxy will realize their duty to annihilate any other technological civilization.

    If they’re smart, they would do this in the least expensive way possible. They would almost certainly not fly here personally in spaceships. More likely, they would launch an attack drone which would replicate itself somewhere in our solar system and the resulting swarm of drones would autonomously seek out and destroy any signs of life in it. Data and perhaps samples for a “zoo” might be sent back to the home world.

    Since accelerating to a large fraction of c is impractical and expensive for any civilization, the destruction drone would only go as fast as is necessary for it to safely arrive before our interstellar colonization phase happens. In our case, there is no need to hurry. A probe can safely arrive in 200 years and its builders could be sure that it will preempt our interstellar colonization.

    So if the galaxy is moral and rational, there is a sterilization probe en route to our solar system right now, set to arrive in 200 years. I don’t think this is an implausible explanation of the Fermi paradox.
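
    For scale, a quick back-of-envelope check on that timing (the 200-year deadline is the figure above; the distances are just round examples, not estimates of where such builders would sit): the cruise speed the probe needs scales linearly with the builders’ distance.

    ```python
    # Cruise speed (as a fraction of c) needed to cover a given distance
    # within a given deadline, ignoring acceleration time and relativity.
    # The 200-year deadline is from the comment above; distances are round examples.
    def required_speed_fraction_c(distance_ly, deadline_years):
        return distance_ly / deadline_years

    for distance_ly in (10, 50, 100):
        v = required_speed_fraction_c(distance_ly, 200)
        print(f"{distance_ly:>3} ly in 200 years -> {v:.2f} c")
    # 10 ly -> 0.05 c, 50 ly -> 0.25 c, 100 ly -> 0.50 c: a leisurely, cheap probe
    # only meets a 200-year deadline if its builders are already fairly nearby.
    ```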

    • KrisC

      Certainly a rational solution, but not the most moral. Ultimatums, containment, debate, memetic engineering, even a policy of destroying all launched vehicles (forcing us to starve on local resources). All are more moral than extinction.

      One point of contention: I believe our civilization will be able to produce self-replicating interstellar probes within 200 years. While en route, these probes could be very difficult to detect.

      • Hedonic Treader

        All are more moral than extinction.

        Unless the aliens consider error signals such as our pain to be ethical negatives, while they have redesigned or minimized such signals in themselves. They might judge our existence net-negative due to the suffering our biosphere contains. If they’re average-maximizing utilitarians, or negative utilitarians, while placing no particular value on our forms of biological and cultural diversity, our annihilation could be morally preferable to our containment in their calculus. If they don’t care about our well-being, annihilation is simple and clean.

    • erik

      biological precursors to advanced technological societies (e.g., bacteria) would be detectable via their effects on the atmosphere, so the most moral sterilization (if we grant your absolutist moral system) would occur before anything like humans arose.

      luckily, we don’t have to worry about agreeing on morals — the dynamics of replicating systems depend on competition for resources, never morals. if a ‘moral’ civilization can be outcompeted, they die. morals arise intra-civilization, to improve group competitiveness through cooperation. if a moral strategy is most competitive, then we will be invited to cooperate, and only destroyed if we decline.

    • http://danweber.blogspot.com/ Dan Weber

      If two such colonization spheres encounter each other, I think that war is basically inevitable. From a moral point of view, such a massive war would literally be the worst thing that could happen in the galaxy.

      From this, the conclusion is inescapable: It becomes a moral imperative that any emerging civilizations must be sterilized before they get a chance to colonize beyond their solar system

      This logic seems broken. Even if “conflict between two interstellar cultures is the worst thing ever” is the rule, sterilization only works if done against a weaker opponent. (Against an equal opponent you cause exactly what you seek to prevent.) If the opponent is weaker, you can just get them to capitulate.

      Berserkers may not follow that morality, but they also wouldn’t care if war is moral.

  • http://www.staresattheworld.com Aurini

    Orion’s Arm – an open-source post-human science fiction setting – has a chilling ‘legend’ attempting to explain the Fermi paradox: http://www.orionsarm.com/eg-article/480135af5ac81

  • http://www.hopanon.typepad.com Hopefully Anonymous

    Awesome. I’ve been waiting for years for you to do more quant modeling of some of your qualitatively expressed theorizing.

    “I’d be interested in working with (math or sim) competent folks to more formally model berserker scenarios”

  • William Newman

    I’ve never understood why these scenarios should be considered a plausible threat. I think there’s a good probability that our civilization will take no more than 100 years to start significant starfaring. As we understand physics, any threat capable of answering the Fermi question by shutting us down either needs to be capable of messing with causality itself (which could make it rather tricky to think about countermeasures…) or needs to be lurking within 50 light years. Given an adversary capable of spending the resources to garrison every 50ly sphere with a picket force capable of delivering an advanced-civilization-killing attack at near the speed of light over 50ly, it’s unclear to me why such an adversary wouldn’t instead spend (similar or lesser, I’d think) resources to build many much smaller, much slower (0.01c maybe) probes, and garrison each solar system with a probe. And unless we are thoroughly confused about where and how life can form, there are probably only a modest number of very promising sites in the 50ly sphere, few enough that the adversary could garrison each promising site with a second wave of highly capable, paranoid Berserker Inquisitor probes, more than capable of sending back urgent and detailed tut-tut-careful-here messages by the time our ancestors developed nervous systems or specialized immune cells, and still have spent less (in broad capability measures like interstellar delta-momentum and maintenance of complex systems over millions of years) than would be required for the long-range last-minute CivilizationSniper system.

    So unless someone has a good reason to think the probes solution is less practical than a CivilizationSniper floating so far away that it didn’t notice our biosphere long ago, it seems to me that a civilization which falls into optimizing “kill ‘em all” is not a problem soluble by trying to hide. Of course, a civilization could fall into optimizing something we don’t understand, thus possibly producing CivilizationSnipers just as it could possibly produce a galaxy of paperclips. But a civilization optimizing something we don’t understand would be so fundamentally strange and arbitrary that betting about it seems fallacious in one of the same ways as Pascal’s Wager: it’s privileging the unlikely hypothesis that the inscrutably insane violent neighbor only shoots neighbors that don’t try to hide over the equally-plausible-sounding unlikely hypothesis that the inscrutably insane violent neighbor only shoots neighbors that *do* try to hide.
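
    As a rough check on the “garrison each solar system” arithmetic above, using an approximate local stellar density of about 0.004 stars per cubic light year (an assumed round figure, not anything from the post or paper), the number of systems to cover is modest:

    ```python
    # Rough count of stars a per-system probe strategy would need to cover.
    # A stellar density of ~0.004 stars per cubic light year is an approximate
    # local value; the 50 ly radius is the figure used in the comment above.
    import math

    def stars_within(radius_ly, density_per_ly3=0.004):
        volume_ly3 = (4.0 / 3.0) * math.pi * radius_ly ** 3
        return density_per_ly3 * volume_ly3

    print(f"~{stars_within(50):,.0f} stars within 50 ly")    # on the order of 2,000
    print(f"~{stars_within(100):,.0f} stars within 100 ly")  # roughly 8x as many
    # A few thousand slow, cheap probes per 50 ly sphere, versus one picket force
    # able to deliver a near-lightspeed killing blow anywhere in that sphere.
    ```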

    • David

      All very good points, especially the one about the probable economy of slow but ubiquitous monitor drones that send out warnings when they notice biology getting complex.

  • Phil Goetz

    A problem with conquering a distant star system is that the distances are so large that it’s better to exploit the resources in place than to ship them back. This means the empire that organizes and funds the conquest won’t benefit from it.

    • erik

      wrong — if you are eliezer, you benefit by not being turned into paper clips in the accident you are sure others will cause. i don’t know why he doesn’t take solace in observing that we see no galaxies made of paper clips.

  • http://daedalus2u.blogspot.com/ daedalus2u

    The Great Filter is ahead of us and I don’t think we will survive it. I think we have a better shot at it than other alien civilizations because of some idiosyncratic aspects of Earth and human physiology, but I don’t think they are enough. There are some who can see and understand what those difficulties are, but collectively humans can’t address them because not addressing them is more profitable and produces more short-term benefits than addressing them.

    I think we are in the midst of a filtering event right now, and the fact that we collectively can’t address it is symptomatic of why there will eventually be a filtering event that we don’t survive. The current filtering event is AGW, but a microcosm of that is the current balancing of the US budget on the backs of the poor.

    There is an editorial that mentions what the root of the problem is (though not in so many words): subsidizing the most wealthy at the expense of the less wealthy. It is the externalizing of costs and the internalizing of profits, the privatization of profits and the socialization of losses.

    http://www.thedaily.com/page/2011/03/24/032411-opinions-column-japan-dalmia-1-2/

    The liability cap that nuclear power plants have is an enormous subsidy that is pretty easy to see. The limits on corporate liability are no different. Lobbying by corporations that reduces their taxes is just another subsidy. The government has to be funded by something; when the wealthy avoid being taxed, the tax burden falls on the less wealthy. That is a subsidy.

    When individuals prioritize their own comfort (note: “comfort”, not survival) over the survival (note: “survival”, not comfort) of arbitrarily many other humans, it sets up an unsustainable dynamic where the many survival seekers will eventually be compelled to destroy the individual comfort seekers in order to survive. That is the dynamic going on in Libya, and it typifies one of the rules of conflict: never wound a king. Never deal a non-fatal blow to someone who can mobilize sufficient resources to fight back and kill you.

    The tragedy of the commons is an example of trying to externalize your costs while internalizing your profits. That can only work so long as there are external resources that can be tapped. When there are no external resources left, consumption has to go down. That is why essentially every fishery that has ever been exploited has been overexploited until it was destroyed.

    Dealing with AGW is a cost that producers and consumers of fossil fuels have externalized. Externalization doesn’t make the cost go to zero, it just makes it zero on the balance sheet of the entity that has externalized it.

    It is xenophobia that allows individuals to prioritize their own comfort over the survival of others. To produce a galaxy spanning civilization, the problem of xenophobia has to be solved. Unless it is, then as populations on different planets genetically drift apart (as they must), and they become different species, they will exhibit xenophobia and make genocidal war on each other.

    It only takes one new species exhibiting genocidal xenophobia to wipe out all the non-xenophobic species. Then as that xenophobic species drifts and forms new species, some of them will be xenophobic too.

    The patriarchal alpha male phenotype has to express the equivalent of xenophobia to be able to compete against other males for females (or for anything). The alpha male has to take resources for himself and deny them to other males or he isn’t the alpha male. When those taken resources are used to take and maintain more resources, the alpha male has to exhibit xenophobia to deny resources to others because those resources might be used against him.

    If we do find a galaxy spanning civilization, it won’t be a patriarchy, or any other kind of oligarchy. Unless we find a way to permanently and irreversibly shape human society into a non-oligarchy, we will filter ourselves out.

    I appreciate that those at the top of the social power hierarchy can’t appreciate that what got them to the top and keeps them there is unsustainable and will eventually destroy humanity. Of course they can’t appreciate it, if they could, they wouldn’t be at the top.

    • Hedonic Treader

      To produce a galaxy spanning civilization, the problem of xenophobia has to be solved. Unless it is, then as populations on different planets genetically drift apart (as they must), and they become different species, they will exhibit xenophobia and make genocidal war on each other.

      There is the possibility of creating forms of life that don’t mutate. Hypothetically, a singleton can start from Earth with a distribution and propagation algorithm that assures non-mutation, coordination, and utility optimization from the start. However, such a system would probably have a hard time adapting to unforeseen challenges. The alternative would be much harder to quality-control and coordinate. There would be room for much conflict, and enormous suffering, which raises the question of whether it should really be considered a good thing to allow sentient life to spread beyond this solar system. I find that questionable.

      The current filtering event is AGW

      There are several candidates, but AGW is a relatively improbable one. “Three billion people starving” is not the same thing as a filtering event.

      • http://daedalus2u.blogspot.com/ daedalus2u

        I hope you are right that AGW is not a filtering event. There are non-implausible events associated with AGW that could be much worse than 3 billion dead. When the forests of Canada and Russia get too hot, even for a few days, they will die, then bake in the summer sun until they burn from coast to coast. Their soot will temporarily cause global cooling, but then warming will come back more severe than ever. With the vegetation gone, the soil will wash to the ocean. The next cycle of heat will be even hotter, and will kill the vegetation that sprouts from the seeds not washed to the ocean, which will also burn. When will it stop? When there is nothing left to burn, nothing left to wash to the sea.

        If the ocean circulation stops, eventually all the methane hydrate in the ocean becomes unstable due to geothermal heat. There is twice as much carbon stored as methane hydrate in the ocean as there is in all other fossil fuels combined (recoverable and unrecoverable), and methane is about 10x worse a GHG than CO2. Greenhouse effects might not be the worst thing that could happen. If the ocean becomes anoxic, bacteria may reduce sulfate to H2S. Levels in the atmosphere could reach 100 ppm, worldwide. Even plants can’t survive that.

        Any region that has a wet bulb temperature above 35 C for more than a few hours a day is not habitable by humans. People who live in such places are not going to just sit there and die peacefully. Desperate people do desperate things. Would they attempt to depopulate regions that remain habitable so they can move there? Presumably they would. If they use biological control methods, those could get out of hand.

        The quantities of carbon stored in the permafrost are not small, and they are poorly known. It is not at all clear that, once the positive feedback from its melting starts, large regions of the Earth will remain habitable.

        You might be right that AGW might not be a “filtering event”. But the mindset that treats “might not be a filtering event” the way humans are treating AGW will prevent effective response to any filtering event that is happening. Those who ignore it will have more resources with which to fight those who are trying to deal with it. It is like Easter Island. What was the person who cut down the last tree thinking? He must have been thinking that something magical was going to happen to restore the forests, or maybe something like the Rapture or the Singularity, so the forests didn’t matter. Was deforestation a “filtering event”? Not really; they did survive that, only ¾ of the population died. But that left them more vulnerable to other adverse events, each of which cut the population still more.

        Filtering events don’t have to cause extinction. Reversion to a theocratic feudal society could be irreversible. If enough genes that allow being something other than a mindless theocrat get deleted, the rest might be suppressed and eventually filtered out too.

      • Wonks Anonymous

        “What was the person who cut down the last tree thinking?”
        http://www.independent.co.uk/environment/nature/rats-not-men-to-blame-for-death-of-easter-island-431105.html

      • Hedonic Treader

        Wonks Anonymous, thanks for this link. It never for one minute occurred to me to question the historical accuracy of this gloomy parable against human hubris.

      • Wonks Anonymous

        Hedonic Treader, there is a fuller paper that the article is based on, but I forget where. You might be able to find it through googling, unless it’s been locked down since I last read it.

      • http://daedalus2u.blogspot.com/ daedalus2u

        The researcher who was quoted in the article has papers posted on his web site.

        http://www.anthropology.hawaii.edu/People/Faculty/Hunt/index.html

        His evidence is pretty convincing. That makes the slave raiding and smallpox introduction in the 19th century the actual cause of the collapse.

  • Hedonic Treader

    To filter or not to filter…

    If enough genes that allow being something other than a mindless theocrat get deleted, the rest might be suppressed and eventually filtered out too.

    You’re confusing memes and genes here. We’re not that genetically different from our medieval ancestors. Even my grandmother still believed in the existence of an invisible superghost. In case of civilization collapse without human extinction, many of our sophisticated technological, scientific and epistemological memeplexes would survive in storage media outside of human skulls. Competing future tribes or societies in a state of recovering population numbers would seek them out for curiosity’s sake or to gain technological advantages over their competition. This recovery from backup could be faster than you think.

    It would take a process that
    a) kills off all humans,
    b) destroys all memeplex storage media, or
    c) converts all available resources into high-entropy garbage that cannot be re-used as a resource by memetic trial and error.

    Anything less won’t be a filtering event.

    Given the extreme way you describe the possible outcomes of AGW, this could qualify. A very fast Venus Syndrome scenario would do the trick. But there are a lot of conditions that have to be met in conjunction (remember the conjunction fallacy) to lead to such devastating outcomes.

    If a small fraction of the human population survives, as long as their environment recovers sufficiently to sustain their basic existence, the evolutionary basis for civilization isn’t destroyed. Since Homo sapiens is globally distributed, there would probably be local tribes of survivors all over the planet. Since their high-tech influence on warfare and environment would be temporarily reduced, it is hard to see what could kill them all off simultaneously during the recovery period (barring the normal statistical risks of asteroids and supervolcanoes).

    Maybe a re-industrialization phase will be impossible due to the lack of the fossil fuels we had available. This is a possibility. OTOH, with sufficient memeplex recovery, they’ll probably be able to build an electric grid plus initial renewable energy sources relatively quickly for basic functioning, and then work from there.

    Similar recovery modes are possible after nuclear winters, asteroid hits, supervolcano outbreaks, pandemics, etc., unless a), b), or c) happens globally.

    However, as I pointed out before, I don’t think it is straightforward that we should avoid such risks. What does a filtering event mean? It means that Earth-originating life doesn’t create a galaxy-spanning or even intergalactic colonization process. I recently read a paper that estimated there could be about 10^40 additional life-years resulting from such colonization.

    That would be good, right? Maybe yes. Or maybe no. Are we talking about post-Abolitionist minds that are free from suffering by design? Are we talking about 10^30-10^40 additional torture victims? Will existence itself be voluntary for these minds? Will they desperately wish they had never been forced into existence? Can we predict this?

    Nick Bostrom and others are right in pointing out that this is far more relevant on utilitarian grounds than any other question. What I find troubling is the ease with which these authors jump to the assumption that life is probably generally worth living, and that it must therefore be good to create all these additional sentient entities. Doesn’t a number like 10^40 call for an extremely thorough analysis of the reasons and conditions under which this assumption is actually true?

  • http://daedalus2u.blogspot.com/ daedalus2u

    That sentiment would strongly argue for working to ensure that life is worth living for all sentient organisms, and that there is improvement such that each future life is more worth living than present lives. That is, to maximize the “lives worth living” metric.

    But that gets back to my first point that when the few prioritize their comfort over the needs of the many, the “lives worth living” metric is not maximized.

    I said that would ultimately be the root cause of the Great Filter, and now you are saying that, if that heuristic can’t be changed, there should be a filter so that the as-yet-unborn many don’t suffer through lives not worth living.

  • http://www.hopanon.typepad.com Hopefully Anonymous

    Hedonic Treader and Daedalus,

    The two of you are somewhat like me, so does it feel zombie-like at all for you to consider policy today on the basis of “future unborn lives”? It reminds me conceptually of St. Francis preaching to the birds.

    • Hedonic Treader

      Does it ever feel zombie-like to you to buy groceries for your not-yet-existing hungry future self – at a time when you’re not yet hungry? Conceptions of personal identity are very intuitive but can be misleading. When you delay gratification in order to do stuff for future versions of yourself, you are an altruist in a sense. Sentient affect (pleasure, pain, etc.) is a local phenomenon. It’s dubious that it is somehow bound to a kind of soul pearl or self that persists through time. Unless there is some rational defense of such a metaphysical position, buying groceries should feel just as zombie-like as thinking about the long-term future.

      There is a difference in predictability, of course.

      • http://www.hopanon.typepad.com Hopefully Anonymous

        Hedonic,
        I’m aware that almost everything increases incoherence under close examination. But I’m operating with a 3 lb brain attempting to model a huge universe.

        Doing things like buying groceries I sort as part of earthy survivalism; it’s why I’d prefer cryogenic preservation to mind uploading, and why I’d prefer SENS to either.

        We seem to have a fixed resource pie of attention, analysis, and energy to devote to both short-term and long-term challenges (let alone optimizing things for the unborn). I think the great bulk of our resources should go to our pressing short-term challenges (like the fact that you, me, and daedalus are aging!) because we probably will fail to solve them anyway, and so we might as well give it our best shot.

        $1 to solving aging and $10 to help our descendants (or our future selves) escape our sun’s future supernova seems nutty to me.

      • Hedonic Treader

        Hopefully Anonymous, I don’t care that much about aging. I’m 30 now, I take seriously the probability that I’ll be dead in 10 years, and I plan to be almost certainly dead by 60-70. I’m not looking forward to it, but I do hyperbolic discounting in my personal life.

        During this time, I enjoy my life of course. Hyperbolic discounting has a point when diminishing returns are involved. The desperate fight against aging seems an obvious example to me, as well as the accumulating probability of accidents and other personal hazards. Some indicators of personal well-being may increase while we get older, but other risks – some of them underestimated in my view – increase as well.

        I’ve known people existing in net-negative states, after having a stroke, for years, decades even. In order to reduce this risk by, say, 20%, I would give up some additional years of life. As a consequence, pre-emptive suicide becomes a rational option, and I’m quite pissed that I’m not allowed to go to a fucking drug store and buy a fucking suicide drug as a free citizen who wants to exercise self-ownership rationally.

        OTOH, hyperbolic discounting can be a mistake when you can in fact expect exponential growth of utility. Under the “happiness assumption”, which says that life in the future will be good rather than bad, reductions in existential risks would be awesome. I’m not sure I completely follow this assumption, however. (There is a good discussion thread here.)
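
        Concretely, hyperbolic discounting values a payoff V that is D years away at roughly V/(1+kD), versus the exponential V·e^(-rD) of standard expected-utility models; a tiny sketch, with k, r, and the payoff picked arbitrarily for illustration:

        ```python
        # Compare hyperbolic and exponential discounting of a future payoff.
        # k and r are arbitrary illustration values, not estimates of anyone's preferences.
        import math

        def hyperbolic(value, delay_years, k=0.1):
            return value / (1.0 + k * delay_years)

        def exponential(value, delay_years, r=0.1):
            return value * math.exp(-r * delay_years)

        for delay in (1, 10, 40):
            print(f"delay {delay:>2} y: hyperbolic {hyperbolic(100, delay):6.2f}, "
                  f"exponential {exponential(100, delay):6.2f}")
        # The hyperbolic curve falls steeply at first and then flattens out; that
        # declining discount rate is what produces present bias and time-inconsistency.
        ```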

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        You personally want to sacrifice your persistence odds for your nonsuffering/happy years of life. I get that. I think it gets more aggressive with what I interpret to be your preference to (scare quotes) “maximize happiness of the unborn over persistence odds of the fellow living”.

      • Hedonic Treader

        I understand where you’re coming from, but I really value the hedonic average + diversity more than individual survival. The reason for this is how I see the nature of consciousness and personal identity.

        I just watched Logan’s Run, and when everything exploded in the end, I thought, “And there goes utopia…”

        I’d rather live a fun life free from suffering and boredom until I’m 30 than a miserable one until I’m 300. If we can do both, I’m all for it though.

  • http://daedalus2u.blogspot.com/ daedalus2u

    It doesn’t feel zombie-like to me. I am a parent and it isn’t difficult for me to plan for unborn future generations. That is what I was doing when I raised my children to be good parents and to be good human beings. That is a reason why I chose the work I am doing, environmentally related and health related. I try to make the world a better place. The most important aspect of that is raising children who try to make the world a better place too.

    For my children and their children and their children to live good lives, there has to be a stable gene pool, a decent environment, and a sustainable economy. I am an engineer, and I am quite sure that there are no technical difficulties in having an economy that sustainably supports 10 billion people on Earth with all of them having a decent lifestyle. What keeps us from having that is politics and people problems, not technical problems. Mostly it is people putting their wants above other people’s needs. Not because they have to, but because they can.

    • http://www.hopanon.typepad.com Hopefully Anonymous

      Ok, that’s an honest answer. I sort it with “mmmm … brains” and other social aesthetics that I consider to be zombie variants.

      Basically it’s an aesthetic of how to travel to information-theoretic death, and I’m held a bit hostage to it, like Ford Prefect in Hitchhiker’s.

    • http://daedalus2u.blogspot.com/ daedalus2u

      Just to clarify, I don’t subscribe to the idea of continuity of personal conscious identity. I appreciate that it sometimes feels that way, but I have learned that feelings are not always sufficiently reliable to trust and certainly not for very long term planning.

      In thinking about my obligation to my children, I consider it to be “timeless”; that is, my obligation to them occurred before they existed, but that timeless obligation only comes into play if they actually do exist at some point. This is my motivation for preparing myself to be a good parent even before they exist. This allows me to consider their welfare over their entire lifespan, not just in the moment. Then by recursion I can apply the same considerations to their children and to the ancestors of their nth-degree children (i.e. essentially all human beings).

      This is how I think of obligations to other, as yet unborn entities too. If they ever will exist, my obligation to them is timeless and occurs before they do exist, and that obligation is occurring right now. This is also the basis for my actions today to provide for the future entity that will be instantiated within my physical body. My future self is not self-identical with my present self. My obligations to each of my future selves are timeless and not that different from my obligations to any other entities.

      The mindset of cryonics advocates, which prioritizes the survival of what they feel is a personal conscious entity, seems petty and selfish to me. It is pretty clear to me that all entities are not self-identical over their lifespan, and the changes that will occur when transferring any “entity” from a degraded and then frozen brain will (very likely) be very large compared to the differences between different living human beings that exist now.