Monthly Archives: June 2011

Sleeping Beauty’s Assistant

The Sleeping Beauty problem:

Sleeping Beauty goes into an isolated room on Sunday and falls asleep. Monday she awakes, and then sleeps again Monday night. A fair coin is tossed, and if it comes up heads then Monday night Beauty is drugged so that she doesn’t wake again until Wednesday. If the coin comes up tails, then Monday night she is drugged so that she forgets everything that happened Monday – she wakes Tuesday and then sleeps again Tuesday night. When Beauty awakes in the room, she only knows it is either heads and Monday, tails and Monday, or tails and Tuesday. Heads and Tuesday is excluded by assumption. The key question: what probability should Beauty assign to heads when she awakes?

The literature is split: most answer 1/3, but some answer 1/2 (and a few give other answers). Here is an interesting variation:

Imagine Sleeping Beauty has a (perhaps computer-based) assistant. Like Beauty, the assistant’s memory of Monday is erased Monday night, but unlike Beauty, she is not kept asleep on Tuesday, even if the coin comes up heads. So when Beauty is awake her assistant is also awake, and has exactly the same information about the coin as does Beauty. But the assistant also has the possibility of waking up to see Beauty asleep, in which case the assistant can conclude that it is definitely heads on Tuesday. The key question: should Beauty’s beliefs differ from her assistant’s?

Since the assistant knows that she might awake to see Beauty asleep, and so conclude heads for sure, the fact that she does not see this gives her information. This info should shift her beliefs away from heads, leaving the assistant’s new belief in heads less than half. (If she initially assigned an equal chance to waking Monday versus Tuesday, her new belief in heads is one third.) And since when Beauty awakes she seems to have exactly the same info as her assistant, Beauty should also believe less than half.
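For the skeptical reader, here is a minimal Monte Carlo sketch of the assistant’s update, assuming (per the parenthetical above) equal prior chances of a Monday versus a Tuesday awakening:

```python
import random

def one_awakening():
    """One awakening of the assistant: returns (coin, beauty_awake)."""
    coin = random.choice(["heads", "tails"])
    day = random.choice(["Monday", "Tuesday"])  # the assistant wakes both days
    # Beauty is awake every Monday, but on Tuesday only if the coin was tails.
    beauty_awake = (day == "Monday") or (coin == "tails")
    return coin, beauty_awake

N = 1_000_000
awake = heads_and_awake = 0
for _ in range(N):
    coin, beauty_awake = one_awakening()
    if beauty_awake:  # condition on what the assistant actually observes
        awake += 1
        heads_and_awake += (coin == "heads")

print(heads_and_awake / awake)  # ~0.333: belief in heads, given Beauty awake
```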

I can’t be bothered to carefully read the many papers on the Sleeping Beauty problem to see just how original this variation is. Katja tells me it is a variation on an argument of hers, and I believe her. But I’m struck by a similarity to my argument for common priors based on the imagined beliefs of a “pre-agent” who existed before you, uncertain about your future prior:

Each agent is asked to consider the information situation of a “pre-agent” who is not sure which agents will get which priors. Each agent can have a different pre-agent, but each agent’s prior should be consistent with his pre-agent’s “pre-prior,” in the sense that the prior equals the pre-prior conditional on the key piece of information that distinguishes them: which agents actually get which priors. The main result is that an agent can only have a different prior if his pre-agent believed the process that produced his prior was special. (more)

I suggest we generalize these examples to a rationality principle:

The Assistant Principle: Your actual beliefs should match those of some imaginable rational (perhaps computer-based) assistant who lived before you, who will live after you, who would have existed in many other states than you, and who came to learn all you know when you learned it, but was once highly uncertain.

That is, there is something wrong with your beliefs if there is no imaginable assistant who would now have exactly your beliefs and info, but who also would have existed before you, knowing less, and has rational beliefs in all related situations. Your beliefs are supposed to be about the world out there, and only indirectly about you via your information. If your beliefs could only make sense for someone who existed when and where you exist, then they don’t actually make sense.

Added 8a: Several helpful commenters show that my variation is not original – which I consider to be a very good thing. I’m happy to hear that academia has progressed nicely without me! 🙂

Stross on Singularity

I’ve long enjoyed the science fiction novels of Charlie Stross, so I’m honored that he linked to my Betterness Explosion from his Three arguments against the singularity:

I periodically get email from folks who, having read “Accelerando”, assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. … It’s time to set the record straight. … Santa Claus doesn’t exist. …

(Economic libertarianism is based on … reductionist … 19th century classical economics — a drastic over-simplification of human behaviour. … If acted upon, would result in either failure or a hellishly unpleasant state of post-industrial feudalism.) …

I can’t prove that there isn’t going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won’t work, or that we are or aren’t living in a simulation. … However, … the prospects aren’t good.

First: super-intelligent AI is unlikely because … human-equivalent AI is unlikely. … We’re likely to leave out … needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own. … We clearly want machines that perform human-like tasks. … But whether we want them to be conscious and volitional is another question entirely.

Uploading … is not obviously impossible. … Imagine most of the inhabited universe has been converted to a computer network, … programs live side by side with downloaded human minds and accompanying simulated human bodies. … A human mind would lumber about in a massively inappropriate body simulation. … I strongly suspect that the hardest part of mind uploading … [is] the body and its interactions with its surroundings. …

Moving on to the Simulation Argument: … anyone capable of creating an ancestor simulation wouldn’t be focussing their attention on any ancestors as primitive as us. … This is my take on the singularity: we’re not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we’re going to see is increasingly solicitous machines defining our environment … We may eventually see mind uploading, but … our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it. …

The simulation hypothesis … we can’t actually prove anything about it. … Any way you cut these three ideas, they don’t provide much in the way of referent points for building a good life. … It’s unwise to live on the assumption that they’re coming down the pipeline within my lifetime.

Alas Stross’s post is a bit of a rant – strong on emotion, but weak on argument. Maybe Stross did or will explain more elsewhere, but while he makes clear that he doesn’t want to associate with singularity fans, Stross doesn’t make clear that he actually disagrees much. Most thoughtful singularity fans probably agree that where possible hand-coded AI would be designed to be solicitous and avoid human failings, that simple unmodified upload minds are probably not competitive creatures in the long run, and that only a tiny fraction of our distant descendants would be interested in simulating us. (We libertarian-leaning economists even agree that classical econ greatly simplifies.)

But the fact that hand-coded AIs would differ in many ways from humans says little on the key issues of when AIs will appear, how fast they’d improve, how local that growth would be, and how fast the world economy would grow as a result. The fact that eventually unmodified human uploads would not be competitive says little on the key issues of whether uploads come before powerful hand-coded AI, how long nearly unmodified uploads would dominate, or just how far from humans the most competitive creatures would be. And the fact that few descendants would simulate ancestor humans says little on the key question of how that small fraction multiplied by the vast number of descendants compares to the actual number of ancestor humans. (And the fact that classical econ greatly simplifies says little on the pleasantness of libertarian policies.)

Stross seems smart and well-read enough to have interesting things to say on these key questions, if only he can overcome his personal revulsion against affiliating with singularity fans enough to directly engage them.

Regulating Cool

The [US FDA] unveiled a plan designed … to shock customers with images of tobacco’s impact: sick smokers exhaling through a tracheotomy hole, struggling for breath in an oxygen mask and lying dead on a table with a long chest scar. Starting next year, cigarette cartons, packs and advertising will feature these and six other graphic warnings, replacing the discreet admonitions that cigarette manufacturers have been required to offer since 1966. …

Some of the images, particularly the warning depicting a diseased mouth, are specifically aimed at dispelling the notion for teens that smoking is cool. “We want kids to understand smoking is gross, not cool, and there’s really nothing pretty about having mouth cancer or, you know, making your baby sick if you smoke,” said FDA Commissioner Margaret A. Hamburg. “So some of these are very driven to dispelling the notion that somehow this is cool, and makes you cool.” (more)

Pause to consider the logic here. We decide it is not a good idea to let the government ban this product, or to require a doctor’s prescription to consume it. We think everyone should be allowed to consume it if they choose. But we also decide it is a good idea to let government decide if this product can seem “cool.” In general, the idea must be that if people see the wrong things as cool, the government can require appearance changes, changes the government guesses will make those overly-cool things seem less cool.

For example, if too many kids see not going to college as cool, well then maybe only college students and graduates should be allowed to wear certain sorts of cool clothing. Or if too many think going to the beach is cool, resulting in too much skin cancer, we could broadcast uncool music at the beach.

The basic question is when the government should ban an activity versus merely discourage it, and what sorts of discouragement it should wield. Discouraging an activity by reducing its appearance of “cool” seems to me especially hard for distant, slow federal regulators to manage — what things seem “cool” often varies in quite subtle ways over short times and between subcultures. Is there any argument that this sort of discouragement is especially useful, to compensate for such added difficulty?

Actually, I see a fundamental contradiction in the idea of government regulating “cool.” While we have many social processes which tell us about what others might approve or disapprove, the “cool” process seems inherently decentralized, and not to be mediated by authorities. We the masses are supposed to each decide what we think is “cool,” and we are not supposed to accept declarations by teachers, employers, etc. on the subject. Whatever authorities recommend as a good idea, it can only accidentally be “cool.”

“Cool” just doesn’t seem the sort of thing government can actually regulate.

The Betterness Explosion

We all want the things around us to be better. Yet today billions struggle year after year to make just a few things a bit better. But what if our meagre success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory? Well then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc. More important, that might help him make his betterness theory even better.

After several iterations this better person might have a much better betterness theory. Then he might quickly make everything around him much better. Not just better looking hair, better jokes, or better sleep. He might start a better business, and get better at getting investors to invest, customers to buy, and employees to work. Or he might focus on making better investments. Or he might run for office and get better at getting elected, and then make his city or nation run better. Or he might create a better weapon, revolution, or army, to conquer any who oppose him.

Via such a “betterness explosion,” one way or another this better person might, if so inclined, soon own, rule, or conquer the world. Which seems to make it very important that the first person who discovers the first good theory of betterness be a very nice generous person who will treat the rest of us well. Right?

OK, this might sound silly. After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?

But a bunch of smart well-meaning folks actually do worry about a scenario that seems pretty close to this one. Except they talk about “intelligence” instead of “betterness.” They imagine an “intelligence explosion,” by which they don’t just mean that eventually the future world and many of its creatures will be more mentally capable than us in many ways, or even that the rate at which the world makes itself more mentally capable will speed up, similar to how growth rates have sped up over the long sweep of history. No, these smart well-meaning folks instead imagine that once someone has a powerful theory of “intelligence,” that person could create a particular “intelligent” creature which is good at making itself more “intelligent,” which then lets that creature get more “intelligent” about making itself “intelligent.” Within a few days or weeks, the story goes, this one creature could get so “intelligent” that it could do pretty much anything, including taking over the world.

I put the word “intelligence” in quotes to emphasize that the way these folks use this concept, it pretty much just means “betterness.” (Well, mental betterness, but most of the betterness we care about is mental.) And this fits well with common usage of the term “intelligence.” When we talk about machines or people or companies or even nations being “intelligent,” we mainly mean that such things are broadly mentally or computationally capable, in ways that are important for their tasks and goals. That is, an “intelligent” thing has a great many useful capabilities, not some particular specific capability called “intelligence.” To make something broadly smarter, you have to improve a wide range of its capabilities. And there is generally no easy or fast way to do that.

Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities. For example, if you drug a person so that they can hardly think, then getting rid of that drug can suddenly improve a great many of their mental abilities. But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare.

All of which is to say that fearing that a new grand unified theory of intelligence will let one machine suddenly take over the world isn’t that different from fearing that a grand unified theory of betterness will let one better person suddenly take over the world. This isn’t to say that such a thing is impossible, but rather that we’d sure want some clearer indications that such a theory even exists before taking such a fear especially seriously.

Travel Made Humans

I hadn’t till now appreciated how central long distance travel was to early human evolution. A 2004 Nature article:

No primates other than humans are capable of endurance running. … Well-conditioned human runners … can occasionally outrun horses over the extremely long distances that constrain these animals to optimal galloping speeds, typically a canter. … Horses have … narrow ranges of preferred speeds for trotting and galloping and gait transitions that minimize cost. … Human runners differ from horses in employing a single gait. … Humans are thus able to adjust running speed continuously without change of gait or metabolic penalty over a wide range of speeds. …

Considering all the evidence together, it is reasonable to hypothesize that Homo evolved to travel long distances by both walking and running… Endurance running is not common among modern hunter-gatherers, who employ many technologies to hunt (for example, bows and arrows, nets and spearthrowers), thereby minimizing the need to run long distances. But Carrier has hypothesized that endurance running evolved in early hominids for predator pursuit before these inventions in the Upper Palaeolithic (about 40kya). Endurance running may have helped hunters get close enough to throw projectiles, or perhaps even to run some mammals to exhaustion in the heat. …

Another hypothesis to explore is … in the open, semi-arid environments … early Homo may … have needed to run long distances to compete with other scavengers, including other hominids. … Similar strategies of ‘pirating’ meat from carnivores are sometimes practised by the Hadza in East Africa. … It is known that major increases in encephalization occurred only after the appearance of early Homo. … Endurance running may have made possible a diet rich in fats and proteins thought to account for the unique human combination of large bodies, small guts, big brains and small teeth.

A 2009 Evolutionary Anthropology article on “The Emergence of Human Uniqueness”:

Important preadaptations in the genus Homo … led to human uniqueness. First, hominins are bipedal and, as a result, cover geographical ranges far larger than other apes do. Even hunter-gatherers living in tropical forests have daily home ranges that are two to three times those of chimpanzees, and lifetime home ranges more than two orders of magnitude greater. Thus, individual hominins faced more environmental variability than do chimpanzees. … This would favor social learning capacity.

Second, bipedal hominins evolved exceptional manual dexterity because their hands were freed from locomotory constraints, and they could carry tools with little cost. This would have favored increased tool using and making behavior and probably increased selection pressure on imitative capacities as well. Third, by at least 2 million years ago, hominins had begun to depend on high-quality, widely dispersed resources that were difficult to obtain. This shift of feeding niche had important life-history implications. Juveniles could not fully feed themselves due to the complexity of the extractive niche, and this led to their provisioning by close kin. As large package foods became common, the foods returned to the juvenile home base were probably “shared” by coresidents. This … might partially explain why hunter-gatherers experience early adult mortality at one-fifth the rate of wild chimpanzees. That pattern would favor a life history with later age at maturity and delayed onset of senescence. …

Revised Growth Claim

Me in March:

Non-democracies seem more our future than democracies, because while the two groups have the same average economic growth rates, non-democracy rates vary more, and high rates dominate. … Whenever you have a portfolio of items with different (log) growth rates, your portfolio’s long run average return is dominated by the portfolio items with the highest average rates.

William Easterly in May (see also):

The large literature on growth regressions has demonstrated that there is no easy answer to separating out the partial correlation of one particular variable from a long list of other equally plausible variables. The scope for specification searching leads to results that are not credible. The same situation holds here. Indeed, it is not hard with the above variables to produce regressions either showing autocracy to be statistically significant with other controls or statistically insignificant.

His point is valid. So let me revise my claim: Some nations appear to have more variance in general, which manifests itself in more variation in growth, leaders, and forms of governance. What exactly causes this higher variance is unclear, as is how much some kinds of variance encourage other kinds. It makes sense for democracy to reduce policy variance, but we can’t tell how important is that channel relative to other channels.

But whatever causes this variance, the higher variance set of nations should eventually dominate wealth, because peak growth rates dominate wealth. And if having dictators more often continues to be part of that mix, then rich nations will eventually have a lot of dictators among them.
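To see the portfolio logic in made-up numbers (the rates, horizon, and group sizes below are illustrative assumptions, not estimates), give each nation a fixed log growth rate, drawn with the same mean in both groups but with more variance in one group, and compare total group wealth after many years:

```python
import math, random

random.seed(1)
T = 500                     # years of compounding
n = 100                     # nations per group
mean = 0.02                 # same mean log growth rate in both groups
lo_sd, hi_sd = 0.005, 0.03  # low-variance vs. high-variance group

def total_wealth(sd):
    """Each nation keeps one fixed log growth rate; wealth starts at 1."""
    rates = [random.gauss(mean, sd) for _ in range(n)]
    return sum(math.exp(r * T) for r in rates)

# The high-variance group ends up vastly richer, because its total wealth is
# dominated by its few highest-rate members -- even though mean rates match.
print(total_wealth(hi_sd) / total_wealth(lo_sd))  # >> 1
```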

Raid The Rich

Me in March on Why Track Trends?:

Tyler’s thesis is that the US has slower growth than decades ago because we’ve used up the low hanging fruits. … My grad school (Caltech) didn’t teach macro … so I’ll stay agnostic for now. But what I can speak to is how little such trend analysis or projection matters, at least for most economic policy.  … The question of which institutions will most increase economic welfare rarely depends much on the exact values of the sorts of parameters social scientists and the media track with such enthusiasm and concern.

Me 1.4 years ago on Enable Raiders!:

A robust, properly functioning market for corporate control is vital to the performance of a free-enterprise economy. …

It is hard to exaggerate how very important this is – we’d be so much richer now if it had long been easier for raiders to take over public firms. We now put many inexcusable obstacles (listed below) before such raiders, including disclosure, super-majority, poison pill, and merging delay rules.

Today’s Post:

Growing income disparity in the United States … has reached levels not seen since the Great Depression. In 2008, … the [140,000 member] top 0.1 percent of earners make about $1.7 million or more, including capital gains. Of those, 41 percent were executives, managers and supervisors at non-financial companies, … with nearly half of them deriving most of their income from their ownership in privately-held firms. An additional 18 percent were managers at financial firms or financial professionals at any sort of firm. …

In 1975 … the top 0.1 percent of earners garnered about 2.5 percent of the nation’s income, including capital gains. … By 2008, that share had quadrupled and stood at 10.4 percent. … The share of the income commanded by the top 0.01 percent rose from 0.85 percent to 5.03 percent over that period. For the 15,000 families in that group, average income now stands at $27 million.

The inquiring mind is surely curious to know who exactly are today’s super-rich, and how much richer they are now than before. But good policy is mostly about good institutions, which just shouldn’t depend much on such parameters. If you worry that managers get paid more than they contribute to firm value, a robust solution is to strengthen competition for corporate control, so raiders can take over and then fire overpaid managers. Trying to independently determine manager contribution is far, far harder.

If you worry instead about how much managers respond to taxes by reducing their efforts or moving to other jurisdictions, that also probably doesn’t depend much on just how rich they are or how much that has changed in recent decades. Wanting to tax managers more because you learned that they made more money than you thought seems much more like envy than neutral efficiency analysis.

I’m A Sim, Or You Aren’t

The simulation argument says that if our future descendants create enough (detailed) computer simulations of their ancestors, then you and I are likely to actually be such simulations living in a simulated world, instead of being the year 2011 humans we think we are. My simple variation on this argument concludes that either 1) ordinary people are pretty surely not simulations, or 2) very interesting people pretty surely are simulations. Add one plausible assumption, and both of these claims become true!

Now for details. Here is a standard simulation argument:

The [number] of all observers in the universe with human-type experiences that are living in [entire-history] computer simulations [is p*N*H.] … Here p is the fraction of all human-level technological civilizations that manage to reach a posthuman stage, N is the average number of times a posthuman civilization runs a simulation of its entire ancestral history, and H is the average number of individuals that have lived in a civilization before it reached a posthuman stage. (more)

So if p*N > 1, then most human-type experiences (a fraction p*N/(p*N + 1) of them) are actually ancestor simulations, and hence your experience as a human is likely to actually be a simulation experience. Thus we might conclude:

At least one of three propositions is true:

  1. [p << 1] The human species is very likely to go extinct before reaching a posthuman stage
  2. [N << 1] The fraction of posthuman civilizations that are interested in running a significant number of ancestor simulations is extremely small.
  3. [p*N >> 1] We are almost certainly living in a computer simulation. (more)

However, if we call M the average number of human ancestors simulated by each posthuman civilization, then I expect  M >> N*H. That is, I expect far more simulated humans in general than those specifically in “a simulation of [the] entire ancestral history.” Today, small-scale coarse simulations are far cheaper than large-scale detailed simulations, and so we run far more of the first type than the second. I expect the same to hold for posthuman simulations of humans – most simulation resources will be allocated to simulations far smaller than an entire human history, and so most simulated humans would be found in such smaller simulations.

Furthermore I expect simulations to be quite unequal in who they simulate in great detail – pivotal “interesting” folks will be simulated in full detail far more often than ordinary folks. In fact, I’d guess they’d be simulated over a million times more often. Thus from the point of view of a very interesting person, the chances that that person is in a simulation should be more than a million times the chances from the point of view of an ordinary person. From this we can conclude that either:

  1. Ordinary people can be pretty sure that they are not in a simulation, or
  2. Very interesting people can be pretty sure that they are in a simulation.

If, for the purpose of a dramatic blog post title, we presume (probably incorrectly) that I’m a very interesting person, and that you the reader are an ordinary person, then the conclusion becomes: I’m a sim, or you are not.

Furthermore, both of these statements would apply if:

  • p*M, the expected number of simulated humans per human civilization, is within a factor F (e.g., a thousand) of the number of actual humans H, and if
  • interesting folks are simulated more than F² times as often as ordinary folks.

So unless p*M is so different from H that everyone can be pretty sure they are a simulation, or pretty sure that they are not, ordinary people can be sure they are not while very interesting people can be sure they are.
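To put made-up numbers on that conclusion (all parameter values below are my illustrative assumptions): suppose total sims roughly match real humans (p*M ≈ H, well within any factor F of a thousand), one real human in ten thousand is “interesting,” and interesting folks are simulated F² = a million times as often per person as ordinary folks:

```python
# All numbers here are illustrative assumptions, not estimates from the post.
F2 = 1e6         # interesting folks are simulated F^2 = 1e6 times as often
frac_int = 1e-4  # fraction of real humans who are "interesting"
R = 1.0          # total sims per real human, p*M/H (well within factor F of 1)

# Sims per ordinary real person (a) and per interesting real person (b = F2*a)
# must together account for R sims per real human:
a = R / ((1 - frac_int) + F2 * frac_int)
b = F2 * a

p_sim = lambda r: r / (1 + r)  # posterior chance of being a sim, given ratio r
print(f"ordinary:    {p_sim(a):.4f}")   # ~0.0098 -- pretty sure not a sim
print(f"interesting: {p_sim(b):.4f}")   # ~0.9999 -- pretty sure a sim
```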

Wolfers Gets Loopy

Over the years I’ve not only met folks who do drugs, I’ve met folks who’ve had deep mystical experiences on drugs. They have told me that their drug experiences made them feel sure the physical world we see around us just can’t be all there is — they’ve touched something deeper and more important. When asked how exactly a weird drug experience could possibly count as evidence on basic physics, they have little coherent to say. It seems their subconscious just told them this abstract conclusion, and they can’t not believe a cocksure subconscious. Even one on drugs.

Druggies might say such things in private, but it is much rarer to hear a professional physicist say them in public. Odd then to hear professional economist Justin Wolfers say his near-mystical parenting experience makes him doubt standard econ:

I learned economics in my twenties, before I became a dad. … Hard math and complex models … exploring the basic idea … that people are purposeful, analytic decision makers. … I had always believed in the analytic self; I was rational, calculating, and tried to make smart decisions. Of course real people don’t use math, but I figured that we’re still weighing costs and benefits just as our models say. …

Today, I’m not so sure. My feelings toward my daughter Matilda aren’t easily expressed in analytic terms. … Her laugh is the greatest joy, and it thrills me that she shares it with me. … She’s central not only to my life, but to who I am. There’s something new and strange about all this. Today, I feel the powerful force of biology. It’s visceral; it’s real; it’s hormonal, and it’s not in our economic models. I’m helpless in the face of feelings that overwhelm me.

Yes, I know that a twenty-something reader will cleverly point out that I just need to count kids as a good which yields utility, or perhaps we need to add a state variable to the utility function as in rational addiction models. But that’s not the point. I’m surprised by how little of this I’ve consciously chosen. While the economic framework accurately describes how I choose an apple over an orange, it has had surprisingly little to say about what has been the most important choice in my life.

I’m a committed neoclassical economist. … But what kind of economists would we be if we learned our economics only after we were parents? It’s an interesting thought experiment, and truth is, I don’t know the answer. … Slivers of evidence—my own introspection, conversations with other economist-parents … —all tell me that it would be different. (more)

I don’t need to speculate – I am exactly that kind of economist. I started econ grad school with two kids, ages 0 and 2, and had no undergrad econ. I’ve seen a lot of the parenting cycle – my youngest graduates from high school tomorrow. My kids are central to who I am, and I’ve known well feelings that are visceral, hormonal, and that overwhelm me.

But none of that makes me doubt the value of neoclassical econ. How could it? First, econ makes sense of a complex social world by leaving important things out, on purpose – that is the point of models, to be simple enough to understand. More important, econ models almost never say anything about consciousness or emotional mood – they don’t at all assume people choose via a cold calculating mindset, or even that they choose consciously. As long as choices (approximately) fit certain consistency axioms, some utility function captures them. So how could discovering emotional and unconscious choices possibly challenge such models?
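To make that consistency-axioms point concrete, here is a toy sketch (my own illustration, with made-up options): over any finite set of alternatives, a complete and transitive ranking is captured by some utility function, such as simple rank order, no matter how the ranking feels from the inside:

```python
from functools import cmp_to_key

alternatives = ["apple", "orange", "have a child", "write a model"]

def prefer(a, b):
    """Stand-in preference oracle: negative means a is preferred to b."""
    ranking = ["have a child", "apple", "orange", "write a model"]  # made up
    return ranking.index(a) - ranking.index(b)

# Any complete, transitive ranking yields a utility function: rank order.
best_first = sorted(alternatives, key=cmp_to_key(prefer))
utility = {alt: len(best_first) - i for i, alt in enumerate(best_first)}
print(utility)
# Maximizing this utility reproduces every choice prefer() would make,
# whether or not the chooser ever consciously calculates anything.
```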

Having an emotional parenting experience is as irrelevant to the value of neoclassical econ as having a mystical drug experience is to the validity of basic physics. Your subconscious might claim otherwise, but really, you don’t have to believe it.

Added 11p: Wolfers is usually an excellent economist, and here he seems to realize he is acting a bit loopy. This suggests a “religious” scenario, where someone tries to show devotion via a willingness to believe extreme things. Wolfers feels a new strong attachment to his family, and shows it by a willingness to change related beliefs in an extreme way. Being an economist, one of the biggest beliefs he can sacrifice on this altar is his belief in the standard economic framework. So Wolfers says that his new family attachment has made him question this framework.

Added 22June: Wolfers responds here.

Space vs. Time Genocide

Consider two possible “genocide” scenarios:

  • Space Genocide – We expect the galaxy to have many diverse civilizations, with diverse behaviors and values, though we don’t know much about them. Their expansion tendencies would naturally lead to a stalemate, with different civilizations controlling different parts of the galaxy. Imagine, however, that it turns out we luckily have a chance to suddenly destroy all other civilizations in the galaxy, so that our civilization can expand to take it all over. (Other galaxies remain unchanged.) Let this destruction process be mild, such as sudden unanticipated death or a sterility allowing one last generation to live out its life. There is a modest (~5%) chance we will fail, and if we fail, all civilizations in the galaxy are destroyed. Should we try this option?
  • Time Genocide – As their tech and environments changed, our distant ancestors evolved differing basic behaviors and values to match. We expect that our distant descendants will also naturally evolve different basic behaviors and values to match their changing tech and environments. Imagine, however, that it turns out we luckily have a chance to suddenly prevent any change in basic behaviors and values of our descendants from this day forward. If we succeed, we prevent the existence of descendants with differing basic behaviors and values, replacing them with creatures much like us. There is a modest (~5%) chance we will fail, and if we fail, all our descendants will be destroyed or exist in a mostly worthless state. Should we try this option?

Probably, more people can accept or recommend time genocide than space genocide, even if success in both scenarios prevents the existence of a similar number of relatively alien creatures, to be replaced by a similar number of creatures more like us. This seems related to our tending to admire time-stretched civilizations (e.g., Rivendell) more than space-stretched civilizations (e.g., Trantor), even though space-stretched ones seem objectively more prosperous. But what exactly is the relation?

The common thread, I suspect, is that the far future seems more far, in near/far concrete/abstract terms, than situations far away in space, or in the far past. The near/far distinction was first noticed in how people treated the future differently, and our knowing especially little detail about the future makes it especially easy to slip into abstract thought about the future.

As we are less practical, more idealistic, and more uncompromising in far mode, we see civilizations time-stretched into the future as more ideal, and we are more willing to commit genocide to achieve our ideals regarding such a civilization, even at a substantial risk.

Of course the future isn’t actually any less detailed than the past or places far away in space. And there isn’t any good reason to hold the far future to higher ideals now than we’d be inclined to want when the future actually arrives. If so, time genocide should be no more morally acceptable than space genocide. Beware the siren song of shiny far future thought.
