Tag Archives: Disaster

Stock Vs. Flow War

When our farmer ancestors warred, they often went about as far as they could in applying all available resources to their war efforts. This included converting plowshares into swords, ships into navies, farmers into soldiers, granaries into provisions for armies on the move, good will into allies, and cash into foreign purchases. When wars went long and badly, such resources were often quite depleted by the end. Yet warring farmers only rarely went extinct. Why?

The distinction between stock and flow is a basic one in engineering and finance. Stocks allow flows. A granary is a stock, and it can produce a flow of grain to eat, but that flow will end if the stock is not sufficiently replenished with every harvest. A person is a stock, which can produce work every week, but to make that last we need to create and train new people. Many kinds of stocks have limits on the flows they can produce. While you might be able to pull grain from a granary as fast as you like, you can only pull one hour of work from a worker per hour.
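
To make the flow-limit idea concrete, here is a minimal toy model of my own (nothing from the post itself): two equal stocks face the same wartime demand, but only one of them has a cap on its per-period flow.

    def draw(stock, demand, max_flow=None):
        """Take up to `demand` units from `stock`, respecting an optional per-period flow cap."""
        flow = min(stock, demand) if max_flow is None else min(stock, demand, max_flow)
        return stock - flow, flow

    granary, labor = 1000.0, 1000.0   # stored grain vs. a worker's future work-hours
    wartime_demand = 400.0            # units demanded per period during a war

    for period in range(3):
        granary, grain_flow = draw(granary, wartime_demand)             # no cap: drain as fast as demanded
        labor, work_flow = draw(labor, wartime_demand, max_flow=40.0)   # capped: ~one hour per hour
        print(period, grain_flow, work_flow, round(granary), round(labor))
    # The uncapped granary empties within a few periods; the stock of labor barely budges.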

Natural limits on the flows that our stocks can produce have in the past limited the destructiveness of war. Even when war burned the crops, knocked down stone buildings, and killed most of the people, farmland usually bounced back in a few years, and human and animal populations could grow back in a few generations. Stones were restacked to make new buildings. The key long-term stocks of tech and culture were preserved, allowing for a quick rebuilding of previous professions, towns, and trade routes.

Future technologies are likely to have weaker limits on the conversion of stocks into flows. When we have more fishing boats we can more quickly deplete the stock of fish. Instead of water wheels that must wait for water to come down a stream, we make dams that give us water when we want. When we tap oil wells instead of killing whales for oil, the rate at which we can extract oil grows with the size and number of our wells. Eventually we may tap the sun itself not just by basking in its sunlight, but by uplifting its material and running more intense fusion reactors.

Our stronger abilities to turn stocks into flows can be great in peacetime, but they are problematic in wartime. Yes, the side with stronger abilities gains an advantage in war, but after a fierce war the stocks will be lower. Thus improving technology is making war more destructive, not just by blowing up more with each bomb, but by allowing more resources to be tapped more quickly to support war efforts.

This is another way of saying what I was trying to say in my last post: improving tech can make war more destructive, increasing the risk of extinction via war. When local nature was a key stock, diminishing returns in extracting resources from nature limited how much we could destroy during total war. In contrast, when resources can be extracted as fast and easily as grain from a granary, war is more likely to take nearly all of the resources.

Future civilizations will likely make resources ever more accessible, not just extracting more kinds of slow flows, but also extracting fast flows more cheaply. While this will make it easier to use such stocks flexibly in peacetime, it also suggests a faster depletion of stocks during total war. Only the stocks that cannot be depleted, like technology and culture, may remain. And once the sun itself is available as a rapidly depletable resource, it may not take many total wars to deplete it.

This seems to me our most likely future great filter, and thus extinction risk. War becomes increasingly destructive, erasing stocks that are not fully replenished between wars, and often taking us to the edge of a small fragile population that could be further reduced by other disasters. And if the dominant minds and cultures speed up substantially, as I expect, that might speed up the cycle of war, allowing less time to recover between total wars.


Beware General Visible Prey

Charles Stross recently on possible future great filters:

So IO9 ran a piece by George Dvorsky on ways we could wreck the solar system. And then Anders Sandberg responded in depth on the subject of existential risks, asking what conceivable threats have big enough spatial reach to threaten an interplanetary or star-faring civilization. … The implication of a [future great filter] is that it doesn’t specifically work against life, it works against interplanetary colonization. … much as Kessler syndrome could effectively block all access to low Earth orbit as a side-effect of carelessly launching too much space junk. Here are some example scenarios: …

Simplistic warfare: … Today’s boringly old-hat chemical rockets, even in the absence of nuclear warheads, are formidably destructive weapons. … War, or other resource conflicts, within a polity capable of rapid interplanetary or even slow interstellar flight, is a horrible prospect.

Irreducible complexity: I take issue with one of Anders’ assumptions, which is that a multi-planet civilization is … not just … distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. … I doubt that we could make a self-sufficient habitat that was capable of maintaining its infrastructure and perpetuating and refreshing its human culture with a population any smaller than high-single-digit millions. … Building robust self-sufficient off-world habitats … is vastly more expensive than building an off-world outpost and shipping rations there, as we do with Antarctica. …

Griefers: … All it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era. (more)

These are indeed scenarios of concern. But I find it hard to see how, by themselves, they could add up to a big future filter.


One In A Billion?

At CATO Unbound this month, David Brin’s lead essay makes two points:

  1. We probably shouldn’t send messages out to aliens now on purpose, and even more surely we shouldn’t let each group decide for itself whether to send.
  2. The lack of visible aliens may be explained in part via a strong tendency of all societies to become “feudal”, with elites “suppressing merit competition and mobility, ensuring that status would be inherited” and resulting in “scientific stagnation.”

In my official response at CATO Unbound, I focus on the first issue, agreeing with Brin, and responding to a common counter-argument, namely that we now yell to aliens far more by accident than on purpose. I ask if we should cut back on accidental yelling, which we now do most loudly via the Arecibo planetary radar. Using the amount we spend on Arecibo yelling to estimate the value we get there, I conclude:

We should cut way back on accidental yelling to aliens, such as via Arecibo radar sending, if continuing at current rates would over the long run bring even a one in a billion chance of alerting aliens to come destroy us. And even if this chance is now below one in a billion, it will rise with time and eventually force us to cut back. So let’s start now to estimate such risks, and adapt our behavior accordingly. (more)

As an aside, I also note:

I’m disturbed to see that a consensus apparently arose among many in this area that aliens must be overwhelmingly friendly. Most conventional social scientists I know would find this view quite implausible; they see most conflict as deeply intractable. Why is this kind-aliens view then so common?

My guess: non-social-scientists have believed modern cultural propaganda claims that our dominant cultures today have a vast moral superiority over most other cultures through history. Our media have long suggested that conflictual behaviors like greed, theft, aggression, revenge, violence, war, destruction of nature, and population growth pressures all result from “backward” mindsets from “backward” cultures.


Hope For A Lumpy Filter

The great filter is the sum total of all of the obstacles that stand in the way of a simple dead planet (or similar sized material) proceeding to give rise to a cosmologically visible civilization. As there are 2^80 stars in the observable universe, and 2^60 within a billion light years, a simple dead planet faces at least roughly 60 to 80 factors of two obstacles to birthing a visible civilization within 13 billion years. If there is panspermia, i.e., a spreading of life at some earlier stage, the other obstacles must be even larger by the panspermia life-spreading factor.

We know of a great many possible candidate filters, both in our past and in our future. The total filter could be smooth, i.e. spread out relatively evenly among all of these candidates, or it could be lumpy, i.e., concentrated in only one or a few of these candidates. It turns out that we should hope for the filter to be lumpy.

For example, imagine that there are 15 plausible filter candidates, 10 in our past and 5 in our future. If the filter is maximally smooth, then given 60 total factors of two, each candidate would have four factors of two, leaving twenty in our future, for a net chance for us now of making it through the rest of the filter of only one in a million. On the other hand, if the filter is maximally lumpy, and all concentrated in only one random candidate, then we have a 2/3 chance of facing no filter at all in our future. Thus a lumpy filter gives us a much better chance of making it.
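
The arithmetic of that toy example, spelled out as a short sketch (same assumed numbers: 60 factors of two, 15 candidates, 5 of them in our future):

    total_factors, candidates, future_candidates = 60, 15, 5

    # Maximally smooth: every candidate carries an equal share of the filter.
    per_candidate = total_factors / candidates                  # 4 factors of two each
    p_smooth = 0.5 ** (per_candidate * future_candidates)       # 2**-20, about one in a million

    # Maximally lumpy: the whole filter sits in one randomly chosen candidate,
    # which lies in our past with probability 10/15, leaving no future filter at all.
    p_past = 1 - future_candidates / candidates
    p_lumpy = p_past * 1.0 + (1 - p_past) * 0.5 ** total_factors   # about 2/3

    print(p_smooth, p_lumpy)   # ~9.5e-07 vs ~0.67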

For “try-try” filter steps, a system can keep trying over and over until it succeeds. If a set of hard try-try steps must all succeed within the window of life on Earth, then the actual times taken to complete each step are drawn from roughly the same distribution, and so tend to be of similar length. The time remaining in the window after the last step is also drawn from a similar distribution.
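
A quick Monte Carlo sketch of this claim (my own toy model, with made-up parameters): condition a few hard exponential steps on all finishing within the window, and the completed durations, together with the leftover time, come out roughly equal on average.

    import random

    # Toy model: hard try-try steps as exponential waiting times whose expected
    # durations far exceed the window, conditioned on all finishing in time.
    random.seed(0)
    window, mean_step, n_steps = 1.0, 5.0, 3
    kept = []

    for _ in range(200_000):
        times = [random.expovariate(1.0 / mean_step) for _ in range(n_steps)]
        if sum(times) <= window:                        # keep only the lucky histories
            kept.append(times + [window - sum(times)])  # include time left after the last step

    means = [sum(col) / len(kept) for col in zip(*kept)]
    print(len(kept), [round(m, 2) for m in means])      # each mean roughly window / (n_steps + 1)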

A year ago I reported on a new study estimating that 1.75 to 3.25 billion years remains for life on Earth. This is a long time, and implies that there can’t be many prior try-try filter steps within the history of life on Earth. Only one or two, and none in the last half billion years. This suggests that the try-try part of the great filter is relatively lumpy, at least for the parts that have taken place or will take place on Earth. Which according to the analysis above is good news.

Of course there can be other kinds of filter steps. For example, perhaps life has to hit on the right sort of genetic code right from the start; if life hits on the wrong code, life using that code will entrench itself too strongly to let the right sort of life take over. These sorts of filter steps need not be roughly evenly distributed in time, and so timing data doesn’t say much about how lumpy or uniform those steps are.

It is nice to have some good news. Though I should also remind you of the bad news that anthropic analysis suggests that selection effects make future filters more likely than you would have otherwise thought.


Great Filter TEDx

This Saturday I’ll speak on the great filter at TEDx Limassol in Cyprus. Though I first wrote about the subject in 1996, this is actually the first time I’ve been invited to speak on it. It only took 19 years. I’ll post links here to slides and video when available.

Added 22Sep: A preliminary version of the video can be found here starting at minute 34.

Added 12Dec: The video is finally up.


Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.


Speculators Foresee No Catastrophe

In the latest American Economic Journal, Pindyck and Wang work out what financial prices and their fluctuations suggest about what speculators believe to be the chances of big economic catastrophes. Bottom line: [simple models that estimate the beliefs of] speculators see very low chances of really big disasters. (Quotes below.)

For example, they find that over fifty years speculators see a 57% chance of a sudden shock destroying at least 15% of capital. If I apply their estimated formula to questions they didn’t ask in the paper, I find that over two centuries, speculators see only a 1.6 in a hundred thousand chance of a shock that destroys over half of capital. And a shock destroying 80% or more of capital has only a one in a hundred trillion chance. Of course these would all be lamentable, and very newsworthy. But hardly existential risks.

The authors do note that others have estimated a thicker tail of bad events:

We obtain … a value for the [power] α of 23.17. … Barro and Jin (2009) … estimated α [empirically] for their sample of contractions. In our notation, their estimates of α were 6.27 for consumption contractions and 6.86 for GDP.

If I plug in the worst of these, I find that over two centuries there’s an 85% chance of a 50% shock, a 0.6% chance of an 80% shock, and a one in a million chance of a shock that destroys 95% or more of capital. Much worse chances, but still nothing like an existential risk.
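
For the curious, here is a sketch of how I read these numbers. The functional form is my reconstruction, not code from the paper: treat shocks as Poisson arrivals, let a single shock destroy a fraction of at least L of capital with probability (1 − L)^α, and back out the arrival rate from the 57%-over-fifty-years figure above.

    import math

    # Assumed model (my reconstruction): shocks arrive as a Poisson process at rate
    # lam per year, and a single shock destroys a fraction >= L of capital with
    # probability (1 - L)**alpha.
    def p_at_least_one(lam, alpha, loss, years):
        per_shock = (1.0 - loss) ** alpha
        return 1.0 - math.exp(-lam * years * per_shock)

    alpha_pw = 23.17   # Pindyck & Wang's estimate, quoted above

    # Back out the arrival rate from "a 57% chance over fifty years of a shock
    # destroying at least 15% of capital".
    lam = -math.log(1 - 0.57) / (50 * (1 - 0.15) ** alpha_pw)   # roughly 0.73 shocks per year

    print(p_at_least_one(lam, alpha_pw, 0.50, 200))   # ~1.6e-5: "1.6 in a hundred thousand"
    print(p_at_least_one(lam, alpha_pw, 0.80, 200))   # ~1e-14: "one in a hundred trillion"

    alpha_bj = 6.27    # Barro & Jin's thicker-tailed consumption estimate
    print(p_at_least_one(lam, alpha_bj, 0.50, 200))   # ~0.85
    print(p_at_least_one(lam, alpha_bj, 0.80, 200))   # ~0.006
    print(p_at_least_one(lam, alpha_bj, 0.95, 200))   # ~1e-6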

Of course speculative markets wouldn’t price in the risk of extinction, since all assets and investors are destroyed in those events. But how likely could extinction really be if there’s almost no chance of an event that destroys 95% of capital?

Added 11a: They use a power law to fit price changes, and so would miss ways in which very big disasters have a different distribution than small disasters. But to the extent that this does accurately model speculator beliefs, if you disagree you should expect to profit by buying options that pay off mainly in the case of huge disasters. So why aren’t you buying?



Foom Debate, Again

My ex-co-blogger Eliezer Yudkowsky last June:

I worry about conversations that go into “But X is like Y, which does Z, so X should do reinterpreted-Z”. Usually, in my experience, that goes into what I call “reference class tennis” or “I’m taking my reference class and going home”. The trouble is that there’s an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don’t work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future – to him, the obvious analogy for the advent of AI was “nature red in tooth and claw”, and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn’t like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That’s his one true analogy and I’ve never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore’s Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, “things that go by Moore’s Law” is his favorite reference class.

I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera. (more)

When we shared this blog, Eliezer and I had a long debate here on his “AI foom” claims. Later, we debated in person once. (See also slides 34,35 of this 3yr-old talk.) I don’t accept the above as characterizing my position well. I’ve written up summaries before, but let me try again, this time trying to more directly address the above critique.

Eliezer basically claims that the ability of an AI to change its own mental architecture is such a potent advantage as to make it likely that a cheap, unnoticed, and initially low-ability AI (a mere “small project machine in a basement”) could, without warning and over a short time (e.g., a weekend), become so powerful as to be able to take over the world.

As this would be a sudden, big, sustainable increase in the overall growth rate of the broad capacity of the world economy, I do find it useful to compare this hypothesized future event to the past events that produced similar outcomes, namely big sudden sustainable increases in broad global capacity growth rates. The last three were the transitions to humans, farming, and industry.

I don’t claim there is some hidden natural law requiring such events to have the same causal factors or structure, or to appear at particular times. But I do think these events suggest a useful if weak data-driven prior on the kinds of factors likely to induce such events, on the rate at which they occur, and on their accompanying inequality in gains. In particular, they tell us that such events are very rare, that over the last three events gains have been spread increasingly equally, and that these three events seem mainly due to better ways to share innovations.

Eliezer sees the essence of his scenario as being a change in the “basic” architecture of the world’s best optimization process, and he sees the main prior examples of this as the origin of natural selection and the arrival of humans. He also sees his scenario as differing enough from the other studied growth scenarios as to make analogies to them of little use.

However, since most global bio or econ growth processes can be thought of as optimization processes, this comes down to his judgement on what counts as a “basic” structure change, and on how different such scenarios are from other scenarios. And in my judgement the right place to get and hone our intuitions about such things is our academic literature on global growth processes.

Economists have a big literature on processes by which large economies grow, increasing our overall capacities to achieve all the things we value. There are of course many other growth literatures, and some of these deal with the growth of capacities, but they usually address far more limited systems. Of these many growth literatures, it is the economic growth literature that comes closest to dealing with the broad capability growth posited in a fast-growing AI scenario.

It is this rich literature that seems to me the right place to find and hone our categories for thinking about growing broadly capable systems. One should review many formal theoretical models, and many less formal applications of such models to particular empirical contexts, collecting “data” points of what is thought to increase or decrease growth of what in what contexts, and collecting useful categories for organizing such data points.

With such useful categories in hand one can then go into a new scenario such as AI foom and have a reasonable basis for saying how similar that new scenario seems to old scenarios, which old scenarios it seems most like if any, and which parts of that new scenario are central vs. peripheral. Yes of course if this new area became mature it could also influence how we think about other scenarios.

But until we actually see substantial AI self-growth, most of the conceptual influence should go the other way. Relying instead primarily on newly made up categories and similarity maps between them, concepts and maps which have not been vetted or honed in dealing with real problems, seems to me a mistake. Yes of course a new problem may require one to introduce some new concepts to describe it, but that is hardly the same as largely ignoring old concepts.

So, I fully grant that the ability of AIs to intentionally change mind designs would be a new factor in the world, and it could make a difference for AI ability to self-improve. But while the history of growth over the last few million years has seen many dozens of factors come and go, or increase and decrease in importance, it has only seen three events in which overall growth rates greatly increased suddenly and sustainably. So the mere addition of one more factor seems unlikely to generate foom, unless our relevant categories for growth causing factors suggest that this factor is unusually likely to have such an effect.

This is the sense in which I long ago warned against over-reliance on “unvetted” abstractions. I wasn’t at all trying to claim there is one true analogy and all others are false. Instead, I argue for preferring to rely on abstractions, including categories and similarity maps, that have been found useful by a substantial intellectual community working on related problems. On the subject of an AI growth foom, most of those abstractions should come from the field of economic growth.


Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work, we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. That burden might be met if S only became possible through some bizarre fluke, and a strategy might still improve our chances even while leaving us almost certain to fail. But common features, such as mere awareness of the Great Filter, would not suffice to avoid future filters.


Silly Mayans

In my morning paper, today’s possible apocalypse was mentioned in five comics, but nowhere else. I’ve heard many mention the issue over the last few weeks, but mostly mocking it; none seem remotely concerned. Why so many mentions of something so few believe? To mock it of course – to enjoy feeling superior to fools who take such things seriously.

So are we ridiculing only those who fear apocalypse based on ancient predictions, or all who fear apocalypse? Alas, as I’ve discussed before, it seems we ridicule all of them:

On average, survivalists tend to display undesirable characteristics. They tend to have extreme and unrealistic opinions, that disaster soon has an unrealistically high probability. They also show disloyalty and a low opinion of their wider society, by suggesting it is due for a big disaster soon. They show disloyalty to larger social units, by focusing directly on saving their own friends and family, rather than focusing on saving those larger social units. And they tend to be cynics, with all that implies. (more)

Over the years I’ve met many folks who say they are concerned about existential risk, but I have yet to see any of them do anything concrete and physical about it. They talk, write, meet, and maybe write academic papers, but seem quite averse to putting one brick on top of another, or packing away an extra bag of rice. Why?

Grand disaster is unlikely, has a large scope, and is probably far away in time, all of which brings on a very far view, wherein abstract talk seems more apt than concrete action. Also, since far views are more moral and idealistic, people seem especially offended by folks preparing selfishly for disaster, and especially keen to avoid that appearance, even at the expense of not preparing.

This seems related to the widespread rejection of cryonics in a world that vastly overspends on end-of-life medicine; more folks pay a similar amount to launch their ashes into space than to try to extend life via cryonics. The idea of trying to avoid the disaster of death by returning in a distant future also invokes a far view, wherein we more strongly condemn selfish acts and leaving-the-group betrayal, are extra confident in theories saying it won’t work, and feel only weak motivations to improve things.
