Tag Archives: Disaster

Adam Ford & I on Great Filter

Adam Ford interviewed me again, this time on the Great Filter:

We have three main sources of info on existential risks (xrisks):

  1. Inside View Analysis – where we try to use our best theories to reason about particular causal processes.
  2. Earth Track Records – the empirical distribution of related events observed so far on Earth.
  3. The Great Filter – inferences from the fact that the universe looks dead everywhere but here.

These sources are roughly equally informative. #2 suggests xrisks are low, even if high enough to deserve much effort to prevent them. I’d say that most variations on #1 suggest the same. However, #3 suggests xrisks could be very high, which should encourage more xrisk-mitigation efforts.

Ironically most xrisk efforts (of which I’m aware) focus on AI-risk, which can’t explain the great filter. Most analysis efforts also focus on #1, less on #2, and almost none on #3.

Speculators Foresee No Catastrophe

In the latest American Economic Journal, Pindyck and Wang work out what financial prices and their fluctuations suggest about what speculators believe to be the chances of big economic catastrophes. Bottom line: [simple models that estimate the beliefs of] speculators see very low chances of really big disasters. (Quotes below.)

For example, they find that over fifty years speculators see a 57% chance of a sudden shock destroying at least 15% of capital. If I apply their estimated formula to questions they didn’t ask in the paper, I find that over two centuries, speculators see only a 1.6 in a hundred thousand chance of a shock that destroys over half of capital. And a shock destroying 80% or more of capital has only a one in a hundred trillion chance. Of course these would all be lamentable, and very newsworthy. But hardly existential risks.

The authors do note that others have estimated a thicker tail of bad events:

We obtain … a value for the [power] α of 23.17. … Barro and Jin (2009) … estimated α [empirically] for their sample of contractions. In our notation, their estimates of α were 6.27 for consumption contractions and 6.86 for GDP.

If I plug in the worst of these, I find that over two centuries there’s an 85% chance of a 50% shock, a 0.6% chance of an 80% shock, and a one in a million chance of a shock that destroys 95% or more of capital. Much worse chances, but still nothing like an existential risk.
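
For readers who want to check these extrapolations, here is a minimal sketch of the kind of calculation involved, assuming a model in the paper’s style: shocks arrive as a Poisson process at rate λ per year, and each shock leaves a fraction z of capital standing, with z having density αz^(α−1) on (0,1), so a given shock destroys at least a fraction L of capital with chance (1−L)^α. The arrival rate used below is backed out from the quoted 50-year, 15% figure rather than copied from the paper, so treat it as illustrative.

```python
import math

def prob_big_shock(alpha, lam, years, loss):
    """Chance of at least one shock destroying a fraction >= `loss` of
    capital within `years`, for Poisson arrivals at rate `lam` per year,
    where a single shock is that large with probability (1 - loss)**alpha."""
    return 1.0 - math.exp(-lam * years * (1.0 - loss) ** alpha)

# Back out an illustrative arrival rate from the quoted figure:
# a 57% chance over fifty years of a shock destroying at least 15% of capital.
alpha_paper = 23.17
lam = -math.log(1 - 0.57) / (50 * (1 - 0.15) ** alpha_paper)   # roughly 0.73 per year

for alpha in (alpha_paper, 6.27):          # paper's estimate, then Barro-Jin
    for loss in (0.5, 0.8, 0.95):
        p = prob_big_shock(alpha, lam, 200, loss)
        print(f"alpha = {alpha:5.2f}, loss >= {loss:.0%}: P over two centuries = {p:.1e}")
```

Run with the paper’s α = 23.17 and then with the Barro-Jin α = 6.27, this reproduces, to within rounding, the figures quoted above.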

Of course speculative markets wouldn’t price in the risk of extinction, since all assets and investors are destroyed in those events. But how likely could extinction really be if there’s almost no chance of an event that destroys 95% of capital?

Added 11a: They use a power law to fit price changes, and so would miss ways in which very big disasters have a different distribution than small disasters. But to the extent that this does accurately model speculator beliefs, if you disagree you should expect to profit by buying options that pay off mainly in the case of huge disasters. So why aren’t you buying?

Foom Debate, Again

My ex-co-blogger Eliezer Yudkowsky last June:

I worry about conversations that go into “But X is like Y, which does Z, so X should do reinterpreted-Z”. Usually, in my experience, that goes into what I call “reference class tennis” or “I’m taking my reference class and going home”. The trouble is that there’s an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don’t work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future – to him, the obvious analogy for the advent of AI was “nature red in tooth and claw”, and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn’t like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That’s his one true analogy and I’ve never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore’s Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, “things that go by Moore’s Law” is his favorite reference class.

I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera. (more)

When we shared this blog, Eliezer and I had a long debate here on his “AI foom” claims. Later, we debated in person once. (See also slides 34,35 of this 3yr-old talk.) I don’t accept the above as characterizing my position well. I’ve written up summaries before, but let me try again, this time trying to more directly address the above critique.

Eliezer basically claims that the ability of an AI to change its own mental architecture is such a potent advantage as to make it likely that a cheap unnoticed and initially low ability AI (a mere “small project machine in a basement”) could without warning over a short time (e.g., a weekend) become so powerful as to be able to take over the world.

As this would be a sudden big sustainable increase in the overall growth rate in the broad capacity of the world economy, I do find it useful to compare this hypothesized future event to the other past events that produced similar outcomes, namely a big sudden sustainable global broad capacity growth rate increase. The last three were the transitions to humans, farming, and industry.

I don’t claim there is some hidden natural law requiring such events to have the same causal factors or structure, or to appear at particular times. But I do think these events suggest a useful if weak data-driven prior on the kinds of factors likely to induce such events, on the rate at which they occur, and on their accompanying inequality in gains. In particular, they tell us that such events are very rare, that over the last three events gains have been spread increasingly equally, and that these three events seem mainly due to better ways to share innovations.

Eliezer sees the essence of his scenario as being a change in the “basic” architecture of the world’s best optimization process, and he sees the main prior examples of this as the origin of natural selection and the arrival of humans. He also sees his scenario as differing enough from the other studied growth scenarios as to make analogies to them of little use.

However, since most global bio or econ growth processes can be thought of as optimization processes, this comes down to his judgement on what counts as a “basic” structure change, and on how different such scenarios are from other scenarios. And in my judgement the right place to get and hone our intuitions about such things is our academic literature on global growth processes.

Economists have a big literature on processes by which large economies grow, increasing our overall capacities to achieve all the things we value. There are of course many other growth literatures, and some of these deal with growth in capacities, but these usually deal with far more limited systems. Of these many growth literatures it is the economic growth literature that is closest to dealing with the broad capability growth posited in a fast-growing AI scenario.

It is this rich literature that seems to me the right place to find and hone our categories for thinking about growing broadly capable systems. One should review many formal theoretical models, and many less formal applications of such models to particular empirical contexts, collecting “data” points of what is thought to increase or decrease growth of what in what contexts, and collecting useful categories for organizing such data points.

With such useful categories in hand one can then go into a new scenario such as AI foom and have a reasonable basis for saying how similar that new scenario seems to old scenarios, which old scenarios it seems most like if any, and which parts of that new scenario are central vs. peripheral. Yes of course if this new area became mature it could also influence how we think about other scenarios.

But until we actually see substantial AI self-growth, most of the conceptual influence should go the other way. Relying instead primarily on newly made up categories and similarity maps between them, concepts and maps which have not been vetted or honed in dealing with real problems, seems to me a mistake. Yes of course a new problem may require one to introduce some new concepts to describe it, but that is hardly the same as largely ignoring old concepts.

So, I fully grant that the ability of AIs to intentionally change mind designs would be a new factor in the world, and it could make a difference for AI ability to self-improve. But while the history of growth over the last few million years has seen many dozens of factors come and go, or increase and decrease in importance, it has only seen three events in which overall growth rates greatly increased suddenly and sustainably. So the mere addition of one more factor seems unlikely to generate foom, unless our relevant categories for growth causing factors suggest that this factor is unusually likely to have such an effect.

This is the sense in which I long ago warned against over-reliance on “unvetted” abstractions. I wasn’t at all trying to claim there is one true analogy and all others are false. Instead, I argue for preferring to rely on abstractions, including categories and similarity maps, that have been found useful by a substantial intellectual community working on related problems. On the subject of an AI growth foom, most of those abstractions should come from the field of economic growth.

Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if S only became possible through some bizarre fluke, and a strategy might still improve our chances even while leaving us almost certain to fail. But common features, such as awareness of the Great Filter, would not suffice to avoid future filters.
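
A toy calculation, with made-up numbers, makes the force of this point concrete: if almost no civilization at our stage ever colonizes, then any strategy adopted by a non-trivial fraction of such civilizations must almost never work.

```python
# Toy illustration of the fatalism argument; all numbers are made up.
# Suppose n_civs civilizations reach our stage and at most n_wins of them
# ever colonize the stars.  If a fraction f of those civilizations adopt
# strategy S (or something as good), then the success rate among adopters
# can be at most n_wins / (f * n_civs).

def max_success_rate(n_civs, n_wins, f):
    """Upper bound on P(colonize | a civilization adopts the strategy)."""
    return min(1.0, n_wins / (f * n_civs))

n_civs, n_wins = 1e12, 1.0      # assumed: a trillion tries, about one winner
for f in (0.5, 0.01, 1e-9):
    print(f"fraction adopting = {f:g}: success rate <= "
          f"{max_success_rate(n_civs, n_wins, f):.0e}")
# Only if adopting S were itself a near one-in-a-trillion fluke would the
# bound stop being crushing -- hence the need to be "highly unusual".
```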

Silly Mayans

In my morning paper, today’s possible apocalypse was mentioned in five comics, but nowhere else. I’ve heard many mention the issue over the last few weeks, but mostly to mock it; none seem remotely concerned. Why so many mentions of something so few believe? To mock it of course – to enjoy feeling superior to fools who take such things seriously.

So are we ridiculing only those who fear apocalypse based on ancient predictions, or all who fear apocalypse? Alas, as I’ve discussed before, it seems we ridicule all of them:

On average, survivalists tend to display undesirable characteristics. They tend to have extreme and unrealistic opinions, that disaster soon has an unrealistically high probability. They also show disloyalty and a low opinion of their wider society, by suggesting it is due for a big disaster soon. They show disloyalty to larger social units, by focusing directly on saving their own friends and family, rather than focusing on saving those larger social units. And they tend to be cynics, with all that implies. (more)

Over the years I’ve met many folks who say they are concerned about existential risk, but I have yet to see any of them do anything concrete and physical about it. They talk, write, meet, and maybe write academic papers, but seem quite averse to putting one brick on top of another, or packing away an extra bag of rice. Why?

Grand disaster is unlikely, happens on a large scope, and probably lies far away in time, all of which brings on a very far view, wherein abstract talk seems more apt than concrete action. Also, since far views are more moral and idealistic, people seem especially offended by folks preparing selfishly for disaster, and especially keen to avoid that appearance, even at the expense of not preparing.

This seems related to the widespread rejection of cryonics in a world that vastly overspends on end-of-life medicine; more folks pay a similar amount to launch their ashes into space than try to extend life via cryonics. The idea of trying to avoid the disaster of death by returning in a distant future also invokes a far view, wherein we more condemn selfish acts and leaving-the-group betrayal, are extra confident in theories saying it won’t work, and feel only weak motivations to improve things.

Today Is Filter Day

By tracking daily news fluctuations, we can have fun, join in common conversations, and signal our abilities to track events and to quickly compose clever commentary. But for the purpose of forming accurate expectations about the world, we attend too much to such news, and neglect key constant features of our world and knowledge.

So today, let us remember one key somber and neglected fact: the universe looks very dead. Yes, there might be pockets of life hiding in small corners, but for billions of years billions of galaxies full of vast resources have been left almost entirely untouched and unused. While we seem only centuries away from making great visible use of our solar system, and a million years from doing the same to our galaxy, any life out there seems unable, uninterested, or afraid to do the same. What dark fact do they know that we do not?

Yes, it is possible that the extremely difficult step was life’s origin, or some other early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and visibly expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)

Assume that since none of the ~10^20 planets we see has yet given rise to a visible expanding civilization, each planet has a less than one in 10^20 chance of doing so. If so, what fraction of this 10^20+ filter do you estimate still lies ahead of us? If that fraction were only 1/365, then we face at least a 12% chance of disaster. Which should be enough to scare you.
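
Here is a minimal sketch of this arithmetic. It assumes the total filter is a factor of at least 10^20, and that if a fraction q of the filter (measured on a log scale) still lies ahead, our chance of getting through is roughly (10^-20)^q; the same formula covers the 1/12 variant mentioned in the next paragraph.

```python
TOTAL_FILTER = 1e20   # assumed lower bound on the total filter, from the planet count

def survival_chance(q):
    """Chance of passing what remains of the filter, if a fraction q of the
    total filter (measured on a log scale) still lies ahead of us."""
    return TOTAL_FILTER ** (-q)

for q in (1 / 365, 1 / 12):
    p = survival_chance(q)
    print(f"fraction ahead = {q:.4f}: survive {p:.1%}, disaster {1 - p:.1%}")
# q = 1/365 -> survive ~88%, i.e. at least a ~12% chance of disaster
# q = 1/12  -> survive ~2% (less still if the total filter exceeds 1e20)
```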

To make sure we take the time to periodically remember this key somber fact, I propose that today, the day before winter solstice, the darkest day of the year, be Filter Day. I pick the day before to mock the wishful optimistic estimate that only 1/365 of the total filter remains ahead of us. Perhaps if you estimate that 1/12 of the filter still lies ahead, a filter we have less than a 2% chance of surviving, you should commemorate Filter Day one month before winter solstice. But then we’d all commemorate on different days, and so may not remember to commemorate at all.

So, to keep it simple, today is Filter Day. Take a minute to look up at the dark night sky, see the vast ancient and unbroken deadlands, and be very afraid.

What other activities make sense on Filter Day? Visit an ancient ruin? A volcano? A nuclear test site? The CDC? A telescope?

Miller’s Singularity Rising

James Miller, who posted once here at OB, has a new book, Singularity Rising, out Oct 2. I’ve read an advance copy. Here are my various reactions to the book.

Miller discusses several possible paths to super-intelligence, but never says which paths he thinks likely, nor when any might happen. However, he is confident that one will happen eventually, he calls Kurzweil’s 2045 forecast “robust”, and he offers readers personal advice as if something will happen in their lifetimes.

I get a lot of coverage in chapter 13, which discusses whole brain emulations. (And Katja is mentioned on pp.213-214.) While Miller focuses mostly on what emulations imply for humans, he does note that many ems could die from poverty or obsolescence. He makes no overall judgement on the scenario, however, other than to once use the word “dystopian.”

While Miller’s discussion of emulations is entirely of the scenario of a large economy containing many emulations, his discussion of non-emulation AI is entirely of the scenario of a single “ultra AI”. He never considers a single ultra emulation, nor an economy of many AIs. Nor does he explain these choices.

On ultra AIs, Miller considers only an “intelligence explosion” scenario where a human level AI turns itself into an ultra AI “in a period of weeks, days, or even hours.” His arguments for this extremely short timescale are:

  1. Self-reproducing nanotech factories might double every hour,
  2. On a scale of all possible minds, a chimp isn’t far from von Neumann in intelligence, and
  3. Evolution has trouble coordinating changes, but an AI could use brain materials and structures that evolution couldn’t.

I’ve said before that I don’t see how these imply a timescale of weeks for one human level AI to make itself more powerful than the entire rest of the world put together. Miller explains my skepticism:

As Hanson told me, the implausibility of some James Bond villains illustrates a reason to be skeptical of an intelligence explosion. A few of these villains had their own private islands on which they created new powerful weapons. But weapons development is a time and resource intensive task, making it extremely unlikely that the villain’s small team of followers could out-innovate all of the weapons developers in the rest of the world by producing spectacularly destructive instruments that no other military force possessed. Thinking that a few henchmen, even if led by an evil genius, would do a better job at weapons development than a major defense contractor is as silly as believing that the professor on Gilligan’s Island really could have created his own coconut based technology. …

Think of an innovation race between a single AI and the entirety of mankind. For an intelligence explosion to occur, the AI has to not only win the race, but finish before humanity completes its next stride. A sufficiently smart AI could certainly do this, but an AI only a bit brighter than von Neumann would not have the slightest chance of achieving this margin of victory. (pp.215-216)

As you can tell from this quotation, Miller’s book often reads like the economics textbook he wrote. He is usually content to be a tutor, explaining common positions and intuitions behind common arguments. He does, however, explain some of his personal contributions to this field, such as his argument that preventing the destruction of the world can be a public good undersupplied by private firms, and that development might slow down just before an anticipated explosion, if investors think non-investors will gain or lose just as much as investors from the change.

I’m not sure this book has much of a chance to get very popular. The competition is fierce, Miller isn’t already famous, and while his writing quality is good, it isn’t at the blockbuster popular book level. But I wish his book all the success it can muster.

Inequality /=> Revolt

Famous historical revolutions were not consistently caused by high or rising income inequality:

[French income] inequality during the eighteenth century was large but decreased during the revolutionary period (1790-1815). … When industrialisation began about 1830, inequality increased until sometime in the 1860s. (more)

In 1904, on the eve of military defeat and the 1905 Revolution, Russian income inequality was middling by the standards of that era, and less severe than inequality has become today in such countries as China, the United States, and Russia itself. (more)

In 1774 the American colonies had average incomes exceeding those of the Mother Country, even when slave households are included in the aggregate. … American colonists had much more equal incomes than did households in England and Wales around 1774. Indeed, New England and the Middle Colonies appear to have been more egalitarian than anywhere else in the measureable world. Income inequality rose dramatically between 1774 and 1860, especially in the South. (more)

So why do most people so confidently believe that revolutions were caused by high or rising inequality? I’d guess it’s because it feels like a nice way to affirm your support for the standard forager value of more equality.

Added 24Sept: OK, I see that the French data isn’t so relevant to my point.

Rah Power Laws

The latest Science has an article by Michael Stumpf and Mason Porter, complaining that people aren’t careful enough about fitting power laws. It mentions that a sum of heavy-tail-distributed things generically has a power law tail in the sum limit. And it claims:

Although power laws have been reported in areas ranging from finance and molecular biology to geophysics and the Internet, the data are typically insufficient and the mechanistic insights are almost always too limited for the identification of power-law behavior to be scientifically useful … Examination (15) of the statistical support for numerous reported power laws has revealed that the overwhelming majority of them failed statistical testing (sometimes rather epically).

Yet in reference 15, where Aaron Clauset, Cosma Rohilla Shalizi, and M. E. J. Newman looked carefully at 25 data sets that others had claimed fit power laws, only for 3 did they find less than moderate support for a power law fit, and in none of those cases was any other specific model significantly favored over a power law! If this is the best criticism they’ve got, this seems to me resounding support for power laws.

Here are the phenomena where the power is less than one, meaning the few biggest items get most of the weight:

intensity of wars 0.7(2); solar flare intensity 0.79(2); religious followers 0.8(1); count of word use 0.95(2)

The number is the power and the digit in parens is the uncertainty of the last digit shown. Here are the phenomena where the power is greater than one, meaning most weight goes to many small items:

telephone calls received 1.09(1); bird species sightings 1.1(2); Internet degree 1.12(9); blackouts 1.3(3); population of cities 1.37(8); terrorist attack severity 1.4(2); species per genus 1.4(2); freq. of surnames 1.5(2); protein interaction degree 2.1(3); citations to papers 2.16(6); email address books size 2.5(6); sales of books 2.7(3); papers authored 3.3(1)

For quake intensity they give power 0.64(4), but say a better fit is a different power (unspecified) and a cutoff. For net worth (of the US richest 400) they give power 1.3(1), but say a power-law doesn’t fit, though no other model tried fits better.
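
A rough simulation sketch of why the tail power matters so much (sample size and seed here are arbitrary, and no cutoff is assumed): for a pure power law tail with power below one, the distribution’s mean is infinite and the single largest item in a sample tends to carry a large share of the total, while for powers well above one the total is spread across many small items.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(power, n=100_000):
    """Share of the total carried by the single largest of n draws from a
    Pareto distribution with tail power `power` (minimum value 1)."""
    draws = rng.pareto(power, n) + 1.0      # Pareto with P(X > x) = x**-power
    return draws.max() / draws.sum()

for power in (0.7, 0.95, 1.4, 2.2):         # spans the powers listed above
    print(f"power = {power}: largest item's share of total ~ {top_share(power):.1%}")
```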

On catastrophic risk, I wrote in ’07:

We should worry more about disasters with lower powers, such as forest fires (area power of 0.66), hurricanes (dollar loss power of 0.98, death power of 0.58), earthquakes (energy power of 1, dollar loss and death powers of 0.41), wars (death power of 0.41), and plagues (death power of 0.26 for Whooping Cough and Measles).

So the above study suggests we worry most about wars, quakes, religions, and solar flares. I hadn’t been worried about solar flares so much before; now I am. On city inequality, I think I trust that other paper more.

Added 4p: Cosma Shalizi says:

In ten of the twelve cases we looked at, the only way to save the idea of a power-law at all is to include this exponential cut-off. But that exponentially-shrinking factor is precisely what squelches the WTF, X IS ELEVENTY TIMES LARGER THAN EVER! THE BIG ONE IS IN OUR BASE KILLING OUR DOODZ!!!!1!! mega-events.

I’m happy to admit that worst-case fears are reduced by the fact that <1 power law data tend to be better fit by a tail cutoff. Good news! I don’t want to believe in disaster, but I do think we must consider that possibility.

Ignoring Small Chances

On September 9, 1713, so the story goes, Nicholas Bernoulli proposed the following problem in the theory of games of chance, after 1768 known as the St Petersburg paradox …:

Peter tosses a coin and continues to do so until it should land heads when it comes to the ground. He agrees to give Paul one ducat if he gets heads on the very first throw, two ducats if he gets it on the second, four if on the third, eight if on the fourth, and so on, so that with each additional throw the number of ducats he must pay is doubled.

Nicholas Bernoulli … suggested that more than five tosses of heads are [seen as] morally impossible [and so ignored]. This proposition is experimentally tested through the elicitation of subjects’ willingness-to-pay for various truncated versions of the Petersburg gamble that differ in the maximum payoff. … All gambles that involved probability levels smaller than 1/16 and maximum payoffs greater than 16 Euro elicited the same distribution of valuations. … The payoffs were as described … but in Euros rather than in ducats. … The more senior students seemed to have a higher willingness-to-pay. … Offers increase significantly with income. (more)

This isn’t plausibly explained by risk aversion, nor by a general neglect of possibilities with a <5% chance. I suspect this is more about analysis complexity, i.e., about limiting the number of possibilities we’ll consider at any one time. I also suspect this bodes ill for existential risk mitigation.
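
To make the anomaly concrete, here is a small sketch of the expected value of truncated Petersburg gambles, assuming one natural truncation rule: a run that reaches the cap simply pays the maximum payoff (the experiment’s exact rule may differ).

```python
from fractions import Fraction

def truncated_petersburg_ev(max_payoff):
    """Expected value of the Petersburg gamble when payoffs are capped at
    max_payoff = 2**(k-1): heads first on toss n pays 2**(n-1), and any
    run that reaches the cap pays the cap (assumed truncation rule)."""
    ev, n, payoff = Fraction(0), 1, 1
    while payoff < max_payoff:
        ev += Fraction(1, 2**n) * payoff          # heads first on toss n
        n, payoff = n + 1, payoff * 2
    ev += Fraction(1, 2**(n - 1)) * max_payoff    # all longer runs pay the cap
    return ev

for cap in (4, 16, 64, 1024, 2**20):
    print(f"max payoff {cap:>7}: expected value = {float(truncated_petersburg_ev(cap)):.1f}")
```

Expected value keeps rising with the maximum payoff, so flat valuations above a 16 Euro cap mean subjects are effectively ignoring every branch with probability below 1/16.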
