Tag Archives: Disaster

Kaczynski’s Collapse Theory

Many people argue that we should beware of foreigners, and people from other ethnicities. Beware of visiting them, trading with them, talking to them, or allowing them to move here. The fact that so many people are willing to argue for such conclusions is some evidence in favor of them. But the fact that the arguments offered are so diverse, and so often contradict one another, takes away somewhat from the strength of this evidence. This pattern looks like people tend to have a preconceived conclusion for which they opportunistically embrace any random arguments they can find.

Similarly, many argue that we should be wary of future competition, especially if that might lead to concentrations of power. I recently posted on my undergrad law & econ students’ largely incoherent fears of one group taking over the entire solar system, and how Frederick Engels expressed related fears back in 1844. And I’ve argued on this blog with my ex-co-blogger regarding his concerns that if future AI results from competing teams, one team might explode to suddenly take over the world. In this post I’ll describe Ted “Unabomber” Kaczynski’s rather different theory on why we should fear competition leading to concentration, from his recent book Anti Tech Revolution.

Kaczynski claims that the Fermi paradox, i.e., the fact that the universe looks dead everywhere, is explained by the fact that technological civilizations very reliably destroy themselves. When this destruction happens naturally, it is so thorough that no humans could survive. Which is why his huge priority is to find a way to collapse civilization sooner, so that at least some humans survive. Even a huge nuclear war is preferable, as at least some people survive that.

Why must everything collapse? Because, he says, natural-selection-like competition only works when competing entities have scales of transport and talk that are much less than the scale of the entire system within which they compete. That is, things can work fine when bacteria that each move and talk across only meters compete across an entire planet. The failure of one bacterium doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, then there are always only a few such systems that matter, and breakdowns often have global scopes.

Kaczynski dismisses the possibility that world-spanning competitors might anticipate the possibility of large correlated disasters, and work to reduce their frequency and mitigate their harms. He says that competitors can’t afford to pay any cost to prepare for infrequent problems, as such costs hurt them in the short run. This seems crazy to me, as most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to completely destroy the whole world. The world has had global scale correlation for centuries, with the world economy growing enormously over that time. And yet we’ve never even seen a factor of two decline, while at least thirty factors of two would be required for a total collapse. And while it should be easy to test Kaczynski’s claim in small complex systems of competitors, I know of no supporting tests.

Yet all dozen of the reviews I read of Kaczynski’s book found his conclusion here to be obviously correct. Which seems to me evidence that a great many people find the worry about future competitors to be so compelling that they endorse most any vaguely plausible supporting argument. Which I see as weak evidence against that worry.

Yes of course correlated disasters are a concern, even when efforts are made to prepare against them. But it’s just not remotely obvious that competition makes them worse, or that all civilizations are reliably and completely destroyed by big disasters, so much so that we should prefer to start a big nuclear war now that destroys civilization but leaves a few people alive. Surely if we believed his theory a better solution would be to break the world into a dozen mostly isolated regions.

Kaczynski does deserve credit for avoiding common wishful thinking in some of his other discussion. For example, he says that we can’t much control the trajectory of history, both because it is very hard to coordinate on the largest scales, and because it is hard to estimate the long term consequences of many choices. He sees how hard it is for social movements to actually achieve anything substantial. He notes that futurists who expect to achieve immortality and then live for a thousand years too easily presume that a fast changing competitive world will still have need for them. And while I didn’t see him actually say it, I expect he’s the sort of person who’d make the reasonable argument that individual humans are just happier in a more forager-like world.

Kaczynski isn’t stupid, and he’s more clear-headed than most futurists I read. Too bad his low mood leans him so strongly to embrace a poorly-argued inevitable collapse story.

Some book quotes on his key claim:


Dragon Debris?

Apparently the causal path from simple dead matter to an expanding visible civilization is very unlikely. Almost everything that starts along this path is blocked by a great filter, which might be one extremely hard step, or many merely very hard steps. The most likely location of this great filter is that the origin of life is very very hard. Which is good news, because otherwise we’d have to worry a lot about our future, via what fraction of the overall huge filter still lies ahead of us. And if we ever find evidence of life in space that isn’t close to the causal path that led to us, that will be big bad news, and we’ll need to worry a lot more.

One of the more interesting future filter scenarios is a high difficulty of traveling between the stars. As we can easily see across the universe, we know that photons have few problems traveling very long distances. And since stars drift about at great speeds, we know that stars can also travel freely suffering little harm. But we still can’t be sure of the ease of travel for humans, or for the sort of things that our descendants might try to send between the stars. We have collected a few grains of interstellar dust, but still know little about them, and so don’t know how easy was their travel. We do know that most of the universe is made of dark matter and dark energy that we understand quite poorly. So perhaps “Here Be Dragons” lie in wait out there for our scale of interstellar travelers.

Many stars, like ours, are surrounded by a vast cloud of small icy objects. Every once in a while one of these objects falls into a rare orbit where it travels close to its star, and then it becomes a comet with a tail. Even more rarely, one should fall into an orbit that throws it out away from its star (almost always without doing much else to it). Such an object would then travel at the typical star speed between stars, and after billions of years it might perhaps pass near one other star; the chance of two such encounters is very low. And if the space between stars is as mild as it seems, it should arrive looking pretty much as it left.

Astronomers have been waiting for a while to see such an interstellar visitor, and were puzzled to have not yet seen one. They expected it to look like a comet, except traveling a lot faster than most comets do. Within roughly a year of a new instrument coming online that could see such things better, we finally saw such a visitor in the last few months. It looked like what we expect in some ways. It is traveling at roughly the speed we’d expect, its size is unremarkable, and its color is roughly what we expect from ancient small space objects. But it is suspiciously weird in several other apparently-unrelated ways.

First, its orbit is weird. Its direction of origin is 6 degrees from the sun’s motion vector; only one in 365 random directions would be closer. And among the travel paths where we could have seen this object, only one in 100 such paths would have traveled closer to the sun than did this one (source: Turner). But one must apparently invoke very strange and unlikely hypotheses to believe these parameters were anything but random. For now, I won’t go there.
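As a sanity check on the one-in-365 figure: the fraction of uniformly random directions lying within a given angle of a fixed vector is (1 − cos angle)/2. A few lines of illustrative Python confirm the number:

```python
import math

def fraction_within(angle_deg):
    """Fraction of uniformly random directions on a sphere that lie
    within angle_deg of a fixed vector: (1 - cos(angle)) / 2."""
    return (1 - math.cos(math.radians(angle_deg))) / 2

print(round(1 / fraction_within(6)))  # about one random direction in 365
```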

Second, the object itself is weird. It does not have a comet tail, and so has apparently lost most of its volatiles like water. If this is typical, it explains why we haven’t seen objects like this before. The object seems to be very elongated, much more than any other natural object we’ve ever seen in our solar system. And it is rotating very fast, so fast that it would fly apart if it were made out of the typical pile of lightly attached rubble. So at some point it experienced an event so dramatic as to melt away its volatiles, melt it into a solid object, stretch it to an extreme, and set it spinning at an extreme rate. After which it drifted for long enough to acquire the usual color of ancient space objects.

This raises the suspicion that it perhaps encountered a dangerous “dragon” between the stars. Making it “dragon debris.” If the timing of this event were random, we should see roughly one such visitor a year in the future, and with new better instruments coming online in a few years we should see them even faster. So within a decade we should learn if this first visitor is very unusual, or if we should worry a lot more about travel dangers between the stars.


Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two of the posts he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than do organizations today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure, it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is in most need. But you might trust it more to give you financial returns on your financial investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

[Total output] over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, which Christiano supports to some degree, is that future AIs will be smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.


An Outside View of AI Control

I’ve written much on my skepticism of local AI foom (= intelligence explosion). Recently I said that foom offers the main justification I understand for AI risk efforts now, as well as being the main choice of my Twitter followers in a survey. It was the main argument offered by Eliezer Yudkowsky in our debates here at this blog, by Nick Bostrom in his book Superintelligence, and by Max Tegmark in his recent book Life 3.0 (though he denied so in his reply here).

However, some privately complained to me that I haven’t addressed those with non-foom-based AI concerns. So in this post I’ll consider AI control in the context of a prototypical non-em non-foom mostly-peaceful outside-view AI scenario. In a future post, I’ll try to connect this to specific posts by others on AI risk.

An AI scenario is where software does most all jobs; humans may work for fun, but they add little value. In a non-em scenario, ems are never feasible. While foom scenarios are driven by AI innovations that are very lumpy in time and organization, in non-foom scenarios innovation lumpiness is distributed more like it is in our world. In a mostly-peaceful scenario, peaceful technologies of production matter much more than do technologies of war and theft. And as an outside view guesses that future events are like similar past events, I’ll relate future AI control problems to similar past problems.


Prepare for Nuclear Winter

If a 1km asteroid were to hit the Earth, the dust it kicked up would block most sunlight over most of the world for 3 to 10 years. There’s only a one in a million chance of that happening per year, however. Whew. But there’s a ten times bigger chance that a super volcano, such as the one hiding under Yellowstone, might explode, for a similar result. And I’d put the chance of a full scale nuclear war at ten to one hundred times larger than that: one in ten thousand to one in one thousand per year. Over a century, that becomes a one to ten percent chance. Not whew; grimace instead.

There is a substantial chance that a full scale nuclear war would produce a nuclear winter, with a similar effect: sunlight is blocked for 3-10 years or more. Yes, there are good criticisms of the more extreme forecasts, but there’s still a big chance the sun gets blocked in a full scale nuclear war, and there’s even a substantial chance of the same result in a mere regional war, where only 100 nukes explode (the world now has 15,000 nukes).

I’ll summarize this as saying we face roughly a one in 10,000 chance per year of most all sunlight on Earth being blocked for 5 to 10 years. Which accumulates to become a 1% chance per century. This is about as big as your one in 9000 personal chance each year of dying in a car accident, or your one in 7500 chance of dying from poisoning. We treat both of these other risks as nontrivial, and put substantial efforts into reducing and mitigating such risks, as we also do for many much smaller risks, such as dying from guns, fire, drowning, or plane crashes. So this risk of losing sunlight for 5-10 years seems well worth reducing or mitigating, if possible.
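The per-year-to-per-century accumulation is easy to check; a minimal sketch of the arithmetic, using the rough one in 10,000 annual figure:

```python
annual_p = 1 / 10_000  # rough yearly chance of losing most sunlight
century_p = 1 - (1 - annual_p) ** 100  # chance of at least one event in 100 years
print(century_p)  # just under 0.01, i.e. about a 1% chance per century
```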

Even in the best case, the world has only enough stored food to feed everyone for about a year. If the population then gradually declined via cannibalism of the living, falling in half every month, we’d all be dead in a few years. To save your family by storing ten years of food, you’d not only have to spend a huge sum now, you’d have to stay very well hidden or defended. Just not gonna happen.
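The “few years” figure follows from repeated halving. A small sketch, using a rough 7.5 billion starting population (an assumed round number, not a figure from the post):

```python
population = 7.5e9  # rough world population; an assumed round number
months = 0
while population >= 1:
    population /= 2  # the post's assumption: population halves each month
    months += 1
print(months, "months =", round(months / 12, 1), "years")
```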

Yeah, probably a few people live on, and so humanity doesn’t go extinct. But the only realistic chance most of us have of surviving in this scenario is to use our vast industrial and scientific abilities to make food. We actually know of many plausible ways to make more than enough food to feed everyone for ten years, even with no sunlight. And even if big chunks of the world economy are in shambles. But for that to work, we must preserve enough social order to make use of at least the core of key social institutions.

Many people presume that as soon as everyone hears about a big problem like this, all social institutions immediately collapse and everyone retreats to their compound to fight a war of all against all, perhaps organized via local Mad-Max-style warlords. But in places where this happens, everyone dies, or moves to places where something else happens.

Many take this as an opportunity to renew their favorite debate, on the right roles for government in society. But while there are clearly many strong roles for government to play in such a situation, it seems unlikely that government can smoothly step into all of the roles required here. Instead, we need an effective industry, to make food, collect its inputs, allocate its workers, and distribute its products. And we need to prepare enough to allow a smooth transition in a crisis; waiting until after the sunlight goes to try to plan this probably ends badly.

Thus while there are important technical aspects of this problem, the core of the problem is social: how to preserve functioning social institutions in a crisis. So I call to social scientist superheroes: we light the “bat signal”, and call on you to apply your superpowers. How can we keep enough peace to make enough food, so we don’t all starve, if Earth loses sunlight for a decade?

To learn more on making food without sunlight, see ALLFED.


MRE Futures, To Not Starve

The Meal, Ready-to-Eat – commonly known as the MRE – is a self-contained, individual field ration in lightweight packaging bought by the United States military for its service members for use in combat or other field conditions where organized food facilities are not available. While MREs should be kept cool, they do not need to be refrigerated. .. MREs have also been distributed to civilians during natural disasters. .. Each meal provides about 1200 Calories. They .. have a minimum shelf life of three years. .. MREs must be able to withstand parachute drops from 380 metres, and non-parachute drops of 30 metres. (more)

Someday, a global crisis, or perhaps a severe regional one, may block 10-100% of the normal food supply for up to several years. This last week I attended a workshop set up by ALLFED, a group exploring new food sources for such situations. It seems that few people need to starve, even if we lose 100% of food for five years! And feeding everyone could go a long way toward keeping such a crisis from escalating into a worse catastrophic or existential risk. But for this to work, the right people, with the means and will to act, need to be aware of the right options at the right time. And early preparation, before a crisis, may go a long way toward making this feasible. How can we make this happen?

In this post I will outline a plan I worked out at this workshop, a plan intended to simultaneously achieve several related goals:

  1. Support deals for food insurance expressed in terms that ordinary people might understand and trust.
  2. Create incentives for food producers, before and during a crisis, to find good local ways to make and deliver food.
  3. Create incentives for researchers to find new food sources, develop working processes, and demonstrate their feasibility.
  4. Share information about the likelihood and severity of food crises in particular times, places, and conditions.

My idea starts with a new kind of MRE, one inspired by but not the same as the familiar military MRE. This new MRE would also be ready to eat without cooking, and also have minimum requirements for calories (after digesting), nutrients, lack of toxins, shelf life, and robustness to shocks. But, and this is key, suppliers would be free to meet these requirements using a wide range of exotic food options, including bacteria, bugs, and rats. (Or more conventional food made in unusual ways, like sugar from corn stalks or cows eating tree leaves.) It is this wide flexibility that could actually make it feasible to feed most everyone in a crisis. MREs might be graded for taste quality, perhaps assigned to three different taste quality levels by credentialed food tasters.

As an individual, you might want access to a source of MREs in a crisis. So you, or your family, firm, club, city, or nation, may want to buy or arrange for insurance which guarantees access to MREs in a crisis. A plausible insurance deal might promise access to so many MREs of a certain quality level per time period, delivered at standard periodic times to a standard location “near” you. That is, rather than deliver MREs to your door on demand, you might have to show up at a certain more central location once a week or month to pick up your next batch of MREs.

The availability of these MREs might be triggered by a publicly observable event, like a statistical average of ordinary food prices over some area exceeding a threshold. Or, more flexibly, standard MRE insurance might always give one the right to buy, at a pre-declared high price and at standard places and times, a certain number of MREs per time period. Those who fear not having enough cash to pay this pre-declared MRE price in a crisis might separately arrange for straight financial insurance, which pays cash tied either to a publicly triggered event, or to a market MRE price. Or the two approaches could be combined, so that MREs are available at a standard price during certain public events.

The organizations that offer insurance need ways to assure customers that they can actually deliver on their promises to offer MREs at the stated times, places, and prices, given relevant public events. In addition, they want to minimize the prices they pay for these supplies of MREs, and encourage suppliers to search for low cost ways to make MREs.

This is where futures markets could help. In a futures market for wheat, people promise to deliver, or to take delivery, of certain quantities of certain types of wheat at particular standard times and places. Those who want to ensure a future supply of wheat against risks of changing prices can buy these futures, and those who grow wheat can ensure a future revenue for their wheat by selling futures. Most traders in futures markets are just speculating, and so arrange to leave the market before they’d have to make or take delivery. But the threat of making or taking delivery disciplines the prices that they pay. Those who fail to make or take delivery as promised face large financial and other penalties.
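The hedging logic described above can be sketched in a few lines. The prices here are hypothetical, and this ignores margin, fees, and delivery details; the point is only that holding a futures contract fixes the buyer’s net cost at the contract price, whatever the spot price turns out to be:

```python
def hedged_cost(contract_price, spot_price):
    """Net cost per unit for a buyer who locked in contract_price with a
    futures contract: pay the spot price, but gain (spot - contract) on
    the futures position, so the net is always contract_price."""
    futures_gain = spot_price - contract_price
    return spot_price - futures_gain

# A hypothetical wheat buyer who bought futures at 5.0 pays 5.0 net,
# whether the spot price ends up low or high.
print(hedged_cost(5.0, 3.0))  # 5.0
print(hedged_cost(5.0, 9.0))  # 5.0
```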

Analogously, those who offer MRE insurance could use MRE futures markets to ensure an MRE supply, and to convince clients that they have ensured a supply. Yes, compared to the terms of the insurance offered by insurance organizations, the futures markets may offer fewer standard times, places, quality levels, and triggering public events. (Though the lab-tested but not field-tested tech of combinatorial markets makes feasible far more combinations.) Even so, customers might find it easy to believe that, if necessary, an organization that has bought futures for a few standard times and places could actually take delivery of these futures contracts, store the MREs for short periods, and deliver them to the more numerous times and places specified in their insurance deals.

MRE futures markets could also assure firms that explore innovative ways to make MREs of a demand for their product. By selling futures to deliver MREs at the standard times and places, they might fund their research, development, and production. When it came time to actually deliver MREs, they might make side deals with local insurance organizations to avoid any extra storage and transport costs of actually transferring MREs according to the futures contract details.

To encourage innovation, and to convince everyone that the system actually works, some patron, perhaps a foundation or government, could make a habit of periodically but randomly announcing large buy orders for MRE futures at certain times and places in the near future. They actually take delivery of the MREs, and then auction them off to whomever shows up there then to taste the MREs at a big social event. In this way ordinary people can sometimes hold and taste the MREs, and we can all see that there is a system capable of producing and delivering at least modest quantities on short notice. The firms who supply these MREs will of course have to set up real processes to actually deliver them, and be paid big premiums for their efforts.

These new MREs may not meet current regulatory requirements for food, and it may not be easy to adapt them to meet such requirements. Such requirements should be relaxed in a crisis, via a new crisis regulatory regime. It would be better to set that regime up ahead of time, instead of trying to negotiate it during a crisis. Such a new regulatory regime could be tested during these periodic random big MRE orders. Regulators could test the delivered MREs and only let people eat the ones that pass their tests. Firms that had passed tests at previous events might be pre-approved for delivering MREs to future events, at least if they didn’t change their product too much. And during a real crisis, such firms could be pre-approved to rapidly increase production and delivery of their product. This offers an added incentive for firms to participate in these tests.

MRE futures markets might also help the world to coordinate expectations about which kinds of food crises might appear when under what circumstances. Special conditional futures contracts could be created, where one only promises to deliver MREs given certain world events or policies. If the event doesn’t happen, you don’t have to deliver. The relative prices of future contracts for different events and policies would reveal speculator expectations about how the chance and severity of food crises depend on such events and policies.

And that’s my big idea. Yes it will cost real resources, and I of course hope we never have to use it in a real crisis. But it seems to me far preferable to most of us starving to death. Far preferable.


Both Plague & War Cut Capital Share?

I just finished reading Walter Scheidel’s The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century, and found myself agreeing with Scheidel against his critics. Scheidel is a historian who says that inequality has mainly risen in history when income increased, making more inequality physically possible, and when scale and complexity increased, creating more and bigger chokepoints (e.g., CEO, king) whose controllers can demand more rents.

Big falls in inequality have mainly come from big collapses, such as big wars, revolutions, plagues, and state collapses, which are usually associated with violence. This suggests that a big inequality fall is unlikely anytime soon, and we shouldn’t wish for it, as it would likely come from vast destruction and violence. All of which I find very plausible.

While usually big wars via mass mobilization didn’t change inequality much, in the mid 1900s such wars seemed to have gone along with a big taste for redistribution and revolution. This happened to a lesser extent in Ancient Greece and Rome, and fits a story wherein more forager-like cultures care more about redistribution, especially when primed by visible mass sacrifice.

I noticed one puzzling pattern, however. Income in the world goes to owners of capital, to owners of labor, and to those who can take without contributing to production. As the rich usually get more of their income from capital, compared to labor, one thing that can cause less inequality is a change that makes capital earn a smaller share of total income. The puzzling pattern I noticed is that even though big plagues and big wars should have opposite effects on the capital share, both of them seem to have cut inequality, and both apparently in part via cutting the capital share of income! Let me explain.

Big plagues cut the number of workers without doing much to capital, while big wars like WWI & WWII destroy a much larger fraction of capital than they do of labor. Which event, big plague or big war, reduces the share that capital earns? The answer depends on whether capital and labor are complements or substitutes. If they are substitutes, then destroying capital should cut the capital share of income. But when they are complements, it is destroying labor that should cut the capital share.

The simple middle position between complements and substitutes is the power law (a.k.a. “Cobb-Douglas”) production function, where output Y = L^a * K^(1-a), for labor L, capital K, and a constant a in (0,1). (Partial derivatives set wages w = dY/dL and capital rent r = dY/dK.) In this situation, the capital share of income r*K/(r*K + w*L) = 1-a, and so never changes.

If, for example, labor L falls by a factor of 2, while capital K stays the same, then wages rise by the factor 2^(1-a) while rents fall by the factor 2^a, with the product of these factors being 2. Compared to this simple middle position, if labor and capital are instead complements, then in this example wages would rise and rents would fall by larger factors. If labor and capital are instead substitutes, the factors would be smaller.
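The Cobb-Douglas arithmetic can be checked directly (my own sketch; the labor exponent a = 0.6 is an assumption for illustration only):

```python
# Under Cobb-Douglas Y = L**a * K**(1-a), halving labor moves wages and
# rents by offsetting factors, leaving the capital share fixed at 1-a.

a = 0.6  # assumed labor exponent, for illustration only

def wage(L, K, a):
    return a * L**(a - 1) * K**(1 - a)   # w = dY/dL

def rent(L, K, a):
    return (1 - a) * L**a * K**(-a)      # r = dY/dK

w0, r0 = wage(1.0, 1.0, a), rent(1.0, 1.0, a)  # before
w1, r1 = wage(0.5, 1.0, a), rent(0.5, 1.0, a)  # after labor halves

wage_factor = w1 / w0   # wages rise by 2**(1-a), about 1.32 here
rent_factor = r0 / r1   # rents fall by 2**a, about 1.52 here

# The product of the two factors is exactly 2, and the capital share
# r*K/(r*K + w*L) is 1-a = 0.4 both before and after.
share0 = r0 * 1.0 / (r0 * 1.0 + w0 * 1.0)
share1 = r1 * 1.0 / (r1 * 1.0 + w1 * 0.5)
print(wage_factor, rent_factor, share0, share1)
```

Note that 2^(1-a) is well below 2 for any plausible exponent, so the Black Death observation that halving the population doubled (or in England tripled) wages already points toward complementarity rather than Cobb-Douglas.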

Economic papers based on data over the last century usually find labor and capital to be complements, though there are notable exceptions such as Thomas Piketty’s blockbuster book. That fits with data on the Black Death. In the century from 1330 to 1430, Europe’s population fell roughly in half, wages doubled, and rents fell a lot. In England, wages tripled. Similar behavior is seen in other large ancient plagues – wages rose by a factor of four in Mexico! This looks more like what you’d see with complementarity than with a simple power law.

World War I (WWI) killed about 1% of the world population, while the concurrent 1918 flu killed about 4%. World War II (WWII) killed about 3%. But capital was cut much more. The ratio of private wealth to national income fell by a factor of two world wide, and by even larger factors in the main warring nations (source):
[Figure: ratio of private wealth to national income over time]

Now for the puzzle. If capital and labor were still complements during WWI & WWII, then destroying a lot more capital than labor should have resulted in rents on capital rising by a factor so big that the product of the two factors increases the capital share of income. Is that what happened? Consider Japan, where 5% of the population died:

Real [Japanese] farm rents fell by four-fifths between 1941 and 1945, and from 4.4% of national income in the mid 1930s to 0.3% in 1946. .. By September 1945, a quarter of the country’s physical capital stock had been wiped out. Japan lost 80% of its merchant ships, 25% of all buildings, 21% of household furnishings and personal effects, 34% of factory equipment, and 24% of finished products. The number of factories in operation and the size of the workforce they employed nearly halved during the final year of the war. p.121

Gains from capital almost disappeared during the war years: the share of rent and interest income in total national income fell from a sixth in the mid-1930s to only 3% in 1946. In 1938, dividends, interest, and rental income together had accounted for about a third of the income of the top 1%, with the remainder divided between business and employment income. By 1945, the share of capital income had dropped to less than an eighth and that of wages to a tenth; business income was the only significant revenue source left to the (formerly) wealthy. p.122

In 1946, real GNP was 45% lower than it had been in 1937. p.124

The sharp drop in top income shares .. were caused above all by a decline in the return on capital. .. Most of these changes occurred during the war itself. p.128

Consider also France and Germany (which lost 2% & 11% of people in WWII, respectively):

During WWI, .. a third of the French capital stock was destroyed, the share of capital income in national household income fell by a third, and GDP contracted by the same proportion. ..In WWII, .. two-thirds of the capital stock was wiped out. .. real rents fell by 90% between 1913 and 1950. p.147

[German] rentiers lost the most: their share of national income plummeted from 15% to 3% even as entrepreneurs were able to maintain their share .. real national income was a quarter to a third lower in 1923 than it had been in 1913. p.152

Maybe I’m missing something, but I don’t see how this is remotely consistent with labor and capital being complements. Yet complementarity seems a good fit to big ancient plagues and more recent empirical studies. What gives?


Stock Vs. Flow War

When our farmer ancestors warred, they often went about as far as they could to apply all available resources to their war efforts. This included converting plowshares into swords, ships into navies, farmers into soldiers, granaries into soldiers on the move, good will into allies, and cash into foreign purchases. When wars went long and badly, such resources were often quite depleted by the end. Yet warring farmers only rarely went extinct. Why?

The distinction between stock and flow is a basic one in engineering and finance. Stocks allow flows. A granary is a stock, and it can produce a flow of grain to eat, but that flow will end if the stock is not sufficiently replenished with every harvest. A person is a stock, which can produce work every week, but to make that last we need to create and train new people. Many kinds of stocks have limits on the flows they can produce. While you might be able to pull grain from a granary as fast as you like, you can only pull one hour of work from a worker per hour.
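The granary-versus-worker contrast can be put in a toy simulation (my own illustration, not from the post): the only difference between the two stocks is whether there is a cap on the flow that can be drawn per period.

```python
# Toy sketch of the stock/flow distinction: a granary can be drawn down
# as fast as demand requires, while a worker yields at most one hour of
# work per hour, so a flow limit caps how fast the stock depletes.

def deplete(stock, demand_per_period, periods, max_flow=None):
    """Draw demand from a stock each period, capped by an optional flow limit.

    max_flow=None models a granary-like stock (any draw rate allowed);
    a finite max_flow models a worker-like stock. Returns remaining stock.
    """
    for _ in range(periods):
        flow = demand_per_period if max_flow is None else min(demand_per_period, max_flow)
        stock = max(0.0, stock - flow)
    return stock

# A "total war" demanding 30 units/period for 4 periods, against a stock of 100:
granary = deplete(100.0, 30.0, 4)                  # no flow limit: fully emptied
worker  = deplete(100.0, 30.0, 4, max_flow=10.0)   # flow-limited: most survives
print(granary, worker)
```

The war ends with the unlimited-flow stock at zero but the flow-limited stock mostly intact, which is the post's point about why flow limits once bounded war's destructiveness.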

Natural limits on the flows that our stocks can produce have in the past limited the destructiveness of war. Even when war burned the crops, knocked down stone buildings, and killed most of the people, farmland usually bounced back in a few years, and human and animal populations could grow back in a few generations. Stones were restacked to make new buildings. The key long-term stocks of tech and culture were preserved, allowing for a quick rebuilding of previous professions, towns, and trade routes.

Future technologies are likely to have weaker limits on the conversion of stocks into flows. When we have more fishing boats we can more quickly deplete the stock of fish. Instead of water wheels that must wait for water to come down a stream, we make dams that give us water when we want. When we tap oil wells instead of killing whales for oil, the rate at which we can extract oil grows with the size and number of our wells. Eventually we may tap the sun itself not just by basking in its sunlight, but by uplifting its material and running more intense fusion reactors.

Our stronger abilities to turn stocks into flows can be great in peacetime, but they are problematic in wartime. Yes, the side with stronger abilities gains an advantage in war, but after a fierce war the stocks will be lower. Thus improving technology is making war more destructive, not just by blowing up more with each bomb, but by allowing more resources to be tapped more quickly to support war efforts.

This is another way of saying what I was trying to say in my last post: improving tech can make war more destructive, increasing the risk of extinction via war. When local nature was a key stock, diminishing returns in extracting resources from nature limited how much we could destroy during total war. In contrast, when resources can be extracted as quickly and easily as grain from a granary, war is more likely to take nearly all of the resources.

Future civilization should make resources more accessible, not just to extract more kinds of slow flows, but also to extract fast flows more cheaply. While this will make it easier to flexibly use such stocks in peacetime, it also suggests a faster depletion of stocks during total war. Only the stocks that cannot be depleted, like technology and culture, may remain. And once the sun is available as a rapidly depletable resource, it may not take many total wars to deplete it.

This seems to me our most likely future great filter, and thus extinction risk. War becomes increasingly destructive, erasing stocks that are not fully replenished between wars, and often taking us to the edge of a small fragile population that could be further reduced by other disasters. And if the dominant minds and cultures speed up substantially, as I expect, that might speed up the cycle of war, allowing less time to recover between total wars.


Beware General Visible Prey

Charles Stross recently on possible future great filters:

So IO9 ran a piece by George Dvorsky on ways we could wreck the solar system. And then Anders Sandberg responded in depth on the subject of existential risks, asking what conceivable threats have big enough spatial reach to threaten an interplanetary or star-faring civilization. … The implication of an [future great filter] is that it doesn’t specifically work against life, it works against interplanetary colonization. … much as Kessler syndrome could effectively block all access to low Earth orbit as a side-effect of carelessly launching too much space junk. Here are some example scenarios: …

Simplistic warfare: … Today’s boringly old-hat chemical rockets, even in the absence of nuclear warheads, are formidably destructive weapons. … War, or other resource conflicts, within a polity capable of rapid interplanetary or even slow interstellar flight, is a horrible prospect.

Irreducible complexity: I take issue with one of Anders’ assumptions, which is that a multi-planet civilization is … not just … distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. … I doubt that we could make a self-sufficient habitat that was capable of maintaining its infrastructure and perpetuating and refreshing its human culture with a population any smaller than high-single-digit millions. … Building robust self-sufficient off-world habitats … is vastly more expensive than building an off-world outpost and shipping rations there, as we do with Antarctica. …

Griefers: … All it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era. (more)

These are indeed scenarios of concern. But I find it hard to see how, by themselves, they could add up to a big future filter.


One In A Billion?

At CATO Unbound this month, David Brin’s lead essay makes two points:

  1. We probably shouldn’t send messages out to aliens now on purpose, and more surely we shouldn’t let each group decide for itself whether to send.
  2. The lack of visible aliens may be explained in part via a strong tendency of all societies to become “feudal”, with elites “suppressing merit competition and mobility, ensuring that status would be inherited” and resulting in “scientific stagnation.”

In my official response at CATO Unbound, I focus on the first issue, agreeing with Brin, and responding to a common counter-argument, namely that we now yell to aliens far more by accident than on purpose. I ask if we should cut back on accidental yelling, which we now do most loudly via the Arecibo planetary radar. Using the amount we spend on Arecibo yelling to estimate the value we get there, I conclude:

We should cut way back on accidental yelling to aliens, such as via Arecibo radar sending, if continuing at current rates would over the long run bring even a one in a billion chance of alerting aliens to come destroy us. And even if this chance is now below one in a billion, it will rise with time and eventually force us to cut back. So let’s start now to estimate such risks, and adapt our behavior accordingly. (more)

As an aside, I also note:

I’m disturbed to see that a consensus apparently arose among many in this area that aliens must be overwhelmingly friendly. Most conventional social scientists I know would find this view quite implausible; they see most conflict as deeply intractable. Why is this kind-aliens view then so common?

My guess: non-social-scientists have believed modern cultural propaganda claims that our dominant cultures today have a vast moral superiority over most other cultures through history. Our media have long suggested that conflictual behaviors like greed, theft, aggression, revenge, violence, war, destruction of nature, and population growth pressures all result from “backward” mindsets from “backward” cultures.
