Tag Archives: Disaster

Two Types of Envy

I’ve long puzzled over the fact that most of the concern I hear expressed on inequality is about the smallest of (at least) seven kinds: income inequality between the families of a nation at a time (IIBFNAT). Expressed concern has greatly increased over the last half decade. While most people don’t actually know that much about their income ranking, many seem to be trying hard to inform those who rank low of their low status. Their purpose seems to be to induce envy, to induce political action to increase redistribution. They hope to induce these people to identify more with this low income status, and to organize politically around this shared identity.

Many concerned about IIBFNAT are also eager to remind everyone of and to celebrate historical examples of violent revolution aimed at redistribution (e.g., Les Misérables). The purpose here seems to be to encourage support for redistribution by reminding everyone of the possibility of violent revolution. They remind the poor that they could consider revolting, and remind everyone else that a revolt might happen. This strengthens an implicit threat of violence should redistribution be insufficient.

Now consider this recent news:

Shortly before the [recent Toronto van] attack, a post appeared on the suspect’s Facebook profile, hailing the commencement of the “Incel Rebellion”. …There is a reluctance to ascribe to the “incel” movement anything so lofty as an “ideology” or credit it with any developed, connected thinking, partly because it is so bizarre in conception. … Standing for “involuntarily celibate”,… it [has] mutate[d] into a Reddit muster point for violent misogyny. …

It is quite distinctive in its hate figures: Stacys (attractive women); Chads (attractive men); and Normies (people who aren’t incels, i.e. can find partners but aren’t necessarily attractive). Basically, incels cannot get laid and they violently loathe anyone who can. Some of the fault, in their eyes, is with attractive men who have sex with too many women. …

Incels obsess over their own unattractiveness – dividing the world into alphas and betas, with betas just your average, frustrated idiot dude, and omegas, as the incels often call themselves, the lowest of the low, scorned by everyone – they then use that self-acceptance as an insulation.

Basically, their virginity is a discrimination or apartheid issue, and only a state-distributed girlfriend programme, outlawing multiple partners, can rectify this grand injustice. … Elliot Rodger, the Isla Vista killer, uploaded a video to YouTube about his “retribution” against attractive women who wouldn’t sleep with him (and the attractive men they would sleep with) before killing six people in 2014.  (more)

One might plausibly argue that those with much less access to sex suffer to a similar degree as those with low income, and might similarly hope to gain from organizing around this identity, to lobby for redistribution along this axis and to at least implicitly threaten violence if their demands are not met. As with income inequality, most folks concerned about sex inequality might explicitly reject violence as a method, at least for now, and yet still be encouraged privately when the possibility of violence helps move others to support their policies. (Sex could be directly redistributed, or cash might be redistributed in compensation.)

Strikingly, there seems to be little overlap between those who express concern about income and sex inequality. Among our cultural elites, the first concern is high status, and the latter concern low status. For example, the article above seems not at all sympathetic to sex inequality concerns.

Added 27Apr: Though the news article I cite focuses on male complaints, my comments here are about sex inequality in general, applied to both men and women. Not that I see anything particularly wrong with focusing on men sometimes. Let me also clarify that personally I’m not very attracted to non-insurance-based redistribution policies of any sort, though I do like to study what causes others to be so attracted.

Added 10pm 27Apr: A tweet on this post induced a lot of discussion on Twitter, much of which accuses me of advocating enslaving and raping women. Apparently many people can’t imagine any other way to reduce or moderate sex inequality. (“Redistribute” literally means “change the distribution.”) In the post I mentioned cash compensation; more cash can make people more attractive and better able to afford legalized prostitution. Others have mentioned promoting monogamy and discouraging promiscuity. Surely there are dozens of other possibilities; sex choices are influenced by a great many factors and each such factor offers a possible lever for influencing sex inequality. Rape and slavery are far from the only possible levers!

Many people are also under the impression that we redistribute income mainly because recipients would die without such redistribution. In rich nations this can account for only a tiny fraction of redistribution. Others say it is obvious that redistribution is only appropriate for commodities, and sex isn’t a commodity. But we take from the rich even when their wealth is in the form of far-from-commodity unique art works, buildings, etc.

Also, it should be obvious that “sex” here refers to a complex package that is desired, which in individual cases may or may not be satisfied by sexbots or prostitutes. But whatever the package people want, we can and should ask how we might get more of it to them.

Finally, many people seem to be reacting primarily to some impression they’ve gained that self-identified “incels” are mostly stupid rude obnoxious arrogant clueless smelly people. I don’t know if that’s true and I don’t care; I’m focused on the issue that they help raise, not their personal or moral worth.


How Deviant Recent AI Progress Lumpiness?

I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have so far seen the same in computer science (CS) and AI, even though there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

I emphasize how all the mighty human edifice of Go knowledge … was entirely discarded by AlphaGo Zero with a subsequent performance improvement. … Sheer speed of capability gain should also be highlighted here. … you don’t even need self-improvement to get things that look like FOOM. … the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.

I replied that, just as seeing an unusually large terror attack like 9-11 shouldn’t much change your estimate of the overall distribution of terror attacks, nor should seeing one big earthquake much change your estimate of the overall distribution of earthquakes, seeing one big AI research gain like AlphaGo Zero shouldn’t much change your estimate of the overall distribution of AI progress. (Seeing two big lumps in a row, however, would be stronger evidence.) In his recent podcast with Sam Harris, Eliezer said:

Y: I have claimed recently on facebook that now that we have seen Alpha Zero, Alpha Zero seems like strong evidence against Hanson’s thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. …

H: What’s the best version of his argument, and then why is he wrong?

Y: Nothing can prepare you for Robin Hanson! Ha ha ha. Well, the argument that Robin Hanson has given is that these systems are still immature and narrow, and things will change when they get general. And my reply has been something like, okay, what changes your mind short of the world actually ending. If your theory is wrong do we get to find out about that at all before the world does.

(Sam didn’t raise the subject in his recent podcast with me.)

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

The citation distributions of papers published in the same discipline and year lie on the same curve for most disciplines, if the raw number of citations c of each paper is divided by the average number of citations c0 over all papers in that discipline and year. The dashed line is a lognormal fit. …

The probability of citing a paper grows with the number of citations that it has already collected. Such a model can be augmented with … decreasing the citation probability with the age of the paper, and a fitness parameter, unique to each paper, capturing the appeal of the work to the scientific community. Only a tiny fraction of papers deviate from the pattern described by such a model.

It seems to me quite reasonable to expect that fields where real research progress is lumpier would also display a lumpier distribution of citations. So if CS, AI, or ML research is much lumpier than in other areas, we should expect to see that in citation data. Even if your hypothesis is that only ML research is lumpier, and only in the last 5 years, we should still have enough citation data to see that. My expectation, of course, is that recent ML citation lumpiness is not much bigger than in most research fields through history.
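As a sketch of how such a test might look, here is a minimal example (the field data here are synthetic stand-ins; real per-field, per-year citation lists would replace them):

```python
import math
import random

def tail_share(citations, top=0.01):
    """Lumpiness measure: share of all citations captured by the
    top 1% of papers in a field-year."""
    s = sorted(citations, reverse=True)
    k = max(1, int(len(s) * top))
    return sum(s[:k]) / sum(s)

def rescale(citations):
    """Divide each paper's count c by the field-year average c0, as in
    the cited Science paper; if lumpiness is universal, rescaled
    distributions from different fields should lie on the same curve."""
    c0 = sum(citations) / len(citations)
    return [c / c0 for c in citations]

# Synthetic stand-ins: two lognormal fields with different average
# citation counts but the same shape, mimicking the paper's finding.
random.seed(0)
field_a = [math.exp(random.gauss(1.0, 1.2)) for _ in range(10_000)]
field_b = [math.exp(random.gauss(3.0, 1.2)) for _ in range(10_000)]

print(f"top-1% share, field A: {tail_share(rescale(field_a)):.3f}")
print(f"top-1% share, field B: {tail_share(rescale(field_b)):.3f}")
# Similar values, as the universal-lumpiness result predicts. A genuinely
# lumpier field (the ML-is-special hypothesis) would show a clearly
# larger top-1% share even after rescaling.
```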

Added 24Mar: You might save the hypothesis that research areas vary greatly in lumpiness by postulating that the number of citations of each research advance goes as the rank of the “size” of that advance, relative to its research area. The distribution of ranks is always the same, after all. But this would be a surprising outcome, and hence seems unlikely; I’d want to see clear evidence that the distribution of lumpiness of advances varies greatly across fields.

Added 27Mar: More directly relevant might be data on distributions of patent value and citations. Do these distributions vary by topic? Are CS/AI/ML distributed more unequally?


Growth Is Change. So Is Death.

The very readable book The Wizard and the Prophet tells the story of environmental prophet William Vogt investigating the apocalypse-level deaths of guano-making birds near Peru. When he discovered the cause in the El Niño weather cycle, his policy recommendations were to do nothing to mitigate this natural cause; he instead railed against many much smaller human influences, demanding their reversal. A few years later his classic 1948 screed Road To Survival, which contained pretty much all the standard environmental advice and concepts used today, continued to warn against any but small human-caused changes to the environment, while remaining largely indifferent to even huge natural changes.

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
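A minimal simulation of the contrast (parameter values are illustrative only):

```python
import random

def simulate(n_steps=1000, revert=0.0, trend=0.0):
    """One path of x[t+1] = x[t] + trend - revert * x[t] + noise.
    revert > 0 gives mean reversion; revert = 0 gives a basic
    trend plus random walk."""
    x = 0.0
    for _ in range(n_steps):
        x += trend - revert * x + random.gauss(0, 1)
    return x

random.seed(0)
print(f"mean-reverting endpoint:  {simulate(revert=0.2):7.1f}")  # stays near zero
print(f"trend + random walk end:  {simulate(trend=0.05):7.1f}")  # drifts far
```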

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters, the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes, even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

But of course few are very good at resolving their near versus far incoherences. And so the positions people take end up depending a lot on how they first framed the key issues, as in terms of short or long term changes.


Kaczynski’s Collapse Theory

Many people argue that we should beware of foreigners, and people from other ethnicities. Beware of visiting them, trading with them, talking to them, or allowing them to move here. The fact that so many people are willing to argue for such conclusions is some evidence in favor of them. But the fact that the arguments offered are so diverse, and so often contradict one another, takes away somewhat from the strength of this evidence. This pattern looks like people tend to have a preconceived conclusion for which they opportunistically embrace any random arguments they can find.

Similarly, many argue that we should be wary of future competition, especially if that might lead to concentrations of power. I recently posted on my undergrad law & econ students’ largely incoherent fears of one group taking over the entire solar system, and how Frederick Engels expressed related fears back in 1844. And I’ve argued on this blog with my ex-co-blogger regarding his concerns that if future AI results from competing teams, one team might explode to suddenly take over the world. In this post I’ll describe Ted “Unabomber” Kaczynski’s rather different theory on why we should fear competition leading to concentration, from his recent book Anti Tech Revolution.

Kaczynski claims that the Fermi paradox, i.e., the fact that the universe looks dead everywhere, is explained by the fact that technological civilizations very reliably destroy themselves. When this destruction happens naturally, it is so thorough that no humans could survive. Which is why his huge priority is to find a way to collapse civilization sooner, so that at least some humans survive. Even a huge nuclear war is preferable, as at least some people survive that.

Why must everything collapse? Because, he says, natural-selection-like competition only works when competing entities have scales of transport and talk that are much less than the scale of the entire system within which they compete. That is, things can work fine when bacteria that each move and talk across only meters compete across an entire planet. The failure of one bacterium doesn’t then threaten the planet. But when competing systems become complex and coupled on global scales, then there are always only a few such systems that matter, and breakdowns often have global scopes.

Kaczynski dismisses the possibility that world-spanning competitors might anticipate the possibility of large correlated disasters, and work to reduce their frequency and mitigate their harms. He says that competitors can’t afford to pay any cost to prepare for infrequent problems, as such costs hurt them in the short run. This seems crazy to me, as most of the large competing systems we know of do in fact pay a lot to prepare for rare disasters. Very few correlated disasters are big enough to threaten to completely destroy the whole world. The world has had global scale correlation for centuries, with the world economy growing enormously over that time. And yet we’ve never even seen a factor of two decline, while at least thirty factors of two would be required for a total collapse. And while it should be easy to test Kaczynski’s claim in small complex systems of competitors, I know of no supporting tests.
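To put rough numbers on “thirty factors of two” (my arithmetic, with an illustrative figure for current world product):

```python
# Total collapse means output falling to roughly nothing; from a world
# product of ~$100 trillion/year, that takes about thirty halvings.
world_product = 1e14            # ~$100 trillion/year, rough figure
after_30_halvings = world_product / 2**30
print(f"${after_30_halvings:,.0f}/year")  # ~$93,000/year for the whole world
```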

Yet all dozen of the reviews I read of Kaczynski’s book found his conclusion here to be obviously correct. Which seems to me evidence that a great many people find the worry about future competitors to be so compelling that they endorse most any vaguely plausible supporting argument. Which I see as weak evidence against that worry.

Yes of course correlated disasters are a concern, even when efforts are made to prepare against them. But it’s just not remotely obvious that competition makes them worse, or that all civilizations are reliably and completely destroyed by big disasters, so much so that we should prefer to start a big nuclear war now that destroys civilization but leaves a few people alive. Surely if we believed his theory a better solution would be to break the world into a dozen mostly isolated regions.

Kaczynski does deserve credit for avoiding common wishful thinking in some of his other discussion. For example, he says that we can’t much control the trajectory of history, both because it is very hard to coordinate on the largest scales, and because it is hard to estimate the long term consequences of many choices. He sees how hard it is for social movements to actually achieve anything substantial. He notes that futurists who expect to achieve immortality and then live for a thousand years too easily presume that a fast changing competitive world will still have need for them. And while I didn’t see him actually say it, I expect he’s the sort of person who’d make the reasonable argument that individual humans are just happier in a more forager-like world.

Kaczynski isn’t stupid, and he’s more clear-headed than most futurists I read. Too bad his low mood leans him so strongly to embrace a poorly-argued inevitable collapse story.



Dragon Debris?

Apparently the causal path from simple dead matter to an expanding visible civilization is very unlikely. Almost everything that starts along this path is blocked by a great filter, which might be one extremely hard step, or many merely very hard steps. The most likely location of this great filter is that the origin of life is very very hard. Which is good news, because otherwise we’d have to worry a lot about our future, via what fraction of the overall huge filter still lies ahead of us. And if we ever find evidence of life in space that isn’t close to the causal path that led to us, that will be big bad news, and we’ll need to worry a lot more.

One of the more interesting future filter scenarios is a high difficulty of traveling between the stars. As we can easily see across the universe, we know that photons have few problems traveling very long distances. And since stars drift about at great speeds, we know that stars can also travel freely suffering little harm. But we still can’t be sure of the ease of travel for humans, or for the sort of things that our descendants might try to send between the stars. We have collected a few grains of interstellar dust, but still know little about them, and so don’t know how easy was their travel. We do know that most of the universe is made of dark matter and dark energy that we understand quite poorly. So perhaps “Here Be Dragons” lie in wait out there for our scale of interstellar travelers.

Many stars, like ours, are surrounded by a vast cloud of small icy objects. Every once in a while one of these objects falls into a rare orbit where it travels close to its star, and then it becomes a comet with a tail. Even more rarely, one should fall into an orbit that throws it out away from its star (almost always without doing much else to it). Such an object would then travel at the typical star speed between stars, and after billions of years it might perhaps pass near one other star; the chance of two such encounters is very low. And if the space between stars is as mild as it seems, it should arrive looking pretty much as it left.

Astronomers have been waiting for a while to see such an interstellar visitor, and were puzzled to have not yet seen one. They expected it to look like a comet, except traveling a lot faster than do most comets. Within roughly a year of a new instrument coming online that could see such things better, we finally saw such a visitor in the last few months. It looked like what we expect in some ways. It is traveling at roughly the speed we’d expect, its size is unremarkable, and its color is roughly what we expect from ancient small space objects. But it is suspiciously weird in several other apparently-unrelated ways.

First, its orbit is weird. Its direction of origin is 6 degrees from the sun’s motion vector; only one in 365 random directions would be closer. And among the travel paths where we could have seen this object, only one in 100 such paths would have traveled closer to the sun than did this one (source: Turner). But one must apparently invoke very strange and unlikely hypotheses to believe these parameters were anything but random. For now, I won’t go there.

Second, the object itself is weird. It does not have a comet tail, and so has apparently lost most of its volatiles like water. If this is typical, it explains why we haven’t seen objects like this before. The object seems to be very elongated, much more than any other natural object we’ve ever seen in our solar system. And it is rotating very fast, so fast that it would fly apart if it were made out of the typical pile of lightly attached rubble. So at some point it experienced an event so dramatic as to melt away its volatiles, melt it into a solid object, stretch it to an extreme, and set it spinning at an extreme rate. After which it drifted for long enough to acquire the usual color of ancient space objects.

This raises the suspicion that it perhaps encountered a dangerous “dragon” between the stars, making it “dragon debris.” If the timing of this event were random, we should see roughly one such visitor a year in the future, and with new better instruments coming online in a few years we should see them even faster. So within a decade we should learn if this first visitor is very unusual, or if we should worry a lot more about travel dangers between the stars.


Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two of his posts he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than do organizations today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is in most need. But you might well trust it to give you financial returns on your financial investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

[Output] over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, which Christiano supports to some degree, is that future AIs are smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.


An Outside View of AI Control

I’ve written much on my skepticism of local AI foom (= intelligence explosion). Recently I said that foom offers the main justification I understand for AI risk efforts now, as well as being the main choice of my Twitter followers in a survey. It was the main argument offered by Eliezer Yudkowsky in our debates here at this blog, by Nick Bostrom in his book Superintelligence, and by Max Tegmark in his recent book Life 3.0 (though he denied so in his reply here).

However, some privately complained to me that I haven’t addressed those with non-foom-based AI concerns. So in this post I’ll consider AI control in the context of a prototypical non-em non-foom mostly-peaceful outside-view AI scenario. In a future post, I’ll try to connect this to specific posts by others on AI risk.

An AI scenario is where software does most all jobs; humans may work for fun, but they add little value. In a non-em scenario, ems are never feasible. As foom scenarios are driven by AI innovations that are very lumpy in time and organization, in non-foom scenarios innovation lumpiness is distributed more like it is in our world. In a mostly-peaceful scenario, peaceful technologies of production matter much more than do technologies of war and theft. And as an outside view guesses that future events are like similar past events, I’ll relate future AI control problems to similar past problems.


Prepare for Nuclear Winter

If a 1km asteroid were to hit the Earth, the dust it kicked up would block most sunlight over most of the world for 3 to 10 years. There’s only a one in a million chance of that happening per year, however. Whew. However, there’s a ten times bigger chance that a super volcano, such as the one hiding under Yellowstone, might explode, for a similar result. And I’d put the chance of a full scale nuclear war at ten to one hundred times larger than that: one in ten thousand to one in one thousand per year. Over a century, that becomes a one to ten percent chance. Not whew; grimace instead.

There is a substantial chance that a full scale nuclear war would produce a nuclear winter, with a similar effect: sunlight is blocked for 3-10 years or more. Yes, there are good criticisms of the more extreme forecasts, but there’s still a big chance the sun gets blocked in a full scale nuclear war, and there’s even a substantial chance of the same result in a mere regional war, where only 100 nukes explode (the world now has 15,000 nukes).

I’ll summarize this as saying we face roughly a one in 10,000 chance per year of most all sunlight on Earth being blocked for 5 to 10 years. Which accumulates to become a 1% chance per century. This is about as big as your one in 9000 personal chance each year of dying in a car accident, or your one in 7500 chance of dying from poisoning. We treat both of these other risks as nontrivial, and put substantial efforts into reducing and mitigating such risks, as we also do for many much smaller risks, such as dying from guns, fire, drowning, or plane crashes. So this risk of losing sunlight for 5-10 years seems well worth reducing or mitigating, if possible.
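The accumulation arithmetic, for concreteness (the per-year rate is the rough estimate above):

```python
p_year = 1 / 10_000   # rough chance per year of losing most sunlight
p_century = 1 - (1 - p_year) ** 100
print(f"{p_century:.2%}")  # ~1.0%, i.e., roughly a 1% chance per century
```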

Even in the best case, the world has only enough stored food to feed everyone for about a year. If the population then gradually declined due to cannibalism of the living, falling in half every month, we’d all be dead in a few years. To save your family by storing ten years of food, you not only have to spend a huge sum now, you’d have to stay very well hidden or defended. Just not gonna happen.

Yeah, probably a few people live on, and so humanity doesn’t go extinct. But the only realistic chance most of us have of surviving in this scenario is to use our vast industrial and scientific abilities to make food. We actually know of many plausible ways to make more than enough food to feed everyone for ten years, even with no sunlight. And even if big chunks of the world economy are in shambles. But for that to work, we must preserve enough social order to make use of at least the core of key social institutions.

Many people presume that as soon as everyone hears about a big problem like this, all social institutions immediately collapse and everyone retreats to their compound to fight a war of all against all, perhaps organized via local Mad-Max-style warlords. But in places where this happens, everyone dies, or moves to places where something else happens.

Many take this as an opportunity to renew their favorite debate, on the right roles for government in society. But while there are clearly many strong roles for government to play in such a situation, it seems unlikely that government can smoothly step into all of the roles required here. Instead, we need an effective industry, to make food, collect its inputs, allocate its workers, and distribute its products. And we need to prepare enough to allow a smooth transition in a crisis; waiting until after the sunlight goes to try to plan this probably ends badly.

Thus while there are important technical aspects of this problem, the core of the problem is social: how to preserve functioning social institutions in a crisis. So I call to social scientist superheroes: we light the “bat signal”, and call on you to apply your superpowers. How can we keep enough peace to make enough food, so we don’t all starve, if Earth loses sunlight for a decade?

To learn more on making food without sunlight, see ALLFED.


MRE Futures, To Not Starve

The Meal, Ready-to-Eat – commonly known as the MRE – is a self-contained, individual field ration in lightweight packaging bought by the United States military for its service members for use in combat or other field conditions where organized food facilities are not available. While MREs should be kept cool, they do not need to be refrigerated. .. MREs have also been distributed to civilians during natural disasters. .. Each meal provides about 1200 Calories. They .. have a minimum shelf life of three years. .. MREs must be able to withstand parachute drops from 380 metres, and non-parachute drops of 30 metres. (more)

Someday, a global crisis, or perhaps a severe regional one, may block 10-100% of the normal food supply for up to several years. This last week I attended a workshop set up by ALLFED, a group exploring new food sources for such situations. It seems that few people need to starve, even if we lose 100% of food for five years! And feeding everyone could go a long way toward keeping such a crisis from escalating into a worse catastrophic or existential risk. But for this to work, the right people, with the means and will to act, need to be aware of the right options at the right time. And early preparation, before a crisis, may go a long way toward making this feasible. How can we make this happen?

In this post I will outline a plan I worked out at this workshop, a plan intended to simultaneously achieve several related goals:

  1. Support deals for food insurance expressed in terms that ordinary people might understand and trust.
  2. Create incentives for food producers, before and during a crisis, to find good local ways to make and deliver food.
  3. Create incentives for researchers to find new food sources, develop working processes, and demonstrate their feasibility.
  4. Share information about the likelihood and severity of food crises in particular times, places, and conditions.

My idea starts with a new kind of MRE, one inspired by but not the same as the familiar military MRE. This new MRE would also be ready to eat without cooking, and also have minimum requirements for calories (after digesting), nutrients, lack of toxins, shelf life, and robustness to shocks. But, and this is key, suppliers would be free to meet these requirements using a wide range of exotic food options, including bacteria, bugs, and rats. (Or more conventional food made in unusual ways, like sugar from corn stalks or cows eating tree leaves.) It is this wide flexibility that could actually make it feasible to feed most everyone in a crisis. MREs might be graded for taste quality, perhaps assigned to three different taste quality levels by credentialed food tasters.

As an individual, you might want access to a source of MREs in a crisis. So you, or your family, firm, club, city, or nation, may want to buy or arrange for insurance which guarantees access to MREs in a crisis. A plausible insurance deal might promise access to so many MREs of a certain quality level per time period, delivered at standard periodic times to a standard location “near” you. That is, rather than deliver MREs to your door on demand, you might have to show up at a certain more central location once a week or month to pick up your next batch of MREs.

The availability of these MREs might be triggered by a publicly observable event, like a statistical average of ordinary food prices over some area exceeding a threshold. Or, more flexibly, standard MRE insurance might always give one the right to buy, at a pre-declared high price and at standard places and times, a certain number of MREs per time period.  Those who fear not having enough cash to pay this pre-declared MRE price in a crisis might separately arrange for straight financial insurance, which pays cash tied either to a publicly triggered event, or to a market MRE price. Or the two approaches could be combined, so that MRE are available at a standard price during certain public events.
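To make these terms concrete, here is a minimal sketch of how one such insurance deal might be represented (field names and example values are hypothetical illustrations, not a proposed standard):

```python
from dataclasses import dataclass

@dataclass
class MREInsurance:
    """One household's claim on MREs, combining the two variants above:
    availability triggered by a public event, plus a standing right
    to buy at a pre-declared high price."""
    meals_per_week: int            # quantity per standard pickup period
    quality_level: int             # e.g., 1-3, graded by credentialed tasters
    pickup_location: str           # standard central location, not door delivery
    trigger_price_index: float     # public trigger: food price index threshold
    strike_price_per_meal: float   # pre-declared price for the right-to-buy variant

    def active(self, current_price_index: float) -> bool:
        """MREs become claimable once an observable statistic of
        ordinary food prices crosses the declared threshold."""
        return current_price_index >= self.trigger_price_index

# Hypothetical example: access triggers once staple food prices triple.
policy = MREInsurance(meals_per_week=21, quality_level=2,
                      pickup_location="county depot 12",
                      trigger_price_index=3.0, strike_price_per_meal=25.0)
print(policy.active(current_price_index=3.4))  # True: MREs now claimable
```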

The organizations that offer insurance need ways to assure customers that they can actually deliver on their promises to offer MREs at the stated times, places, and prices, given relevant public events. In addition, they want to minimize the prices they pay for these supplies of MREs, and encourage suppliers to search for low cost ways to make MREs.

This is where futures markets could help. In a futures market for wheat, people promise to deliver, or to take delivery, of certain quantities of certain types of wheat at particular standard times and places. Those who want to ensure a future supply of wheat against risks of changing prices can buy these futures, and those who grow wheat can ensure a future revenue for their wheat by selling futures. Most traders in futures markets are just speculating, and so arrange to leave the market before they’d have to make or take delivery. But the threat of making or taking delivery disciplines the prices that they pay. Those who fail to make or take delivery as promised face large financial and other penalties.

Analogously, those who offer MRE insurance could use MRE futures markets to ensure an MRE supply, and convince clients that they have ensured a supply. Yes, compared to the terms of the insurance offered by insurance organizations, the futures markets may offer fewer standard times, places, quality levels, and triggering public events. (Though the lab-tested but not field-tested tech of combinatorial markets makes feasible far more combinations.) Even so, customers might find it easy to believe that, if necessary, an organization that has bought futures for a few standard times and places could actually take delivery of these futures contracts, store the MREs for short periods, and deliver them to the more numerous times and places specified in their insurance deals.

MRE futures markets could also assure firms that explore innovative ways to make MREs of a demand for their product. By selling futures to deliver MREs at the standard times and places, they might fund their research, development, and production. When it came time to actually deliver MREs, they might make side deals with local insurance organizations to avoid any extra storage and transport costs of actually transferring MREs according to the futures contract details.

To encourage innovation, and to convince everyone that the system actually works, some patron, perhaps a foundation or government, could make a habit of periodically but randomly announcing large buy orders for MRE futures at certain times and places in the near future. They actually take delivery of the MREs, and then auction them off to whoever shows up there then to taste the MREs at a big social event. In this way ordinary people can sometimes hold and taste the MREs, and we can all see that there is a system capable of producing and delivering at least modest quantities on short notice. The firms who supply these MREs will of course have to set up real processes to actually deliver them, and be paid big premiums for their efforts.

These new MREs may not meet current regulatory requirements for food, and it may not be easy to adapt them to meet such requirements. Such requirements should be relaxed in a crisis, via a new crisis regulatory regime. It would be better to set that regime up ahead of time, instead of trying to negotiate it during a crisis. Such a new regulatory regime could be tested during these periodic random big MRE orders. Regulators could test the delivered MREs and only let people eat the ones that pass their tests. Firms that had passed tests at previous events might be pre-approved for delivering MREs to future events, at least if they didn’t change their product too much. And during a real crisis, such firms could be pre-approved to rapidly increase production and delivery of their product. This offers an added incentive for firms to participate in these tests.

MRE futures markets might also help the world to coordinate expectations about which kinds of food crises might appear when under what circumstances. Special conditional futures contracts could be created, where one only promises to deliver MREs given certain world events or policies. If the event doesn’t happen, you don’t have to deliver. The relative prices of future contracts for different events and policies would reveal speculator expectations about how the chance and severity of food crises depend on such events and policies.

And that’s my big idea. Yes it will cost real resources, and I of course hope we never have to use it in a real crisis. But it seems to me far preferable to most of us starving to death. Far preferable.


Both Plague & War Cut Capital Share?

I just finished reading Walter Scheidel’s The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century, and found myself agreeing with Scheidel against his critics. Scheidel is a historian who says that inequality has mainly risen in history when income increased, making more inequality physically possible, and when scale and complexity increased, creating more and bigger chokepoints (e.g., CEO, king) whose controllers can demand more rents.

Big falls in inequality have mainly come from big collapses, such as big wars, revolutions, plagues, and state collapses, which are usually associated with violence. This suggests that a big inequality fall is unlikely anytime soon, and we shouldn’t wish for it, as it would likely come from vast destruction and violence. All of which I find very plausible.

While big wars via mass mobilization usually didn’t change inequality much, in the mid 1900s such wars seem to have gone along with a big taste for redistribution and revolution. This happened to a lesser extent in Ancient Greece and Rome, and fits a story wherein more forager-like cultures care more about redistribution, especially when primed by visible mass sacrifice.

I noticed one puzzling pattern, however. Income in the world goes to owners of capital, to owners of labor, and to those who can take without contributing to production. As the rich usually get more of their income from capital, compared to labor, one thing that can cause less inequality is a change that makes capital earn a smaller share of total income. The puzzling pattern I noticed is that even though big plagues and big wars should have opposite effects on the capital share, both of them seem to have cut inequality, and both apparently in part via cutting the capital share of income! Let me explain.

Big plagues cut the number of workers without doing much to capital, while big wars like WWI & WWII destroy a much larger fraction of capital than they do of labor. Which event, big plague or big war, reduces the share that capital earns? The answer depends on whether capital and labor are complements or substitutes. If they are substitutes, then destroying capital should cut the capital share of income. But when they are complements, it is destroying labor that should cut the capital share.

The simple middle position between complements and substitutes is the power law (a.k.a. “Cobb-Douglas”) production function, where output Y = L^a * K^(1-a), for labor L, capital K, and constant a in (0,1). (Partial derivatives set wages w = dY/dL and capital rent r = dY/dK.) In this situation, the capital share of income r*K/(r*K + w*L) = 1-a, and so never changes.

If, for example, labor L falls by a factor of 2, while capital K stays the same, then wages rise by the factor 2^(1-a) while rents fall by the factor 2^a, with the product of these factors being 2. Compared to this simple middle position, if labor and capital are instead complements, then in this example wages would rise and rents would fall by larger factors. If labor and capital are instead substitutes, the factors would be smaller.
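As a numerical check on this logic, here is a sketch using a CES production function, the standard way to span the range from complements to substitutes (the parameter values here are illustrative only):

```python
# CES production: Y = (a * L^rho + (1-a) * K^rho)^(1/rho).
# rho -> 0 recovers Cobb-Douglas; rho < 0 means complements;
# 0 < rho <= 1 means substitutes.

def capital_share(L, K, a=0.7, rho=-0.5):
    """Capital share r*K / (r*K + w*L), with w = dY/dL, r = dY/dK."""
    Y = (a * L**rho + (1 - a) * K**rho) ** (1 / rho)
    w = a * L**(rho - 1) * Y**(1 - rho)        # marginal product of labor
    r = (1 - a) * K**(rho - 1) * Y**(1 - rho)  # marginal product of capital
    return r * K / (r * K + w * L)

print(f"baseline:              {capital_share(1.0, 1.0):.3f}")
print(f"plague (labor halved): {capital_share(0.5, 1.0):.3f}")  # share falls
print(f"war (capital halved):  {capital_share(1.0, 0.5):.3f}")  # share rises
```

With rho < 0 (complements), destroying labor cuts the capital share while destroying capital raises it, matching the claim above; the wartime data below is what makes this puzzling.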

Economic papers based on data over the last century usually find labor and capital to be complements, though there are notable exceptions such as Thomas Piketty’s blockbuster book. That fits with data on the Black Death. In the century from 1330 to 1430, Europe’s population fell roughly in half, wages doubled, and rents fell a lot. In England, wages tripled. Similar behavior is seen in other large ancient plagues – wages rose by a factor of four in Mexico! This looks more like what you’d see with complementarity than with a simple power law.

World War I (WWI) killed about 1% of the world population, while the concurrent 1918 flu killed about 4%. World War II (WWII) killed about 3%. But capital was cut much more. The ratio of private wealth to national income fell by a factor of two worldwide, and by even larger factors in the main warring nations (source):
[Figure: ratio of private wealth to national income over time, by nation]

Now for the puzzle. If capital and labor were still complements during WWI & WWII, then destroying a lot more capital than labor should have resulted in rents on capital rising by a factor so big that the product of the two factors increases the capital share of income. Is that what happened? Consider Japan, where 5% of the population died:

Real [Japanese] farm rents fell by four-fifths between 1941 and 1945, and from 4.4% of national income in the mid 1930s to 0.3% in 1946. .. By September 1945, a quarter of the country’s physical capital stock had been wiped out. Japan lost 80% of its merchant ships, 25% of all buildings, 21% of household furnishings and personal effects, 34% of factory equipment, and 24% of finished products. The number of factories in operations and the size of the workforce they employed nearly halved during the final year of the war. p.121

Gains from capital almost disappeared during the war years: the share of rent and interest income in total national income fell from a sixth in the mid-1930s to only 3% in 1946. In 1938, dividends, interest, and rental income together had accounted for about a third of the income of the top 1%, with the remainder divided between business and employment income. By 1945, the share of capital income had dropped to less than an eighth and that of wages to a tenth; business income was the only significant revenue source left to the (formerly) wealthy. p.122

In 1946, real GNP was 45% lower than it had been in 1937. p.124

The sharp drop in top income shares .. were caused above all by a decline in the return on capital. .. Most of these changes occurred during the war itself. p.128

Consider also France and Germany (which lost 2% & 11% of people in WWII, respectively):

During WWI, .. a third of the French capital stock was destroyed, the share of capital income in national household income fell by a third, and GDP contracted by the same proportion. ..In WWII, .. two-thirds of the capital stock was wiped out. .. real rents fell by 90% between 1913 and 1950. p.147

[German] rentiers lost the most: their share of national income plummeted from 15% to 3% even as entrepreneurs were able to maintain their share .. real national income was a quarter to a third lower in 1923 than it had been in 1913. p.152

Maybe I’m missing something, but I don’t see how this is remotely consistent with labor and capital being complements. Yet complementarity seems a good fit to big ancient plagues and more recent empirical studies. What gives?
