Tag Archives: Politics

Open Policy Evaluation

Hypocrisy is a tribute vice pays to virtue. La Rochefoucauld, Maximes

In some areas of life, you need connections to do anything. Invitations to parties, jobs, housing, purchases, business deals, etc. are all gained via private personal connections. In other areas of life, in contrast, invitations are made open to everyone. Posted for all to see are openings for jobs, housing, products to buy, business investment, calls for proposals for contracts and grants, etc. The connection-only world is often suspected of nepotism and corruption, and “reforms” often take the form of requiring openings to be posted so that anyone can apply.

In academia, we post openings for jobs, school attendance, conference attendance, journal publications, and grant applications for all to see, even though most people know that you’ll actually need personal connections to have much of a chance at many of these things. People seem to want to appear willing to consider an application from anyone. They allow some invitation-only conferences, talk series, etc., but usually insist that such things are incidental, not central to their profession.

This preference for at least an appearance of openness suggests a general strategy of reform: find things that are now only gained via personal connections, and create an alternate open process whereby anyone can officially apply. In this post, I apply this idea to: policy proposals.

Imagine that you have a proposal for a better policy, to be used by governments, businesses, or other organizations. How can you get people to listen to your proposal, and perhaps endorse it or apply it? You might try to use personal connections to get an audience with someone at a government agency, political interest group, think tank, foundation, or business. But that’s stuck in the private connection world. You might wait for an agency or foundation to put out an open call for proposals, seeking a solution to exactly the problem your proposal solves. But for any one proposal idea, you might wait a very long time.

You might submit an article to an open conference or journal, or submit a book to a publisher. But if they accept your submission, that mostly won’t be an endorsement of whether your proposal is good policy by some metric. Publishers are mostly looking at other criteria, such as whether you have an impressive study using difficult methods, or whether you have a book thesis and writing style that will attract many readers.

So I propose that we consider creating an open process for submitting policy proposals to be evaluated, in the hope of gaining some level of endorsement and perhaps further action. This process won’t judge your submission on wit, popularity, impressiveness, or analytical rigor. Its key question is: is this promising as a policy proposal to actually adopt, for the purpose of making a better world? If it endorses your proposal, then other actors can use that as a quality signal regarding what policy proposals to consider.

Of course how you judge a policy proposal depends on your values. So there might be different open policy evaluators (OPE) based on different sets of values. Each OPE needs to have some consistent standards by which they evaluate proposals. For example, economists might ask whether a proposal improves economic efficiency, libertarians might ask if it increases liberty, and progressives might ask whether it reduces inequality.

Should the evaluation of a proposal consider whether there’s a snowball’s chance in hell of that proposal being actually adopted, or even officially considered? That is, whether it is in the “Overton window”? Should it consider whether you have so far gained sufficient celebrity endorsements to make people pay attention to your proposal? Well, those are choices of evaluation criteria. I’m personally more interested in evaluating proposals regardless of who has supported them, and regardless of their near-term political feasibility. Much like academics say we do today with journal article submissions. But that’s just me.

An OPE seems valid and useful as long as its actual choices of which policies it endorses match its declared evaluation criteria. Then it can serve as a useful filter, between people with innovative policy ideas and policy customers seeking useful ideas to consider and perhaps implement. If you can find OPEs who share your evaluation criteria, you can consider the policies they endorse. And of course if we ever end up having many of them, you could focus first on the most prestigious ones.

Ideally an OPE would have funding from some source to pay for its evaluations. But I could also imagine applicants having to pay a fee to have their proposals considered.

Moral Choices Reveal Preferences

Tyler Cowen has a new book, Stubborn Attachments. In my next post I’ll engage his book’s main claim. But in this post I’ll take issue with one point that is to him relatively minor, but is to me important: the wisdom of the usual economics focus on preferences:

Sometimes my fellow economists argue that “satisfying people’s preferences” is the only value that matters, because in their view it encapsulates all other relevant values. But that approach doesn’t work. It is not sufficiently pluralistic, as it also matters whether our overall society encompasses standards of justice, beauty, and other values from the plural canon. “What we want” does not suffice to define the good. Furthermore, we must often judge people’s preferences by invoking other values external to those preferences. …

Furthermore, if individuals are poorly informed, confused, or downright inconsistent— as nearly all of us are, at times— the notion of “what we want” isn’t always so clear. So while I am an economist, and I will use a lot of economic arguments, I won’t always side with the normative approach of my discipline, which puts too much emphasis on satisfying preferences at the expense of other ethical values. … We should not end civilization to do what is just, but justice does sometimes trump utility. And justice cannot be reduced to what makes us happy or to what satisfies our preferences. …

In traditional economics— at least prior to the behavioral revolution and the integration with psychology— it was commonly assumed that what an individual chooses, or would choose, is a good indicator of his or her welfare. But individual preferences do not always reflect individual interests very well. Preferences as expressed in the marketplace often appear irrational, intransitive, spiteful, or otherwise morally dubious, as evidenced by a wide range of vices, from cravings for refined sugar to pornography to grossly actuarially unfair lottery tickets. Given these human imperfections, why should the concept of satisfying preferences be so important? Even if you are willing to rationalize or otherwise defend some of these choices, in many cases it seems obvious that satisfying preferences does not make people happier and does not make the world a better place.

Tyler seems to use a standard moral framework here, one wherein we are looking at others and trying to agree among ourselves about what moral choices to make on their behalf. (Those others are not included in our conversation.) When we look at those other people, we can use the choices that they make to infer their wants (called “revealed preferences”), and then make our moral choices in part to help them get what they want.

In this context, Tyler accurately describes common morality, in the sense that the moral choices of most people do not depend only on what those other object people want. Common moral choices are instead often “paternalistic”, giving people less of what they want in order to achieve other ends and to satisfy other principles. We can argue about how moral such choices actually are, but they clearly embody a common attitude to morality.

However, if these moral choices that we are to agree on satisfy some simple consistency conditions, then formally they imply a set of “revealed preferences”.  (And if they do not actually satisfy these conditions, we can see them as resulting from consistent preferences plus avoidable error.) They are “our” preferences in this moral choice situation. Looked at this way, it is just not remotely true that “ ‘What we want’ does not suffice to define the good” or that “Justice cannot be reduced to … what satisfies our preferences.” Our concepts of the good and justice are in fact exactly described by our moral preferences, the preferences that are revealed by our various consistent moral choices. It is then quite accurate to say that our moral preferences encapsulate all our relevant moral values.

Furthermore, the usual economics framework is wise and insightful because we in fact quite often disagree about moral choices when we take moral action. This framework that Tyler seems to use above, wherein we first agree on which acts are moral and then we act, is based on an often quite unrealistic fiction. We instead commonly each take moral actions in the absence of agreement. In such cases we each have a different set of moral preferences, and must consider how to take moral action in the context of our differing preferences.

At this point the usual economists’ framework, wherein different agents have different preferences, becomes quite directly relevant. It is then useful to think about moral Pareto improvements, wherein we each get more of what we want morally, and moral deals, where we make verifiable agreements to achieve moral “gains from trade”. The usual economist tools for estimating and calculating our wants and the location of win-win improvements then seem quite useful and important.

In this situation, we each seek to influence the resulting set of actual moral choices in order to achieve our differing moral preferences. We might try to achieve this influence via preaching, threats, alliances, wars, or deals; there are many possibilities. But whatever we do, we each want any analytical framework that we use to help us in this process to reflect our actual differing moral preferences. Yes, preferences can be complex, must be inferred from limited data on our choices, and yes we are often “poorly informed, confused, or downright inconsistent.” But we rarely say “why should the concept of satisfying [my moral] preferences be so important?”, and we are not at all indifferent to instead substituting the preferences of some other party, or the choice priorities of some deal analyst or assistant like Tyler. As much as possible, we seek to have the actual moral choices that result reflect our moral preferences, which we see as a very real and relevant thing, encapsulating all our relevant moral values.

And of course we should expect this sort of thing to happen all the more in a more inclusive conversation, one where the people about whom we are making moral choices become part of the moral “dealmaking” process. That is, when it is not us trying to agree among ourselves about what we should do for them, but instead all of us talking together about what to do for us all. In this more political case, we don’t at all say “my preferences are poorly informed, confused, and inconsistent and hardly matter so they don’t deserve much consideration.” Instead we each focus on causing choices that better satisfy our moral preferences, as we understand them. In this case, the usual economist tools and analytical frameworks based on achieving preferences seem quite appropriate. They deserve to sit center stage in our analysis.

Vulnerable World Hypothesis

I’m a big fan of Nick Bostrom; he is way better than almost all other future analysts I’ve seen. He thinks carefully and writes well. A consistent theme of Bostrom’s over the years has been to point out future problems where more governance could help. His latest paper, The Vulnerable World Hypothesis, fits in this theme:

Consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? … Maybe … ban all research in nuclear physics … [Or] eliminate all glass, metal, or sources of electrical current. … Societies might split into factions waging civil wars with nuclear weapons, … end only when … nobody is able any longer to put together a bomb … from stored materials or the scrap of city ruins. …

The ​vulnerable world hypothesis​ [VWH] … is that there is some level of technology at which civilization almost certainly gets destroyed unless … civilization sufficiently exits the … world order characterized by … limited capacity for preventive policing​, … limited capacity for global governance.​ … [and] diverse motivations​. … It is ​not​ a primary purpose of this paper to argue VWH is true. …

Four types of civilizational vulnerability. … in the “easy nukes” scenario, it becomes too easy for individuals or small groups to cause mass destruction. … a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction. … counterfactual in which a preemptive counterforce [nuclear] strike is more feasible. … the problem of global warming [could] be far more dire … if the atmosphere had been susceptible to ignition by a nuclear detonation, and if this fact had been relatively easy to overlook …

two possible ways of achieving stabilization: Create the capacity for extremely effective preventive policing.​ … and create the capacity for strong global governance. … While some possible vulnerabilities can be stabilized with preventive policing alone, and some other vulnerabilities can be stabilized with global governance alone, there are some that would require both. …

It goes without saying there are great difficulties, and also very serious potential downsides, in seeking progress towards (a) and (b). In this paper, we will say little about the difficulties and almost nothing about the potential downsides—in part because these are already rather well known and widely appreciated.

I take issue a bit with this last statement. The vast literature on governance shows both many potential advantages of and problems with having more relative to less governance. It is good to try to extend this literature into futuristic considerations, by taking a wider longer term view. But that should include looking for both novel upsides and downsides. It is fine for Bostrom to seek not-yet-appreciated upsides, but we should also seek not-yet-appreciated downsides, such as those I’ve mentioned in two recent posts.

While Bostrom doesn’t in his paper claim that our world is in fact vulnerable, he released his paper at a time when many folks in the tech world have been claiming that changing tech is causing our world to in fact become more vulnerable over time to analogies of his “easy nukes” scenario. Such people warn that it is becoming easier for smaller groups and individuals to do more damage to the world via guns, bombs, poison, germs, planes, computer hacking, and financial crashes. And Bostrom’s book Superintelligence can be seen as such a warning. But I’m skeptical, and have yet to see anyone show a data series displaying such a trend for any of these harms.

More generally, I worry that “bad cases make bad law”. Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy. It may be very hard to weigh extreme but unlikely scenarios suggesting more governance against extreme but unlikely scenarios suggesting less governance. Perhaps the best lesson is that we should make it a priority to improve governance capacities, so we can better gain upsides without paying downsides. I’ve been working on this for decades.

I also worry that existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.

Added 8am:

Kevin Kelly in 2012:

The power of an individual to kill others has not increased over time. To restate that: An individual — a person working alone today — can’t kill more people than say someone living 200 or 2,000 years ago.

Anders Sandberg in 2018:

Added 19Nov: Vox quotes from this article.

World Government Risks Collective Suicide

If your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn’t turn to suicide. Your ability to do this is aided by the fact that your mind is internally divided; while in many months part of you wants to commit suicide, it is quite rare for a majority coalition of your mind to support such an action.
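The arithmetic here is easy to check with a quick sketch (the 50% overall-survival target below is an illustrative assumption of mine, not a figure from the post):

```python
# Months in an 83-year life: the number of consecutive
# non-suicidal moods needed if moods change monthly.
months = 83 * 12  # 996, i.e. roughly one thousand

# If each month independently brings a suicidal mood with probability p,
# overall survival odds are (1 - p) ** months. To keep those odds above
# an assumed 50%, the per-month probability must stay tiny.
p_max = 1 - 0.5 ** (1 / months)
print(months)           # 996
print(round(p_max, 6))  # ~0.000696
```

So surviving a thousand monthly coin flips requires the per-month risk to be well under a tenth of a percent, which is the point of the internally-divided-mind observation.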

In the movie Lord of the Rings, Denethor, Steward of Gondor, is in a suicidal mood when enemies attack the city. If not for the heroics of Gandalf, that mood might have ended his city. In the movie Dr. Strangelove, the crazed General Ripper “believes the Soviets have been using fluoridation of the American water supplies to pollute the `precious bodily fluids’ of Americans” and orders planes to start a nuclear attack, which ends badly. In many mass suicides through history, powerful leaders have been able to make whole communities commit suicide.

In a nuclear MAD situation, a nation can last unbombed only as long as no one who can “push the button” falls into a suicidal mood. Or into one of a thousand other moods that in effect lead to misjudgments and refusals to listen to reason, eventually leading to suicide. This is a serious problem for any nuclear nation that wants to live long relative to the number of people who can push the button, times the timescale on which moods change. When there are powers large enough that their suicide could take down civilization, then the risk of power suicide becomes a risk of civilization suicide. Even if the risk is low in any one year, over the long run this becomes a serious risk.

This is a big problem for world or universal government. We today coordinate on the scale of firms, cities, nations, and international organizations. However, the fact that we also fail to coordinate to deal with many large problems on these scales shows that we face severe limits in our coordination abilities. We also face many problems that could be aided by coordination via world government, and future civilizations will be similarly tempted by the coordination powers of central governments.

But, alas, central power risks central suicide, either done directly on purpose or as an indirect consequence of other broken thinking. In contrast, in a sufficiently decentralized world when one power commits suicide, its place and resources tend to be taken by other powers who have not committed suicide. Competition and selection is a robust long-term solution to suicide, in a way that centralized governance is not.

This is my tentative best guess for the largest future filter that we face, and that other alien civilizations have faced. The temptation to form central governments and other governance mechanisms is strong: to solve immediate coordination problems, to help powerful interests gain advantages via the capture of such central powers, and to slake the ambition thirst of those who would lead such powers. Over long periods this will seem to have been a wise choice, until suicide ends it all and no one is left to say “I told you so.”

Divide the trillions of future years over which we want to last by the increasingly short periods over which moods and sanity change, and you see a serious problem, made worse by the lack of a sufficiently long view to make us care enough to solve it. For example, if the suicide mood of a universal government changed once a second, then it needs about 10^20 non-suicide moods in a row to last a trillion years.
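The order of magnitude is straightforward to verify (a sketch; the once-per-second mood change is the post’s illustrative rate):

```python
# Seconds in a trillion years: if moods change once per second,
# this is how many consecutive non-suicidal moods are needed.
seconds_per_year = 365.25 * 24 * 3600     # ~3.16e7
moods_needed = 1e12 * seconds_per_year    # ~3.16e19, i.e. about 10**20
print(f"{moods_needed:.2e}")
```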

My Poll, Explained

So many have continued to ask me the same questions about my recent twitter poll, that I thought I’d try to put all my answers in one place. This topic isn’t that fundamentally interesting, so most of you may want to skip this post.

Recently, Christine Blasey Ford publicly accused US Supreme Court nominee Brett Kavanaugh of a sexual assault. This accusation will have important political consequences, however it is resolved. Congress and the US public are now put in the position of having to evaluate the believability of this accusation, and thus must consider which clues might indicate if the accusation is correct or incorrect.

Immediately after the accusation, many said that the timing of the accusation seemed to them suspicious, occurring exactly when it would most benefit Democrats seeking to derail any nomination until after the election, when they may control the Senate. And it occurred to me that a Bayesian analysis might illuminate this issue. If T = the actual timing, A = accurate accusation, W = wrong accusation, then how much this timing consideration pushes us toward final beliefs is given by the likelihood ratio p(T|W)/p(T|A). A ratio above one pushes against believing the accusation, while a ratio below one pushes for it.
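The odds-form update described here can be written as a minimal sketch (the probabilities below are made-up placeholders, not estimates from the poll):

```python
# Bayes rule in odds form: posterior odds of a wrong (W) vs. an
# accurate (A) accusation, after observing the timing T.
def update_odds(prior_odds_w, p_t_given_w, p_t_given_a):
    """Multiply prior odds P(W)/P(A) by the likelihood ratio P(T|W)/P(T|A)."""
    return prior_odds_w * (p_t_given_w / p_t_given_a)

# Hypothetical numbers: even prior odds, and the observed timing judged
# twice as likely under a wrong accusation as under an accurate one.
posterior = update_odds(prior_odds_w=1.0, p_t_given_w=0.10, p_t_given_a=0.05)
print(posterior)  # 2.0 -> a ratio above one pushes against believing it
```

A ratio below one would instead shift the posterior odds toward believing the accusation, exactly as the paragraph above describes.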

The term P(T|A) seemed to me the most interesting term, and it occurred to me to ask what people thought about it via a Twitter poll. (If there was continued interest, I could ask another question about the other term.) Twitter polls are much cheaper and easier for me to do than other polls. I’ve done dozens of them so far, and rarely has anyone objected. Such polls only allow four options, and you don’t have many characters to explain your question. So I used those characters mainly to make clear a few key aspects of the accusation’s timing:

Many claimed that my wording was misleading because it didn’t include other relevant info that might support the accusation. Like who else the accuser is said to have told, and when, and what pressures she is said to have faced to go public. They didn’t complain about my not including info that might lean the other way, such as low detail on the claimed event and a lack of supporting witnesses. But a short tweet just can’t include much relevant info; I barely had enough characters to explain key accusation timing facts.

It is certainly possible that my respondents suffered from cognitive biases, such as assuming too direct a path between accuser feelings and a final accusation. To answer my poll question well, they should have considered many possible complex paths by which an accuser says something to others, who then tell other people, some of whom then choose when to bring pressure back on that accuser to make a public accusation. But that’s just the nature of any poll; respondents may well not think carefully enough before answering.

For the purposes of a Twitter poll, I needed to divide the range from 0% to 100% into four bins. I had high uncertainty about where poll answers would lie, and for the purpose of Bayes rule it is factors that matter most. So I chose three ranges of roughly a factor of 4 to 5, and a leftover bin encompassing an infinite factor. If anything, my choice was biased against answers in the infinite factor bin.
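One way to picture such a bin design (the edges at 1%, 5%, and 20% are assumed here for concreteness, since the poll text isn’t reproduced above):

```python
# Four probability bins: three spanning a factor of roughly 4-5 each,
# plus a bottom bin (0-1%) covering an unbounded factor.
edges = [0.01, 0.05, 0.20, 1.00]
bins = list(zip([0.0] + edges[:-1], edges))
factors = [round(hi / lo, 2) for lo, hi in bins[1:]]  # skip the 0-1% bin
print(bins)     # [(0.0, 0.01), (0.01, 0.05), (0.05, 0.2), (0.2, 1.0)]
print(factors)  # [5.0, 4.0, 5.0]
```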

I really didn’t know which way poll answers would go. If most answers were high fractions, that would tend to support the accusation, while if most answers were low fractions, that would tend to question the accusation. Many accused me of posting the poll in order to deny the accusation, but for that to work I would have needed a good guess on the poll answers. Which I didn’t have.

My personal estimate would be somewhere in the top two ranges, and that plausibly biased me to pick bins toward such estimates.  As two-thirds of my poll answers were in the lowest bin I offered, that suggests that I should have offered an even wider range of factors. Some claimed that I biased the results by not putting more bins above 20%. But that fraction is still below the usual four-bin target fraction of 25% per bin.

It is certainly plausible that my pool of poll respondents is not representative of the larger US or world population. And many called it irresponsible and unscientific to run an unrepresentative poll, especially if one doesn’t carefully show which wordings matter how via A/B testing. But few complain about the thousands of other Twitter polls run every day, or of my dozens of others. And the obvious easy way to show that my pool or wordings matter is to show different answers with another poll where those vary. Yet almost no one even tried that.

Also, people don’t complain about others asking questions in simple public conversations, even though those can be seen as N=1 examples of unrepresentative polls without A/B testing on wordings. It is hard to see how asking thousands of people the same question via a Twitter poll is less informative than just asking one person that same question.

Many people said it is just rude to ask a poll question that insinuates that rape accusations might be wrong, especially when we’ve just seen someone going through all the pain of making one. They say that doing so is pro-rape and discourages the reporting of real rapes, and that this must have been my goal in making this poll. But consider an analogy with discussing gun control just after a shooting. Some say it is rude then to discuss anything but sympathy for victims, but others say this is exactly a good time to discuss gun control. I say that when we must evaluate a specific rape accusation is exactly a good time to think about what clues might indicate whether this is an accurate or a wrong accusation.

Others say that it is reasonable to conclude that I’m against their side if I didn’t explicitly signal within my poll text  that I’m on their side. That’s just the sort of signaling game equilibrium we are in. And so they are justified in denouncing me for being on the wrong side. But it seems a quite burdensome standard to hold on polls, which already have too few characters to allow an adequate explanation of a question, and it seems obvious that the vast majority of Twitter polls today are not in fact being held to this standard.

Added 24Sep: I thought the poll interesting enough to ask, relative to its costs to me, but I didn’t intend to give it much weight. It was all the negative comments that made it a bigger deal.

Note that, at least in my Twitter world, we see a big difference in attitudes between vocal folks who tweet and those who merely answer polls. That latter “silent majority” is more skeptical of the accusation.

Allow Covert Eye-Rolls

Authorities, such as parents, teachers, bosses, and police, tend to have both dominance and prestige. Their dominance is usually clear: they can hit you, fire you, or send you to your room. Their prestige tends to be less clear, as that is an informal social consensus on their relevant ability and legitimacy. They have to earn prestige in the eyes of subordinates, and subordinates talk with each other to form a consensus on that. I’ve suggested that we often choose bosses primarily for their prestige indicators, as that allows subordinates to more easily submit to dominance without shame.

There’s a classic scene in fiction where an authority goes too far to squash defiance. (E.g., see video above.) Yes, authorities must respond to overt defiance that interferes with key functions, like a child refusing to come home or a student refusing to stop disrupting class. But usually authorities prefer to suggest actions, rather than to give direct orders. And often subordinates try to use covert signals to tell each other they are less than fully impressed by authority. They might roll their eyes, smirk, slouch, let their attention wander, etc. And sometimes authorities take visible offense at such signs, punishing offenders severely. In extreme cases they may not only demand that everyone seem enthusiastically positive in public, but also plant spies and monitor private talk to punish anyone who says anything remotely negative in private.

This is the scenario of extreme totalitarian dominance, a picture that groups often try to paint about opponents. It is the rationale in the ancient world for why we have good kings but they have evil tyrants, and why we’d be doing them a favor to replace their leaders with ours. More recently, it is the story that the west told on Nazism and Communism. It is even the typical depiction today of historical slavery; it isn’t enough to describe slaves as poor, over-worked, and with few freedoms; they are also shown as having mean tyrannical owners.

The key problem for authorities is that repressing dissent has the direct effect of discouraging rebellion, but the indirect effect of looking bad. It looks weak to try to stop subordinates from talking frankly about the prestige they think you deserve. Doing this suggests that you don’t think they will estimate your prestige highly. Much better to present the image that most everyone accepts your authority due to your high prestige, and it is only a few malcontent troublemakers who defy you. So most authorities allow subordinate eye-rolls, smirks, negative gossip, etc. as long as they are not too overtly a direct commonly-visible challenge to their authority. They visibly repress overt defiance by one low prestige person or small group, but are wary of simply crushing large respected groups, or hindering their covert gossip. Trying that makes you seem insecure and weak.

In the world of cultural elites today, like arts, journalism, civil service, law, and academia, there’s a dominant culture, and it punishes deviations from its core tenets. But its supporters should be worried about going too far toward totalitarian dominance. They should want to project the image that they don’t need to repress dissent much, as their culture is so obviously prestigious. If the good people are pretty unified in their respect for it, it should be sufficient to punish those who most openly and directly defy it. They shouldn’t seem to feel much threatened by others rolling their eyes.

It is in this context that I think we should worry about the recent obsession with gaslighting and dog-whistles. I’ve posted some controversial tweets recently, and in response others have then publicly attributed to me extreme and culturally-defiant views. (Such as that I’m sexist, pro-rape, anti-reporting-of-rape, and seem likely to rape.) When I’ve pointed out that I’ve said no such things and often said the opposite, they often respond with dog-whistle concerns.

That is, they say that there are all these people out there who pretend to submit to culturally dominant views, but who actually harbor sympathy with opposing views. They hide in the shadows communicating with each other covertly, using anonymous internet accounts and secret hand signals. It is so important to crush these rebels that we can’t afford to give anyone the benefit of the doubt to only criticize them for the views they actually say. We must aggressively punish people for even seeming to some people like they might be the sort to secretly harbor rebel sympathies. And once everyone knows that we are in a strong repression regime, there’s no excuse for not lying low in abject submission, avoiding any possible hint of forbidden views. If you even touch such topics, you only have yourself to blame for what happens to you.

I hope you can see the problem. Worlds of strong repression are not secure stable worlds. Since everyone knows that authorities are making it hard for others to share opinions on authority prestige, they presume low levels of prestige. So if there’s ever an opening for a rebellion, they expect to see that rebellion. If the boot ever lets up just a bit in stomping the face, it may never get a second chance.

Let us instead revert to the traditional intellectual standard: respond most to what people say, and don’t stretch too hard to infer what you think they mean from scattered hints in what they’ve said and done. Let them roll their eyes and feel each other out for how much they respect the dominant authorities, be they people or cultures. As they say:

If you love something set it free. If it comes back it’s yours. If not, it was never meant to be.

Spaceship Earth Explores Culture Space

Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. (more)

Many love science fiction stories of brave crews risking their lives to explore strange new spaces, stories much like the older adventure stories about European explorers risking their lives centuries ago to explore new places on Earth. (Yes, often to conquer and enslave the locals.) Many lament that we don’t have as many real explorer stories today, and say that we should support more human space exploration now in order to create such real heroic exploration stories, even though human space exploration is crazy expensive now and offers few scientific, economic, or humanity-survival gains anytime soon. They say the good stories will be worth all that cost.

Since Henry George first invoked it in 1879, many have used the metaphor of Spaceship Earth to call attention to our common vulnerability and limited resources:

Spaceship Earth … is a world view encouraging everyone on Earth to act as a harmonious crew working toward the greater good. … “we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions” … “We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil.” (more)

In this post, I want to suggest that Spaceship Earth is in fact a story of a brave crew risking much to explore a strange new territory. But the space we explore is more cultural than physical.

During the industrial era, the world economy has doubled roughly every fifteen years. Each such doubling of output has moved us into new uncharted cultural territory. This growth has put new pressures on our environment, and has resulted in large and rapid changes to our culture and social organization.
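As a rough check on that figure (my arithmetic, not the post’s), a fifteen-year doubling time implies annual growth near 4.7%, which compounds enormously; the 270-year span below is just an illustrative industrial-era length:

```python
doubling_time = 15                     # years per doubling (figure from the text)
annual_growth = 2 ** (1 / doubling_time) - 1
print(round(annual_growth * 100, 1))   # -> 4.7 (percent per year)

years = 270                            # illustrative industrial-era span (assumption)
doublings = years / doubling_time      # 18 doublings
multiple = 2 ** doublings
print(f"{multiple:.0f}")               # -> 262144-fold growth in output
```

Even small per-year rates, sustained over centuries, move the economy into territory no prior culture has experienced.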

This growth results mostly from innovation, and most innovations are small and well tested against local conditions, giving us little reason to doubt their local value. But all these small changes add up to big overall moves that are often entangled with externalities, coordination failures, and other reasons to doubt their net value.

So humanity continues to venture out into new untried and risky cultural spaces, via changes to cultural conditions with which we don’t have much experience, and which thus risk disaster and destruction. The good crew of Spaceship Earth should carefully weigh these risks when considering where and how fast to venture.

Consider seven examples:

  1. While humans seem to be adapting reasonably well to global warming, we risk big lumpy disruptive changes to Atlantic currents and Antarctic ice. Ecosystems also seem to be adapting okay, but we are risking big collapses to them as well.
  2. While ancient societies gave plenty of status and rewards to fertility, today high fertility behaviors are mostly seen as low status. This change is entwined with complex changes in gender norms and roles, but one result is that human fertility is falling toward below replacement in much of the world, and may fall much further. Over centuries this might produce a drastic decrease in world population, and productivity-threatening decreases in the scale of world production.
  3. While the world has become much more peaceful over the last century, this has been accompanied by big declines in cultural support for military action and tolerance for military losses. Is the world now more vulnerable to conquest by a new military power with more local cultural support and tolerance for losses?
  4. Farmer-era self-control and self-discipline have weakened over time, in part via weaker religion. This has weakened cultural support for work and cultural suspicion of self-indulgence in sex, drugs, and media. So we now see less work and more drug addiction. How far will we slide?
  5. Via new media, we are exploring brave new worlds of how to make friends, form identities, achieve status, and learn about the world. As many have noted, these new ways risk many harms to happiness and social capital.
  6. Innovation was once greatly aided by tinkering, i.e., the ability to take apart and change familiar devices. Such tinkering is much less feasible in modern devices. Increasing regulation and risk aversion is also interfering with innovation. Are we as a result risking cultural support for innovation?
  7. Competition between firms has powered rapid growth, but winning bets on intangible capital is allowing leading firms to increasingly dominate industries. Does this undermine the competition that we’ve relied on so far to power growth?

The most common framing today for such issues is one of culture war. You ask yourself which side feels right to you, commiserate with your moral allies, puff yourself up with righteous indignation against those who see things differently, and go to war with them. But we might do better to frame these as reasonable debates over how much to risk as we explore culture space.

In a common scene from exploration stories, a crew must decide whether to take a big risk, or choose among several risks. Some in the crew see a risk as worth the potential reward, while others want to search longer for better options, or retreat to try again another day. They may disagree on the tradeoff, but they all agree that both the risks and the rewards are real. It is just a matter of tradeoff details.

We might similarly frame key “value” debates as reasonable differing judgements on what chances to take as spaceship Earth explores culture space. Those who love new changes could admit that we are taking some chances in adopting them so quickly, with so little data to go on, while those who are suspicious of recent changes could admit that many seem to like their early effects. Rather than focus on directly evaluating changes, we might focus more on setting up tracking systems to watch for potential problems, and arranging for repositories of old culture practices that might help us to reverse changes if things go badly. And we might all see ourselves as part of a grand heroic adventure story, wherein a mostly harmonious crew explores a great strange cosmos of possible cultures.

Sloppy Interior Vs. Careful Border Travel

Imagine that you are floating weightless in space, and holding on to one corner of a large cube-shaped structure. This cube has only corners and struts between adjacent corners; the interior and faces are empty. Now imagine that you want to travel to the opposite corner of this cube. The safe thing to do would be to pull yourself along a strut to an adjacent corner, always keeping at least one hand on a strut, and then repeat that process two more times. If you are in a hurry you might be tempted to just launch yourself through the middle of the cube. But if you don’t get the direction right, you risk sailing past the opposite corner on into open space.

Now let’s make the problem harder. You are still weightless, holding on to a cube of struts, but now you live in 1000-dimensional space, in a fog, and subject to random winds. Each corner connects to 1000 struts. Now it would take 1000 single-strut moves to reach the opposite corner, while the direct distance across is only about 32 times the length of one strut. You have only a limited ability to tell if you are near a corner or a strut, and now there are over 10^300 corners, which look a lot alike. In this case you should be a lot more reluctant to leave sight of your nearest strut, or to risk forgetting your current orientation. Slow and steady wins this race.
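These numbers are easy to verify; here is a minimal sketch (my code, just checking the arithmetic of an n-dimensional unit hypercube):

```python
import math

def hypercube_stats(n):
    """Opposite-corner travel stats for an n-dimensional unit hypercube."""
    strut_moves = n           # one strut move per coordinate to flip
    diagonal = math.sqrt(n)   # straight-line distance between opposite corners
    corners = 2 ** n          # each of n coordinates is 0 or 1
    return strut_moves, diagonal, corners

moves, diag, corners = hypercube_stats(1000)
print(moves)                  # -> 1000 single-strut moves
print(round(diag, 1))         # -> 31.6 strut lengths straight across
print(len(str(corners)))      # -> 302 decimal digits, i.e. 2**1000 > 10**300
```

So the diagonal shortcut is about thirty times shorter than the strut-by-strut path, which is exactly what makes jumping across the middle so tempting, and so easy to get wrong.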

If you were part of a group of dozens of people tethered together, it might make more sense to jump across the middle, at least in the case of the ordinary three-dimensional cube. If any one of you grabs a corner or strut, they can pull in the rest of you. However, this strategy looks a lot more risky in a thousand dimensions with fog and wind, where there are so many more ways to go wrong. Even more so in a million dimensions.

Let me offer these problems as metaphors for the choice between careful and sloppy thinking. In general, you start with what you know now, and seek to learn more, in part to help you make key decisions. You have some degree of confidence in every relevant claim, and these can combine to specify a vector in a high dimensional cube of possible beliefs. Your key choice: how to move within this belief cube.

In a “sloppy interior” approach, you throw together weak tentative beliefs on everything relevant, using any basis available, and then try to crudely adjust them via considerations of consistency, evidence, elegance, rhetoric, and social conformity. You think intuitively, on your feet, and respond to social pressures. That is, a big group of you throw yourselves toward the middle of the cube, and pull on the tethers when you think that could help others get to a strut or corner you see. Sometimes a big group splits into two main groups who have a tug-o-war contest along one main tether axis, because that’s what humans do.

In a “careful border” approach, you try to move methodically along, or at least within sight of, struts. You make sure to carefully identify enough struts at your current corner to check your orientation and learn which strut to take next. Sometimes you “cut a corner”, jumping past more than one corner at a time, but only via carefully chosen and controlled moves. It is great when you can move with a large group who work together, as individuals can specialize in particular strut directions, etc. But as there are more different paths to reach the same destination on the border, groups there more naturally split up. If your group seems inclined toward overly risky jumps, you can split off and move more methodically along the struts. Conversely, you might try to cut a corner to jump ahead when others nearby seem excessively careful.

Today public conversations tend more to take a sloppy interior approach, while expert conversations tend more to take a careful border approach. Academics often claim to believe nothing unless it has been demonstrated to the rigorous standards of their discipline, and they are fine with splitting into differing non-interacting groups that take different paths. Outsiders often see academics as moving excessively slowly; surely more corners could be cut with little risk. Public conversations, in contrast, are centered in much larger groups of socially-focused discussants who use more emotional, elegant, and less precise and expert language and reasoning tools.

Yes, this metaphor isn’t exactly right; for example, there is a sense in which we start more naturally from the middle of a belief space. But I think it gets some important things right. It can feel more emotionally “relevant” to jump to where everyone else is talking, pick a position like others do there, use the kind of arguments and language they use, and then pull on your side of the nearest tug-o-war rope. That way you are “making a difference.” People who instead step slowly and carefully, laying foundations they have sufficient confidence to build on, may seem to others as “lost” and “out of touch”, too “chicken” to engage the important issues.

And yes, in the short term sloppy interior fights have the most influence on politics, culture, and mob rule enforcement. But if you want to play the long game, careful border work is where most of the action is. In the long run, most of what we know results from many small careful moves of relatively high confidence. Yes, academics are often overly careful, as most are more eager to seem impressive than useful. And there are many kinds of non-academic experts. Even so, real progress is mostly in collecting relevant things one can say with high enough confidence, and slowly connecting them together into reliable structures that can reach high, not only into political relevance, but eventually into the stars of significance.

Two Types of Future Filters

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth, like a big asteroid or nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.
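The claim that the remaining habitable window should roughly match the typical gap between past filter steps follows from the standard "hard steps" timing argument: conditioned on all k very hard steps finishing within a window, their completion times look roughly uniform, so the expected time left after the last step is about 1/(k+1) of the window. A small Monte Carlo sketch of that logic (my illustration, with arbitrary parameters, not from the post):

```python
import random

def mean_remaining(k, T=1.0, trials=100_000):
    # In the hard-step limit, completion times conditioned on all k steps
    # finishing within [0, T] are distributed ~uniformly on [0, T].
    total = 0.0
    for _ in range(trials):
        last = max(random.uniform(0, T) for _ in range(k))
        total += T - last          # window time left after the final step
    return total / trials

# With k = 5 hard steps, the expected remaining window is T/(k+1) = 1/6.
print(round(mean_remaining(5), 2))   # -> 0.17
```

The point is just that a long remaining window, relative to the time since the last plausible step, argues against recent hard steps.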

Thus even a disaster that kills most all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could in principle go many ways, but which lock in forever once they pick a particular way. But it still seems hard to imagine such steps in the last half billion years.

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such disasters in part on our failure to empower a world government to prevent them.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as does advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.

Radical Markets

In 1997, I got my Ph.D. in social science from Caltech. The topic that drew me into grad school, and much of what I studied, was mechanism and institution design: how to redesign social practices and institutions. Economists and related scholars know a lot about this, much of which is useful for reforming many areas of life. Alas, the world shows little interest in these reforms, and I’ve offered our book The Elephant in the Brain: Hidden Motives in Everyday Life, as a partial explanation: most reforms are designed to give us more of what we say we want, and at some level we know we really want something else. While social design scholars would do better to work more on satisfying hidden motives, there’s still much useful in what they’ve already learned.

Oddly, most people who say they are interested in radical social change don’t study this literature much, and people in this area don’t much consider radical change. Which seems a shame; these tools are a good foundation for such efforts, and the topic of radical change has long attracted wide interest. I’ve tried to apply these tools to consider big change, such as with my futarchy proposal.

I’m pleased to report that two experts in social design have a new book, Radical Markets: Uprooting Capitalism and Democracy for a Just Society:

The book reveals bold new ways to organize markets for the good of everyone. It shows how the emancipatory force of genuinely open, free, and competitive markets can reawaken the dormant nineteenth-century spirit of liberal reform and lead to greater equality, prosperity, and cooperation. … Only by radically expanding the scope of markets can we reduce inequality, restore robust economic growth, and resolve political conflicts. But to do that, we must replace our most sacred institutions with truly free and open competition—Radical Markets shows how.

While I applaud the ambition of the book, and hope to see more like it, the five big proposals of the book vary widely in quality. They put their best feet forward, and it goes downhill from there. Continue reading "Radical Markets" »
