New-Hire Prediction Markets

In my last post, I suggested that the most promising place to test and develop prediction markets is this: get ordinary firms to pay for mechanisms that induce their associates to advise their key decisions. I argued that what we need most is a regime of flexible trial and error, searching in the space of topics, participants, incentives, etc. for approaches that can add value here while avoiding the political disruptions that have plagued previous trials.

If you had a firm willing to participate in such a process, you’d want to be opportunistic about the topics of your initial trials. You’d ask them what are their most important decisions, and then seek topics that could inform some of those decisions cheaply, quickly, and repeatedly, to allow rapid learning from experimentation. But what if you don’t have such a firm on the hook, and instead seek a development plan to attract many firms?

In this case, instead of planning to curate a set of topics specific to your available firm, you might want to find and focus on a general class of topics likely to be especially valuable and feasible in roughly the same way at a wide range of firms. When focused on such a class, trials at any one firm should be more informative about the potential for trials at other firms.

One plausible candidate is: deadlines. A great many firms have projects with deadlines, and are uncertain about whether they will meet them. They should want to know not only the chance of making the deadline, but how that chance might change if they changed the project’s resources, requirements, or management. If one drills down to smaller sub-projects, whose deadlines tend to be sooner, this can allow for many trials within short time periods. Alas, this topic is also especially disruptive, as markets here tend to block project managers’ favorite excuses for deadline failure.

Here’s my best-guess topic area: new hires. Most small firms, and small parts of big firms, hire a few new people every year, where they pay special attention to comparing each candidate to a small pool of “final round” candidates. And these choices are very important; they add up to a big fraction of total firm decision value. Furthermore, most firms also have a standard practice of periodically issuing employee evaluations that are comparable across employees. Thus one could create prediction markets estimating the N-year-later (N=2?) employee evaluation of each final candidate, conditional on their being hired, as advice about whom to hire.
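
To make the mechanism concrete, here is a minimal sketch of how such a conditional market might settle, in the usual called-off-bet style; the names, prices, and binary evaluation target are my own illustrative assumptions, not a full design.

```python
from dataclasses import dataclass

@dataclass
class Bet:
    trader: str
    candidate: str
    price: float   # dollars paid per share
    shares: int

def settle(bets, hired, eval_met_target):
    """Net dollar payouts once the hire's N-year evaluation arrives.

    A share pays $1 if the hired candidate's evaluation meets the target;
    bets on candidates who were not hired are called off and refunded.
    """
    payouts = {}
    for b in bets:
        if b.candidate != hired:
            net = 0.0                          # called off: stake refunded
        elif eval_met_target:
            net = (1.0 - b.price) * b.shares   # each share pays $1
        else:
            net = -b.price * b.shares          # shares expire worthless
        payouts[b.trader] = payouts.get(b.trader, 0.0) + net
    return payouts

# Example: a trader buys 10 shares on candidate "B" at $0.60 each.
print(settle([Bet("alice", "B", 0.60, 10)], hired="B", eval_met_target=True))
# {'alice': 4.0}
```

Share prices then read directly as conditional forecasts: a $0.60 price suggests roughly a 60% chance that this candidate, if hired, will meet the evaluation target.
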
Yes, having to wait two years to settle bets is a big disadvantage, slowing the rate at which trial and error can improve practice. Yes, at many firms employee evaluations are a joke, unable to bear any substantial load of criticism or attention. Yes, you might worry about work colleagues trying to sabotage the careers of new hires that they bet against. And yes, new hire candidates would have to agree to have their application evaluated by everyone in the potential pool of market participants, at least if they reach the final round.

Even so, the value here seems so large as to make it well worth trying to overcome these obstacles. Few firms can be that happy with their new hire choices, reasonably fearing they are missing out on better options. And once you had a system working for final round hire choices, it could plausibly be extended to earlier hiring decision rounds.

Yes, this is related to my proposal to use prediction markets to fire CEOs. But that’s about firing, and this is about hiring. And while each CEO choice is very valuable, there is far more total value encompassed in all the lower personnel choices.

Prediction Markets Need Trial & Error

We economists have a pretty strong consensus on a few key points: 1) innovation is the main cause of long-term economic growth, 2) social institutions are a key changeable determinant of social outcomes, and 3) inducing the collection and aggregation of info is one of the key functions of social institutions. In addition, better institutional methods for collecting and aggregating info (ICAI) could help with the key meta-problem of making all other important choices, including the choice of our other institutions, especially institutions to promote innovation. Together, all these points suggest that one of the best ways we today could help the future is to innovate better ICAI.

After decades pondering the topic, I’ve concluded that prediction markets (and closely related techs) are our most promising candidate for a better ICAI; they are relatively simple and robust with a huge range of potential high-value applications. But, alas, they still need more tests and development before wider audiences can be convinced to adopt them.

The usual (good) advice to innovators is to develop a new tech first in the application areas where it can attract the highest total customer revenue, and also where customer value can pay for the highest unit costs. As the main direct value of ICAI is to advise decisions, we should thus seek the body of customers most willing to pay money for better decisions, and then focus, when possible, on their highest-value versions.

Compared to charities, governments, and individuals, for-profit firms are more used to paying money for things that they value, including decision advice. And the decisions of such firms encompass a large fraction, perhaps most, of the decision value in our society. This suggests that we should seek to develop and test prediction markets first in the context of typical decisions of ordinary business, slanted when possible toward their highest value decisions.

The customer who would plausibly pay the most here is the decision maker seeking related info, not those who want to lobby for particular decisions, nor those who want to brag about how accurate their info is. And they will usually prefer ways to elicit advice from their associates, instead of from distant curated panels of advisors.

We have so far seen dozens of efforts to use prediction markets to advise decisions inside ordinary firms. Typically, users are satisfied and feel included, costs are modest, and market estimates are as accurate as, or substantially more accurate than, other available estimates. Even so, experiments typically end within a few years, often due to political disruption. For example, market estimates can undermine manager excuses (e.g., “we missed the deadline due to a rare unexpected last-minute problem”), and managers dislike seeing their public estimates beaten by market estimates.

Here’s how to understand this: “Innovation matches elegant ideas to messy details.” While general thinkers can identify and hone the elegant ideas, the messy details must usually come from context-dependent trial and error. So for prediction markets, we must search in the space of detailed context-dependent ways to structure and deploy them, to find variations that cut their disruptions. First find variations that work in smaller contexts, then move up to larger trials. This seems feasible, as we’ve already done so for other potentially-politically-disruptive ICAI, such as cost-accounting, AB-tests, and focus groups.

Note that, being atheoretical and context-dependent, this needed experimentation poorly supports academic publications, making academics less interested. Nor can these experiments be enabled merely with money; they crucially need one or more organizations willing to be disrupted by many often-disruptive trials.

Ideally those who oversee this process would be flexible, willing and able as needed to change timescales, topics, participants, incentives, and who-can-see-what structures. And such trials should be done where those in the org feel sufficiently free to express their aversion to political disruption, to allow the search process to learn to avoid it. Alas, I have so far failed to persuade any organizations to host or fund such experimentation.

This is my best guess for the most socially valuable way to spend ~<$1M. Prediction markets offer enormous promise to realize vast social value, but it seems that promise will remain only potential until someone undertakes the small-scale experiments needed to find the messy details to match its elegant ideas. Will that be you?

Lottery Lawsuits, For Small Harm Law

Twenty-five years ago I posted a short essay, on which I commented ten years later. Let me now elaborate on an improved variation of that same idea.

Imagine you came out from the grocery store to find a scratch on the side of your car door, a scratch that matches the position of the door on the car next to yours. You estimate they’ve done you $100 of damage. But in our world today this is where the story ends, as it would usually be crazy to spend thousands on a lawyer to sue them for such a small amount. So law today does little to discourage such harms. People can sloppily scratch car doors without fearing that they will have to pay damages.

Now imagine a better world. You take a few pictures of the two cars, including their license plate, and then use a phone app to upload all this and officially declare that they owe you $100 in damages. Using the license plate photo, the car owner is identified and notified, and is issued a “ticket” in that amount, like tickets are now issued for parking violations. If they accept your claim and pay that amount, then it goes to you, and the issue is closed. (Same if they offer you a smaller amount to settle, which you accept.) Unpaid tickets accumulate in the usual way, and the local government uses its usual methods to try to get people to pay them.

The ticket is also settled, and no longer counted as unpaid, if they refuse to accept your claims, but still deposit at least $100. And if they do this, then you must also deposit at least $100. (You each might want to deposit more than this $100 min to help with trial legal fees.)

Both of you also submit a chance, like one in a thousand, and then both of your deposits are converted into lottery claims at the smaller of the two chances, claims which are then soon (i.e., in a few days) resolved (perhaps via collecting many similar legal cases). So if the smallest submitted chance was one in a thousand, then 999 times out of a thousand, both of your deposits disappear, and you are both notified that the issue is now settled.

However, one time out of a thousand, you both win the lottery, and then each of your accounts now holds 1000 times what you deposited there. At which point you could also settle the suit.

But if you don’t settle, then your lawsuit goes to trial, and if the court rules that their car door scratch hurt you by $100, then they now owe you $100K, to be paid out of their account. However if the court rules against you, and also affirms their automatic countersuit, that your suit was frivolous, then you now owe them $100K, to be paid out of your account. Once each of you has paid what you owe, any remaining funds in your accounts are returned to you each in cash, tax-free.

In this better world, if they scratch your car, then they expect to pay $100 on average, and you expect to get that on average. But if you just frivolously sue someone for $100, without a plausible prospect of winning, then you expect to pay $100 on average, and they expect to gain that amount. And thus in this world the prospect of such lawsuits changes behavior, toward more optimal care, just as it usually does for large harms today. Law now works to discourage small as well as large harms.
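
Here is a small sketch checking that expected-value claim with the numbers from the car-scratch example; the function and its simplifications (ignoring legal fees and risk aversion) are mine.

```python
def expected_net(deposit, chance, plaintiff_wins_trial):
    """Expected net dollars for (plaintiff, defendant), ignoring fees.

    With probability `chance` both accounts get scaled up by 1/chance
    and the case goes to trial; otherwise both deposits just vanish.
    """
    stakes = deposit / chance              # $100 / (1/1000) = $100,000
    if plaintiff_wins_trial:
        p_net = -deposit + chance * (stakes + stakes)  # keep own account, win theirs
        d_net = -deposit + chance * (stakes - stakes)  # win own account, pay it all out
    else:
        p_net = -deposit + chance * (stakes - stakes)
        d_net = -deposit + chance * (stakes + stakes)
    return p_net, d_net

print(expected_net(100, 1/1000, True))   # (100.0, -100.0): valid claim
print(expected_net(100, 1/1000, False))  # (-100.0, 100.0): frivolous suit
```

Note that the lottery chance drops out of these expected values; varying it changes only the variance that both parties must bear.
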

Note that we might want to set some lower limit on allowed lottery chances, such as via a max limit on how much can appear in your account after winning. It also seems fine to let people sell their claims, and also to insure against these lottery risks, perhaps even by depositing money in other accounts to be won exactly when the main lottery is lost. And once notified, defendants should be required to save relevant info on a case until its lottery is resolved.

The key idea here is this: If I’m willing to suffer a lottery risk to sue you, you must also suffer the same lottery risk to defend yourself.

That is, if I claim that you hurt me and am willing to deposit an amount to cover your counter-claim that I’ve sued you frivolously, then I can force you to make (at least) the same size deposit, after which both of our deposits, and also our legal claims against each other, are converted into lottery claims. If we win this lottery, we do a trial the usual way, except now with larger stakes.

Why Did Religion Change?

In his new book How Religion Evolved: And Why It Endures, Robin Dunbar reviews many details of the history and correlates of religion. He says that religion’s main function is to aid group cohesion, and that religion has been key to allowing humans to sustain larger groups. While I have doubts about how well he explains these details, his book gives me an excuse to return to this important topic.

To review his review, many animals travel in groups to protect against outside threats, via treating other group members either generically or via simple status ladders. But primates formed large groups via treating every other member differently, especially via hugs and grooming. Primates thus needed big brains to manage the politics of such groups.

Our especially big brains have allowed human groups to get even larger before fragmenting due to internal conflicts. To further support the social cohesion that can sustain larger groups, we evolved smiles, laughter, singing, dancing, communal eating, drugs, sacrifices, humor, emotional story telling, more rituals, and moralizing gods. And we felt closer to associates with whom we shared language, place of origin, child to adult path, hobbies, worldview, musical taste, and sense of humor.

We also evolved trance/mystic experiences, often with romance-like feelings, and often enhanced by drugs. And we evolved supernatural beliefs, with which we made sense of the world, felt power over it, and could accuse associates of witchcraft. Religion is built especially on these two foundations.

All of this makes sense to me. The puzzle I see is that we’ve seen big changes over time, in which the relative importance of these factors has changed. How can we explain such changes?

Language seems to have arisen about 500Kya. Our earliest spirituality apparently included altered mental states such as trances, and animism, wherein most everything around us had a spirit. Roughly 100Kya our ancestors started to put valuable goods into their graves, suggesting beliefs in an afterlife. Such beliefs seem to go together with ancestor worship, and with shamans who specialize in religion.

Starting roughly 10Kya, with the farming revolution, humans started to live more densely, and built special religious spaces. They also found more potent drugs, such as poppy seeds and beer. We then turned to ritual (often human) sacrifice to capricious gods. In larger communities, we soon after saw social stratification, including a separate class of priests, especially when food storage was possible. In this kind of religion, rituals were a communal duty, to placate the gods, and individual beliefs were unimportant.

Then starting roughly 4Kya, near the “Axial Age”, we saw the rise of a new kind of religion associated with farming and herding in even larger communities, at the latitudes where such larger communities were possible. Most “traditional” religions of today arose during this ancient era. This type of religion was centered on individual beliefs in moralizing gods described in writings that told of stories and doctrines. Religion became a personal duty, often resulting from a personal choice, and love and forgiveness came to matter more, relative to the sheer power of gods.

Finally, we have recently seen a great and somewhat puzzling decline in religion, apparently in association with rising wealth, even though we still have great needs for group cohesion.

To explain these changes, it helps little to point to the timeless advantages of these many strategies. For example, both gods who punish moral violations and also capricious gods who demand ritual sacrifices seem useful for promoting social cohesion. So why did they arise at the particular times they did? Dunbar doesn’t say much on this.

It seems to me that we must focus on using changes to explain changes. For example, perhaps communal responsibility for ritual sacrifice became much more socially potent when aided by drugs in the new religious spaces built for new larger denser communities. And perhaps personal responsibility and beliefs toward moralizing loving gods became much more socially potent when aided by priests who consulted written stories and doctrines.

If “woke” is a new “religion,” then it seems a complement to drugs, it lacks sacred texts, and it often sacrifices humans to pay for a collective guilt, in front of big crowds in the special big public spaces of social media. And it seems to create a new class of priests, and perhaps also a new stratification of the population. That sure sounds a lot like the religious style of the first half of the farming era; is that style returning now?

Foom Update

To extend our reach, we humans have built tools, machines, firms, and nations. And as these are powerful, we try to maintain control of them. But as efforts to control them usually depend on their details, we have usually waited to think about how to control them until we had concrete examples in front of us. In the year 1000, for example, there was little we could usefully think about how to control things that appeared only in the last two centuries, such as cars or international courts.

Someday we will have far more powerful computer tools, including “advanced artificial general intelligence” (AAGI), i.e., with capabilities even higher and broader than those of individual human brains today. And some people today spend substantial effort worrying about how we will control these future tools. Their most common argument for this unusual strategy is “foom”.

That is, they postulate a single future computer system, initially quite weak and fully controlled by its human sponsors, but capable of action in the world and with general values to drive such action. Then over a short time (days to weeks) this system dramatically improves (i.e., “fooms”) to become an AAGI far more capable even than the sum total of all then-current humans and computer systems. This happens via a process of self-reflection and self-modification, and this self-modification also produces large and unpredictable changes to its effective values. They seek to delay this event until they can find a way to prevent such dangerous “value drift”, and to persuade those who might initiate such an event to use that method.

I’ve argued at length (1 2 3 4 5 6 7) against the plausibility of this scenario. It’s not that it’s impossible, or that no one should work on it, but that far too many take it as a default future scenario. But I haven’t written on it for many years now, so perhaps it is time for an update. Recently we have seen noteworthy progress in AI system demos (if not yet commercial application), and some have urged me to update my views as a result.

The recent systems have used relatively simple architectures and basic algorithms to produce models with enormous numbers of parameters from very large datasets. Compared to prior systems, these systems have produced impressive performance on an impressively wide range of tasks, even though they are still quite far from displacing humans in any substantial fraction of their current tasks.

For the purpose of reconsidering foom, however, the key things to notice are: (1) these systems have kept their values quite simple and very separate from the rest of the system, and (2) they have done basically zero self-reflection or self-improvement. As I see AAGI as still a long way off, the features of these recent systems can only offer weak evidence regarding the features of AAGI.

Even so, recent developments offer little support for the hypothesis that AAGI will be created soon via the process of self-reflection and self-improvement, or for the hypothesis that such a process risks large “value drifts”. These current ways that we are now moving toward AAGI just don’t look much like the foom scenario. And I don’t see them as saying much about whether ems or AAGI will appear first.

Again, I’m not saying foom is impossible, just that it looks unlikely, and that recent events haven’t made it seem more so.

These new systems do suggest a substantial influence of architecture on system performance, though not obviously at a level out of line with that in most prior AI systems. And note that the abilities of the very best systems here are not that much better than those of the 2nd and 3rd best systems, arguing weakly against AAGI scenarios where the best system is vastly better.

What Do We Owe The World?

We each owe some degree of consideration to our close associates, and to larger groups with which we associate. But what do we owe the larger world and universe?

Some of us think we should each put in some effort to improve that universe. Not just that this would be nice, but that we are morally obligated to do so. But how should we think about this obligation?

We could try to collect a long list of specific things different people should do for the world, and how those things vary with context. But is there a simpler more general way to describe these obligations?

We can think of ourselves as made up of many smaller selves, at each moment in time, and in each possible world. And then standard expected utility theory says that we maximize a weighted sum across these many sub-selves. So a natural way to include the rest of the universe is to expand this weighted sum to include every other creature, and their many component selves.

The relative weight we put on others might vary with their distance in spacetime, and with their similarity to us. But a general problem with this approach is that in many scenarios we will either want to do near nothing or near everything. If we consider some large group of others, then as we increase the weight we put on members of that group, at first we will want to do very little for them, and then as the weight passes a key threshold we suddenly switch to wanting to put most all of our efforts into helping them. The weight must be finely tuned to induce intermediate efforts.
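
A small numeric sketch of this threshold effect; the log-utility model and all numbers here are my own illustrative assumptions, not the post’s.

```python
import numpy as np

def optimal_share(w, m=1.0):
    """Best fraction x of resources given to a large group, for weight w.

    Own utility log(1 - x) has diminishing returns; the group is so large
    that its marginal utility of help is a constant m per unit, so we
    maximize U(x) = log(1 - x) + w * m * x over a grid of x values.
    """
    xs = np.linspace(0.0, 0.999, 10_000)
    return xs[np.argmax(np.log(1.0 - xs) + w * m * xs)]

for w in [0.5, 1.0, 2.0, 10.0, 100.0]:
    print(f"weight {w:6.1f} -> give {optimal_share(w):.2f} of resources")
# weight    0.5 -> give 0.00
# weight    1.0 -> give 0.00
# weight    2.0 -> give 0.50
# weight   10.0 -> give 0.90
# weight  100.0 -> give 0.99
```

Intermediate giving appears only for weights in the narrow band just above the threshold; on either side, the optimum sits near a corner.
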

If intermediate levels of help sound more reasonable, one way to get that is to talk in terms of a budget: we might each have an obligation to spend at least some fixed fraction of our resources helping the world. Resources such as money, time, reputation, etc. The simplest version of this would require the same fraction for everyone, though more complex versions could make this vary with context.

Bryan Caplan’s new book is titled How Evil Are Politicians?, based on this essay wherein he seems to embrace something like a budget obligation story, except with politicians having much larger budget obligations:

If you’re in a position to pass or enforce laws, lives and freedom are in your hands. Common decency requires … politicians to make … intellectual hygiene their top priority. Until they calmly recuse themselves from their society and energetically weigh a wide range of moral arguments, they have no business lifting a political finger. At this point, the iniquity of practicing politicians should be clear. How much time and mental energy does the average politician pour into moral due diligence? A few hours a year seems like a high estimate. They don’t just fall a tad short of their moral obligations. They’re too busy passing laws and giving orders to face the possibility that they’re wielding power illegitimately.

To check on all this, I did a series of Twitter polls asking what fraction of their resources different kinds of people are obligated to spend trying to help the world. Here are the resulting (median of lognormal-fit) % estimates:

The basic %-of-budget moral framing seems confirmed by the fact that many answered these questions and few complained about the framing. Furthermore, respondents do seem to think this budget varies with type of person, and agree with Caplan that politicians have much higher obligations.

However, respondents had enormously divergent opinions on what is that obligation budget % (median standard dev. is a factor of ~18), and even the middle estimates in the chart above seem to me to vary way too much across types of people. It seems to me unfair to demand far more efforts by others than you are willing to make. And it seems disrespectful to demand far less from other kinds of people, as if you don’t see them as sufficiently human to hold them to moral standards.
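
For the curious, here is a rough sketch of how such a lognormal-fit summary might be computed from bucketed poll responses; the method details and all bucket ranges and counts below are my own guesses, not the actual poll data.

```python
import numpy as np

# Hypothetical poll: respondent counts for each "% of resources" range.
buckets = {(0.1, 1.0): 40, (1.0, 10.0): 30, (10.0, 30.0): 20, (30.0, 100.0): 10}

# Place each response at its bucket's geometric midpoint; the fitted
# lognormal's median is then exp(mean of logs), and its spread is the
# multiplicative factor exp(std of logs).
logs = np.concatenate([np.full(n, (np.log(lo) + np.log(hi)) / 2)
                       for (lo, hi), n in buckets.items()])
print(f"median: {np.exp(logs.mean()):.1f}%  spread: x{np.exp(logs.std()):.1f}")
# median: 2.4%  spread: x6.4
```
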

This looks to me more like a status story, wherein we try to hold higher status people to higher moral standards, as some sort of “progressive taxation” of status. And while progressive taxation might make sense for governments, having moral obligations vary this strongly with status just doesn’t make much sense of my moral intuitions. We should all try to help others, at least to some similarly modest degree.

Added 10a: The prior numbers in the table were wrong due to a math mistake, now fixed.

The Meaning of Life

Humans act all the time, which implies that they have preferences, i.e. persistent internal structures which say which choices they make in which situations. But humans aren’t usually very good at explaining their preferences; they find it hard to give a consistent abstract account that explains their choices. They can act, but can’t say what they want.

One of the things people sometimes say is that they make their choices to gain “meaning”. But they say many different conflicting things about what things actually give “meaning”, different not only between people but even within the same person. That is, people seem quite confused about the “meaning of life”.

If humans are at root pretty similar, then having any one person learn the meaning of their life would seem to be quite informative to everyone else about the meaning of their lives. And a substantial fraction of the many billions of humans who have ever lived have in fact tried to learn about the meaning of their lives. Furthermore, some of these people have claimed to have succeeded in discovering this meaning.

Yet no one seems to have persuaded a substantial fraction of humanity of their view on this. Presented solutions to this key question seem either overly vague or insufficiently supported by evidence in human behavior or words. What can we conclude from this key fact? Let us consider some possible explanations.

One possibility is that there is just no such thing. Human actions are induced by a complex mess of structures that is not reasonably summarized by any abstract coherent shared concept of “meaning”. When people have a feeling of having found “meaning”, that isn’t the result of their matching their lives to such a coherent pre-existing concept, but instead due to yet another complex mess of social and mental processes. We feel “meaning” when that seems to be useful to our minds, but there is no there there. We haven’t found it because it doesn’t exist.

A second possibility is that people have in fact discovered simple abstract expressible truths about the meaning of our lives. But these truths are mostly ugly, and thus not one they are eager to own and tell to others. And when they do tell others, their audiences mostly do not want to hear, and instead prefer to embrace the mistaken claims of those who do not actually know, but instead wishfully offer more aspirational accounts.

And a third possibility is, what? My mind goes blank here. How could there be simple abstract truth on what gives us meaning, to explain our preferences, and yet either no one among the billions who have looked has ever found it, or when they do find it they somehow can’t communicate it to others, even though to others this discovery would be quite unobjectionable and pleasing?

Dealism

We economists, and also other social scientists and policy specialists, are often criticized as follows:

You recommend some policies over others, and thus make ethical choices. Yet your analyses are ethically naive and impoverished, including only a tiny fraction of the relevant considerations known to professional ethicists. Stop it, learn more on ethics, or admit you make only preliminary rough guesses.

My response is “dealism”:

The world is full of competent and useful advisors (doctors, lawyers, therapists, gardeners, realtors, hairstylists, etc.) similarly ignorant on ethics. Yes, much advice says “given options O, choose X to achieve purpose P”, but when they don’t specify purpose P the usual default is not P = “act the most ethically”, but instead P = “get what you want”.

Economists’ policy recommendations are usually designed to help relatively large groups make better social “deals”, via identifying their “Pareto frontier” (within option subspaces). This frontier is the set of options where some can get more of what they want only via others getting less. We infer what people want via the “revealed preferences” of models that fit their prior choices.
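
As a toy illustration of identifying such a frontier among discrete options (all deals and payoffs below are invented):

```python
def pareto_frontier(options):
    """options: dict name -> tuple of payoffs, one entry per party.

    Keeps options that are not dominated: no other option gives every
    party at least as much, and some party strictly more.
    """
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return {name: u for name, u in options.items()
            if not any(dominates(v, u)
                       for other, v in options.items() if other != name)}

deals = {"status quo": (2, 2), "tariff": (3, 1), "free trade": (4, 3),
         "subsidy": (1, 4), "trade war": (1, 1)}
print(pareto_frontier(deals))
# {'free trade': (4, 3), 'subsidy': (1, 4)}
```

Options off this frontier leave “free money” on the table: some other deal exists that every party weakly prefers.
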

As people can be expected to seek out advice that they expect to help them get what they want, by branding ourselves in this way we economists can induce more people to seek our advice. We can reasonably want to fill this role. Doing so does not commit us to taking on all possible clients, nor to making any ethical claims whatsoever.

Yes, if people are hypocritical, and pretend to want morality more than they do, they may prefer advisors who similarly pretend. In which case we economists can also pretend that our clients want that, to help preserve their pretensions. But we wouldn’t need to know more about ethics than our clients do, and beneath that veneer of morality, clients likely prefer our advice to be targeted mostly at getting them what they want.

Yes, there are many ways one might argue that this economist practice is ethically good. But I make no such arguments here.

Yes, there are other possible ways to help people. Helping them identify deals is not the only way, and often not the best way, to help or advise people.

Most people want in part to be moral, and they think that what they and others want is relevant to what acts are moral. It is just that these two concepts are not identical. If in fact what people want is only and wholly to be ethical, then the difference between being ethical and getting what you want collapses. But even so, this econ approach remains useful, and in this case our advice now also becomes ethical.

The same arguments apply if we replace “be ethical” with “do what you have good reasons to do”. If there is a difference, then others should seek our advice more if it is on what they want, relative to what they have reasons to do.

What if the process of hearing our advice, or following it, can change what people want? (The advice might include a sermon, and doing something can change how you feel about it.) In this case, people will most seek out our advice when those changes in wants match their meta-wants regarding such changes. And those meta-wants are revealed in part via how they choose advisors.

For example, when people choose advisors retrospectively, based on who seems to have been pleased with the advice that they were given, that reveals a preference for changes in wants that make them pleased after the fact. In that case, you’d want to give the advice that resulted in a combination of outcomes and want changes that made them pleased later. In this case they wouldn’t mind changes to their wants, as long as those resulted in their being more pleased.

In contrast, when people choose advisors prospectively, based on how pleased they are now with the outcomes that they expect to result from your advice, then you would only want to offer advice which clients expect to change their wants if such clients expect to be pleased by such changes. So you’d want to offer advice that seemed to promote the want changes that they aspire to, but prevent the want changes that they fear or despise.

And that’s it. Many presume that policy discussions are about morality. But as a policy advisor, you can reasonably take the stance that your advice is not about morality, and that economic analysis is well-suited to the advice role that you have chosen.

Hidden Motives In Law

In our book The Elephant in the Brain: Hidden Motives in Everyday Life, Kevin Simler and I first review the reasons to expect humans to often have hidden motives, and then we describe our main hidden motives in each of ten areas of life. In each area, we start with the usual claimed motive, identify puzzles that don’t fit well with that story, and then describe another plausible motive that fits better.

We hoped to inspire others to apply our method to more areas of life, but we have so far largely failed there. So it’s past time for me to take up that task. And as law & economics is the class I teach most often, that’s a natural first place to start. So what are our motives regarding our official systems for dispute resolution?

Saying the word “justice” doesn’t help much; what does that mean? But the field of law and economics has a standard answer that looks reasonable: economic efficiency. Which in law translates to encouraging cost-benefit-optimal levels of commitment, reliance, care, and activity. And the substantial success of law and economics scholarship suggests that this is in fact an important motive in law. Furthermore, as most everyone can get behind it, this is plausibly our most overt motive regarding law. But we also see many puzzles in law not well explained by this approach. Which suggests to me three other motives.
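
Before turning to those other motives, here is a toy version of that efficiency criterion, with invented numbers: pick the care level that minimizes the cost of care plus the expected accident harm.

```python
care_levels = [0, 1, 2, 3, 4]                # units of precaution taken
cost_of_care = lambda c: 10.0 * c            # $10 per unit of care
accident_prob = lambda c: 0.5 / (1 + c)      # more care, fewer accidents
harm = 200.0                                 # damages if an accident occurs

totals = {c: cost_of_care(c) + accident_prob(c) * harm for c in care_levels}
best = min(totals, key=totals.get)
print(best, {c: round(t, 1) for c, t in totals.items()})
# 2 {0: 100.0, 1: 60.0, 2: 53.3, 3: 55.0, 4: 60.0}
```

Liability rules that make harmers expect to pay for the harms they cause push them toward exactly this minimizing level of care.
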

Back in the forager era, before formal law, disputes were resolved by mobs. That is, the local band talked informally about accusations of norm violations, came to a consensus about what to do, and then implemented that themselves. As this mob justice system has many known failure modes, we probably added law as a partial replacement in order to cut such failures. Thus a plausible secondary motive in law is to try to minimize the common failings of mob justice, and to insulate the legal system from mob influence.

The main failure of mob justice is plausibly a rush to judgment; each person in a gossip network has local incentives to accept the stance of whomever first reports an accusation to them. And the most interested parties are far more likely than average to be the source of the first report someone hears. In response, law seeks to make legal decision makers independent and disconnected from the disputants and their gossip network, and to make such decision makers listen to all the evidence before making their decision. The rule against hearsay evidence is also plausibly to limit the influence of gossip on trials.

Leaders of the legal system often express concerns about its perceived legitimacy, and this makes sense as a third motive of the legal system. And as the most common threat to such legitimacy is widespread criticism of particular legal decisions, many features of law can be understood as ways to avoid such criticism. For example, criticism is likely cut via having legal personnel, venues, and demeanors be maximally prestigious and deferential to legal authorities.

Also, the more complex are legal language and arguments, the harder it becomes for mobs to question them. The longer the delay before final legal decisions, the less passion will remain to challenge them. Finally, the more expensive is the legal process, the fewer rulings there will be to question. Our most official legal systems differ from all our other less official dispute resolutions systems in all of these ways. They are slower, more expensive, less understandable, and more prestigious.

The last hidden motive that I think I see is that each legal jurisdiction wants to look good to outsiders. So most every jurisdiction has laws against widely disapproved behaviors, such as adultery, prostitution, or drinking alcohol on the street, even though such laws are often quite weakly enforced. Most set high standards of proof and adopt the usual rules constraining what evidence can be presented at trial, even though there’s little evidence that these rules help on net.

Most jurisdictions pretend to enforce all laws equally on everyone, but actually give police differential priorities; some locations, suspects, and victims count a lot more than others. It would be quite feasible, and probably a lot more efficient, to use a bounty hunting system to enforce laws, and most locals are well aware of these varying priorities. But that would require admitting such differential priorities to outsiders, via explicit differences in the bounties paid. So most jurisdictions prefer government employees, who can be more hypocritical.

Similarly, our usual form of criminal punishment, nice jail, is less efficient than all the other forms, including mean jail, exile, corporal punishment, and fines. Holding constant how averse a convict is to suffer each punishment, nice jail costs the most. Alas, the world has fallen into an equilibrium where any jurisdiction that allows any punishment other than nice jail is declared to be cruel and unjust. Even giving the convict the choice between such punishments is called unjust. So the strong desire to avoid such accusations pushes most jurisdictions into using the least efficient form of punishment.

In sum, I see four big motives in law: encouraging commitment and care, avoiding failings of mob justice, preserving system legitimacy via avoiding clear decisions, and hindering distant observers from accusing a jurisdiction of injustice, even if most locals are not fooled.

One can of course postulate many more possible motives, including diverting revenue and status to legal authorities, preserving and increasing existing inequalities, giving civil authorities more arbitrary powers, and empowering busybodies to meddle in the lives of others. But it isn’t clear to me that these add much more explanatory power, given the above motives.

Will Design Escape Selection?

In the past, many people and orgs have had plans and designs, many of which made noticeable differences to the details of history. But regarding most of history, our best explanations of overall trends have been in terms of competition and selection, including between organisms, species, cultures, nations, empires, towns, firms, and political factions.

However, when it comes to the future, especially hopeful futures, people tend to think more in terms of design than selection. For example, H.G. Wells was willing to rely on selection to predict a future dystopia in The Time Machine, but his utopia in Things to Come was the result of conscious planning replacing prior destructive competition. Hopeful futurists have long painted pictures of shiny designed techs, planned cities, and wise cooperative institutions of charity and governance.

Today, competition and selection continue on in many forms, including political competition for the control of governance institutions. But instead of seeing governance, law, and regulation as driven largely by competition between units of governance (e.g., parties, cities, or nations), many now prefer to see them in design terms: good people coordinating to choose how we want to live together, and to limit competition in many ways. They see competition between units of governance as largely passé, and getting more so as we establish stronger global communities and governance.

My future analysis efforts have relied mostly on competition and selection, as in Age of Em, post-em AI, Burning the Cosmic Commons, and Grabby Aliens, and in my predictions of long views and abstract values. Their competitive elements, and what that competition produces, are often described by others as dystopian. And the most common long-term futurist vision I come across these days is of a “singleton” artificial general intelligence (A.G.I.) for whom competition and selection become irrelevant. In that vision (of which I am skeptical), there is only one A.G.I., which has no internal conflicts, grows in power and wisdom via internal reflection and redesign, and then becomes all powerful and immortal, changing the universe to match its value vision.

Many recent historical trends (e.g., slavery, democracy, religion, fertility, leisure, war, travel, art, promiscuity) can be explained in terms of rising wealth inducing a reversion to forager values and attitudes. And I see these design-oriented attitudes toward governance and the future as part of this pro-forager trend. Foragers didn’t overtly compete with each other, but instead made important decisions by consensus, and largely by appeal to community-wide altruistic goals. The farming world forced humans to more embrace competition, and become more like our pre-human ancestors, but we were never that comfortable with it.

The designs that foragers created, however, were too small to reveal the key obstacle to this vision of civilization-wide collective design to overrule competition: rot (see 1 2 3 4). Not only is it quite hard in practice to coordinate to overturn the natural outcomes of competition and selection, but the sorts of complex structures that we are tempted to use to achieve that purpose consistently rot, and decay with time. If humanity succeeds in creating world governance strong enough to manage competition, those governance structures are likely to prevent interstellar colonization, as that strongly threatens their ability to prevent competition. And such structures would slowly rot over time, eventually dragging civilization down with them.

If competition and selection manage to continue, our descendants may become grabby aliens, and join the other gods at the end of time. In that case one of the biggest unanswered questions is: what will be the key units of future selection? How will those units manage to coordinate, to the extent that they do, while still avoiding the rotting of their coordination mechanisms? And how can we now best promote the rise of the best versions of such competing units?
