Monthly Archives: November 2019

Designing Crime Bounties

I’ve been thinking about how to design a bounty system for enforcing criminal law. It is turning out to be a bit more complex than I’d anticipated, so I thought I’d try to open up this design process by describing key design considerations and inviting your suggestions.

The basic idea is to post bounties, paid to the first hunter to convince a court that a particular party is guilty of a particular crime. In general that bounty might be paid by many parties, including the government, though I have in mind a vouching system, wherein the criminal’s voucher pays a fine, and part of that goes to pay a bounty. 

Here are some key concerns:

  1. There needs to be a budget to pay bounties to hunters.
  2. We don’t want criminals to secretly pay hunters to not prosecute their crimes.
  3. We may not want the chance of catching each crime to depend heavily on the random ability of a single hunter. 
  4. We want incentives to adapt, i.e., use the most cost-effective hunter for each particular case. 
  5. We want incentives to innovate, i.e., develop more cost-effective ways to hunt over time. 
  6. The first hunter allowed to see a crime scene, or do an autopsy, etc., may mess it up for other hunters. 
  7. We may want suspects to have a right against double jeopardy, so they can only be prosecuted once.
  8. Giving many hunters extra rights to penetrate privacy shields may greatly reduce effective privacy.
  9. It may be a waste of time and money for several hunters to simultaneously pursue the same crime. 
  10. Witnesses may chafe at having to be interviewed by several hunters re the same events.

In typical ancient legal systems, a case would start with a victim complaint. The victim, with help from associates, would then pick a hunter, and pay that hunter to find and convict the guilty. The ability to sell the convicted into slavery and to get payment from their families helped with 1, but we no longer allow these, making this system problematic. Which is part of why we’ve added our current system. Victims have incentives to address 2-4, though they might not have sufficient expertise to choose well. Good victim choices give hunters incentive to address 5. The fact that victims picked particular hunters helped with 6-10. 

The usual current solution is to have a centrally-run government organization. Cases start via citizen complaints and employee patrols. Detectives are then assigned mostly at random to particular local cases. If an investigation succeeds enough, the case is given to a random local prosecutor. Using government funds helps with 1, and selecting high quality personnel helps somewhat with 3. Assigning particular people to particular cases helps with 6-10.  Choosing people at random, heavy monitoring, and strong penalties for corruption can help with 2. This system doesn’t do so well on issues 4-5. 

The simplest way to create a bounty system is to just authorize a free-for-all, allowing many hunters to pursue each crime. The competition helps with 2-5, but having many possible hunters per crime hurts on issues 6-10. One way to address this is to make one hunter the primary hunter for each crime, the only one allowed any special access and the only one who can prosecute it. But there needs to be a competition for this role, if we are to deal well with 3-5.

One simple way to have a competition for the role of primary hunter of a crime is an initial auction; the hunter who pays the most gets it. At least this makes sense when a crime is reported by some other party. If a hunter is the one to notice a crime, it may make more sense for that hunter to get that primary role. The primary hunter might then sell that role to some other hunter, at which time they’d transfer the relevant evidence they’ve collected. (Harberger taxes might ease such transfers.)
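The Harberger-tax idea mentioned above can be sketched in a few lines. This is a toy illustration only; the class name, tax rate, and prices are hypothetical, not from the post. The holder of the primary-hunter role declares a price, owes a periodic tax proportional to it, and must sell the role to anyone willing to pay the declared price.

```python
# Toy sketch of a Harberger-taxed primary-hunter role (names, tax rate,
# and prices are hypothetical illustrations).
class HarbergerRole:
    def __init__(self, holder, declared_price, tax_rate=0.05):
        self.holder = holder
        self.declared_price = declared_price
        self.tax_rate = tax_rate  # fraction of declared price owed per period

    def tax_due(self):
        # Declaring high deters buyouts but raises the tax, so holders
        # are nudged toward honest self-assessment of the role's value.
        return self.tax_rate * self.declared_price

    def buyout(self, challenger, offer, new_declared_price):
        # Any hunter may take over the role at the declared price;
        # evidence collected so far would transfer with it.
        if offer < self.declared_price:
            return False
        self.holder = challenger
        self.declared_price = new_declared_price
        return True

role = HarbergerRole("hunter_A", declared_price=1000)
```

The point of the self-declared price is that it makes transfers routine: a challenger who values the role more than the holder's declaration can always force a sale, without a negotiation that the incumbent could stall.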

Profit-driven hunters help deal with 3-5, but problem 2 is big if selling out to the criminal becomes the profit-maximizing strategy. That gets especially tempting when the fine that the criminal pays (or the equivalent punishment) is much more than the bounty that the hunter receives. One obvious solution is to make such payoffs a crime, and to reduce hunter privacy in order to allow other hunters to find and prosecute violations. But is that enough?

Another possible solution is to have the primary hunter role expire after a time limit, if that hunter has not formally prosecuted someone by then. The role could then be re-auctioned. This might need to be paired with penalties for making overly weak prosecutions, such as loser-pays on court costs. And the time delay might make the case much harder to pursue.

I worry enough about issue 2 that I’m still looking for other solutions. One quite different solution is to use decision markets to assign the role of primary hunter for a case. Using decision markets that estimate expected fines recovered would push hunters to accumulate track records showing high fine recovery rates. 

Being paid by criminals to ignore crimes would hurt such track records, and thus such corruption would be discouraged. This approach could rely less on making such payoffs illegal and on reduced hunter privacy. 

The initial hunter assignment could be made via decision markets, and at any later time that primary role might be transferred if a challenger could show a higher expected fine recovery rate, conditional on their becoming primary. It might make sense to require the old hunter to give this new primary hunter access to the evidence they’ve collected so far. 

This is as far as my thoughts have gone at the moment. The available approaches seem okay, and probably better than what we are doing now. But maybe there’s something even better that you can suggest, or that I will think of later. 


Social Roles Make Sense

The modern world relies greatly on a vast division of labor, wherein we each do quite different tasks. Partially as a result, we live in different places, have different lifestyles, and associate with different people. The ancient world also had a division of labor, but in addition to doing different tasks, people tended to have expectations about what kinds of people would tend to do what kinds of tasks, live where, and associate with whom. Often strong expectations. Such expectations can be called social “roles”.

For example, in a society with “gender roles”, there are widely shared expectations regarding the kinds of tasks that women do, relative to men. In some societies these expectations have been so strong that all women were strongly and directly prevented from doing any other tasks. But more commonly, expectations could often be violated, if one paid a sufficient price. Similarly, ancient societies often had roles related to family, ethnicity, class, age, body plan, personality, and geographic location. People who started life with particular values of these parameters were channeled into particular tasks, places, training regimes, and associations, choices that tended to support their doing particular future tasks, with matching lifestyles, associations, etc.

When there is an existing pattern of what sorts of people tend to do what tasks and fill what social slots, then it is natural and cost-reducing to at least weakly use those patterns to predict what sorts of people will do well at what tasks in the near future. Furthermore, it is natural and cost-reducing to at least weakly use future task expectations to decide the locations, training, associations, etc., of people earlier in life.

It seems obvious to me that it is possible to have both overly weak and overly strong social roles. With overly strong social roles, we rely too much on initial expectations, experiment too little with alternate allocations, and act too little on any info we acquire about people as their lives progress. But with overly weak social roles, we rely too little on easily accessible info on what sorts of people are likely to end up well-suited to particular roles.

For example, consider climate roles. If you grow up in a particular climate, there’s a better than random chance that you will live in a similar climate when you are older. So it makes sense early in life for you to adapt to that climate in your habits and attitudes. When people are looking later for someone to live or work in that climate, it makes sense for them to prefer people already experienced with that climate. Part of this could be genetic, in that people with genes well suited to a climate may have been previously preferentially selected to live there. But it mostly doesn’t matter the cause; it just makes sense to respond to these patterns in the obvious way.

(Yes, sometimes one will want to pick people who seem especially badly-matched to certain tasks or context, just to experiment and check one’s assumptions about matching. But such experiments are unusual as choices.)

Of course the world may sometimes stumble into inefficient equilibria, wherein we keep tending to assign certain sorts of people to certain tasks, when we’d be even better off with some other pattern of who does what. In such cases we might try to break out of previous patterns, in part via discouraging people from using some features as cues to assigning some aspects of tasks, locations, associations, etc. This is one possible justification for “anti-discrimination” rules and laws.

But this certainly doesn’t justify a general prohibition on any sorts of social roles whatsoever. And any decisions based on theories saying that we were in inefficient equilibria should be periodically re-examined, to see if observed patterns of who seems to be good at what support such theories. We might have been mistaken. And unless there is some market failure that we must continually fight against, we should expect to need anti-discrimination rules only for a limited time, until new and better equilibria can be reached.

Yes, among the features that we can use to estimate who is fit for what roles, some of those features are easier for individuals to change, while others are harder to change. However, it isn’t clear why this distinction matters that much re the suitability of such features for task assignment. Even when features can change, there will be a cost of such changes, and so it will often be more cost-effective to use people who already have the suitable features, instead of getting other people to change to become suitable.

From a conversation with John Nye.


What Info Is Verifiable?

For econ topics where info is relevant, including key areas of mechanism design and law & econ, we often make use of a key distinction: verifiable versus unverifiable info. For example, we might say that whether it rains in your city tomorrow is verifiable, but whether you feel discouraged tomorrow is not verifiable. 

Verifiable info can much more easily be the basis of a contract or a legal decision. You can insure yourself against rain, but not discouragement, because insurance contracts can refer to the rain, and courts can enforce those contract terms. And as courts can also enforce bets about rain, prediction markets can incentivize accurate forecasts on rain. Without that, you have to resort to the sort of mechanisms I discussed in my last post. 
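As a toy illustration of why contracts need verifiable info (premium and payout numbers here are hypothetical): once the parties agree on a fact source, settlement is just a function of the reported fact.

```python
# Toy rain-insurance settlement: net payoff to the insured party.
def settle_rain_insurance(premium, payout, it_rained):
    # `it_rained` must come from a source all parties accept, such as
    # an official weather record -- that shared acceptance is what
    # makes the fact "verifiable" enough to contract on.
    return payout - premium if it_rained else -premium

net_if_rain = settle_rain_insurance(premium=10, payout=100, it_rained=True)
net_if_dry = settle_rain_insurance(premium=10, payout=100, it_rained=False)
```

No such function can be written for "you feel discouraged tomorrow": there is no source the parties and a court would all accept as settling the fact.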

Often, traffic police can officially pull over a car only if they have a verifiable reason to think some wrong has been done, but not if they just have a hunch. In the blockchain world, things that are directly visible on the blockchain are seen as verifiable, and thus can be included in smart contracts. However, blockchain folks struggle to make “oracles” that might allow other info to be verifiable, including most info that ordinary courts now consider to be verifiable. 

Wikipedia is a powerful source of organized info, but only info that is pretty directly verifiable, via cites to other sources. The larger world of media and academia can say many more things, via its looser and more inclusive concepts of “verifiable”. Of course once something is said in those worlds, it can then be said on Wikipedia via citing those other sources.

I’m eager to reform many social institutions more in the direction of paying for results. But these efforts are limited by the kinds of results that can be verified, and thus become the basis of pay-for-results contracts. In mechanism design, it is well known that it is much easier to design mechanisms that get people to reveal and act on verifiable info. So the long term potential for dramatic institutional gains may depend crucially on how much info can be made verifiable. The coming hypocralypse may result from the potential to make widely available info into verifiable info. More direct mind-reading tech might have a similar effect. 

Given all this reliance on the concept of verifiability, it is worth noting that verifiability seems to be a social construct. Info exists in the universe, and the universe may even be made out of info, but this concept of verifiability seems to be more about when you can get people to agree on a piece of info. When you can reliably ask many different sources and they will all confidently tell you the same answer, we tend to treat that as verifiable. (Verifiability is related to whether info is “common knowledge” or “common belief”, but the concepts don’t seem to be quite the same.)

It is a deep and difficult question what actually makes info verifiable. Sometimes when we ask the same question to many people, they will coordinate to tell us the answer that we or someone wants to hear, or will punish them for contradicting. But at other times when we ask many people the same question, it seems like their best strategy is just to look directly at the “truth” and report that. Perhaps because they find it too hard to coordinate, or because implicit threats are weak or ambiguous. 

The question of what is verifiable opens an important meta question: how can we verify claims of verifiability? For example, a totalitarian regime might well insist not only that everyone agree that the regime is fair and kind, a force for good, but that they agree that these facts are clear and verifiable. Most any community with a dogma may be tempted to claim not only that their dogma is true, but also that it is verifiable. This can allow such dogma to be the basis for settling contract disputes or other court rulings, such as re crimes of sedition or treason.

I don’t have a clear theory or hypothesis to offer here, but while this was in my head I wanted to highlight the importance of this topic, and its apparent openness to investigation. While I have no current plans to study this, it seems quite amenable to study now, at least by folks who understand enough of both game theory and a wide range of social phenomena.  

Added 3Dec: Here is a recent paper on how easy mechanisms get when info is verifiable.


A New Truth Mechanism

Early in 2017 I reported:

This week Nature published some empirical data on a surprising-popularity consensus mechanism. The idea is to ask people to pick from several options, and also to have each person forecast the distribution of opinion among others. … Compared to prediction markets, this mechanism doesn’t require that those who run the mechanism actually know the truth later. … The big problem … however, is that it requires that learning the truth be the cheapest way to coordinate opinion. …. I can see variations on [this method] being used much more widely to generate standard safe answers that people can adopt with less fear of seeming strange or ignorant. But those who actually want to find true answers even when such answers are contrarian, they will need something closer to prediction markets.

In a new mechanism by Yuqing Kong, N agents simultaneously and without communication give answers to T questions, each of which has C possible answers. The clues that agents have about each question can be arbitrarily correlated, and agents can have differing priors about that clue distribution. However, clues must be independently and identically distributed (IID) across questions. If T ≥ 2C and N ≥ 2, then in this new mechanism telling the “truth” (i.e., answer indicated by clue) is a dominant strategy, with a strictly higher payoff if anyone else also tells the truth!

This is a substantial advance over the prior literature, and I expect future mechanisms to weaken the IID across questions constraint. Alas, even so this seems to suffer from the same key problem of needing truth to be the cheapest way for respondents to coordinate answers. I expect this problem to be much harder to overcome.
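As a rough sketch of how a determinant-based payment in this general style can work (this is my simplified illustration of the flavor of the mechanism, not the paper's exact rule): tasks are split into two halves, the joint answers of two agents on each half are counted in a C × C matrix, and an agent's payment is the product of the two determinants.

```python
# Simplified determinant-style peer payment (illustration only).
def joint_counts(answers_i, answers_j, C):
    # C x C matrix of how often agent i answered a while agent j answered b.
    M = [[0] * C for _ in range(C)]
    for a, b in zip(answers_i, answers_j):
        M[a][b] += 1
    return M

def det(M):
    # Laplace expansion; fine for small C.
    if len(M) == 1:
        return M[0][0]
    total = 0
    for col in range(len(M)):
        minor = [row[:col] + row[col + 1:] for row in M[1:]]
        total += (-1) ** col * M[0][col] * det(minor)
    return total

def dmi_payment(answers_i, answers_j, C):
    half = len(answers_i) // 2  # T >= 2C lets each half span all C answers
    M1 = joint_counts(answers_i[:half], answers_j[:half], C)
    M2 = joint_counts(answers_i[half:], answers_j[half:], C)
    return det(M1) * det(M2)

# Two agents whose answers mostly agree (C = 2, T = 8):
i_ans = [0, 1, 0, 1, 0, 0, 1, 1]
j_ans = [0, 1, 0, 1, 0, 1, 1, 1]
payment = dmi_payment(i_ans, j_ans, C=2)  # positive when answers co-vary
```

One appealing feature visible even in this sketch: an agent who answers blindly, say always picking the same option, produces a singular count matrix with determinant zero, and so earns nothing.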

Of course if you add “truth speakers” as some of the agents, and wait for those speakers’ input before paying the other participants, you get something much closer to a prediction market.


Occam’s Policy Razor

Nine experiments provide support for promiscuous condemnation: the general tendency to assume that ambiguous actions are immoral. Both cognitive and functional arguments support the idea of promiscuous condemnation. (More)

The world is full of inefficient policies. But why, when many can simply and clearly explain why such policies are inefficient? The following concrete example suggests a simple explanation:

Logically, it doesn’t seem cruel to offer someone an extra option, if you don’t thereby change their other options. Two thirds of poll respondents agree re this prisoner case. However, 94% also think that the world media would roast any nation that did this, and they’d get away with it. And I agree with these poll respondents in both cases.

Most of the audience of that world media would not be paying close attention, and would not care greatly about truth. They would instead make a quick and shallow calculation: will many find this accusation innately plausible and incendiary enough to stick, and would I like that? If the answer is yes, they add their pitchforks to the mob. That’s the sort of thing I’ve seen with internet mobs lately, and also with prior media mobs.

As most of the world is eager to call the United States an evil empire driven by evil intent, any concrete U.S. support for torture might plausibly be taken as evidence for such evil intent, at least to observers who aren’t paying much attention. So even those who know that in such cases allowing torture can be better policy would avoid supporting it. Add in large U.S. mobs who are also not paying attention, and who might like to accuse U.S. powers of ill intent, and we get our situation where almost no one is willing to seriously suggest that we offer torture substitutes for prison. Even though that would help.

Similar theories can explain many other inefficient policies, such as laws against prostitution, gambling, and recreational drugs. We might know that such policies are ineffective and harmful, and yet not be able to bring ourselves to publicly support ending such bans, for fear of being accused of bad intent. This account might even explain policies to punish the rich, big business, and foreigners. The more that contrary policies could be spun to distracted observers as showing evil intent, the more likely such inefficient policies are to be adopted.

Is there any solution? Consider the example of Congress creating a commission to recommend which U.S. military bases to close, where afterward Congress could only approve or reject the whole list, without making changes. While bills to close individual bases would have been met with fierce difficult-to-overcome opposition, this way to package base closings into a bundle allowed Congress to actually close many inefficient bases.

Also consider how a nation can resist international pressure to imprison one disliked person, or to censor one disliked book. In the first case the nation may plead “we follow a general rule of law, and our law has not yet convicted this person”, while in the second case the nation may plead “We have adopted a general policy of free speech, which limits our ability to ban individual books.”

I see a pattern here: simpler policy spaces, with fewer degrees of freedom, are safer from bias, corruption, special-pleading, and selfish lobbying. A political system choosing from a smaller space of possible policies that will then apply to a large range of situations seems to make more efficient choices.

Think of this as Occam’s Policy Razor. In science, Occam’s Theory Razor says to pick the simplest theory that can fit the data. Doing this can help fractious scientific communities to avoid bias and favoritism in theory choice. Similarly, Occam’s Policy Razor says to limit policy choices to the smallest space of policies which can address the key problems for which policies are needed. More complexity to address complex situation details is mostly not worth the risk. This policy razor may help fractious political communities to avoid bias and favoritism in policy choice.

Yes, I haven’t formalized this much, and this is still a pretty sloppy analysis. And yes, there are in fact many strong criticisms of Occam’s Razor in science. Even so, it feels like there may be something to this. And futarchy seems to me a good example of this principle. In a futarchy with a simple value function based on basic outcomes like population, health, and wealth, voting on values but betting on beliefs would probably mostly legalize things like prostitution, gambling, recreational drugs, immigration, and big business. It would probably even let prisoners pick torture.

Today we resist world mob disapproval regarding particular people we don’t jail, or particular books we don’t ban, by saying “Look we have worked out general systems to deal with such things, and it isn’t safe for us to give some folks discretion to make exceptions just because a mob somewhere yells”. Under futarchy, we might similarly resist world disapproval of our prostitution, etc. legalization by saying:

Look, we have chosen a simple general system to deal with such things, and we can’t trust giving folks discretion to make policy exceptions just because talking heads somewhere scowl. So far our system hasn’t banned those things, and if you don’t like that outcome then participate in our simple general system, to see if you can get your desired changes by working through channels.

By limiting ourselves to simple general choices, we might also tend to make more efficient choices, to our overall benefit.


Prestige Blocks Reform

At several recent conferences, I suggested to the organizers that I talk about social institution innovation, but they preferred I talk about my tech related work (or not talk at all). At those events they did have other people talk about social reforms and innovations, and all those speakers were relatively high status people with a background in “hard” sciences (e.g., physics or computer science). And to my eyes, their suggestions and analysis were amateurish.

Curious about this pattern, I did these Twitter polls:

So while more of us would rather hear about social analysis from a social expert, more of us would rather hear about social reform proposals from prestigious hard scientists. This makes sense if we see reform as a social coordination game: if we only want to support reforms that we expect to be supported by many high status folks, we need a high status advocate to be our focal point to get the ball rolling.

Alas, since hard scientists tend to know little social science and to think little of social scientists, the reforms they suggest tend to be low quality, at least by social scientist standards. Furthermore, since prestige-driven social systems have done well for them personally, and are said to do well in running their hard science world, they will tend to promote such systems as reforms. Alas, I think replacing such systems should be one of our main social reform priorities.


Firms & Cities Have Open Borders

Cities usually don’t much limit who can move there. If you can find someone in a city to give you a job, to rent you an apartment, to sell you food and other stuff, and to be your friends, etc. and if you can pay for your move, then you can move to that city from anywhere in a much larger region. Of course individual employers, landlords, and stores are mostly free to reject you, but the city doesn’t add much in the way of additional requirements. Same for other units smaller than a nation, such as counties and states.

Large firms also don’t usually much limit who can work there. Oh, each particular small work group is usually particular about who works there, but the larger firm will mostly defer to local decisions about hires. Yes, if the larger firm has made commitments to trying never to fire anyone, but to always find someone another place in the firm when they are no longer wanted in any one part, then that larger firm may put more limits on who and how many folks can be hired by any one small group. But when the larger firm has few obligations to local workers, then local groups are also mostly free to hire who they want.

The obvious analogies between cities, firms and nations make it somewhat puzzling that nations are much more eager to limit who can enter them. The analogy is strongest when those who enter nations can only do so in practice if they can find local employers, landlords, suppliers, friends, etc. willing to deal with them. And when the nation assigns itself few obligations to anyone who happens to live there.

Yes, in principle there can be externalities whereby the people who enter one part of a nation affect the enjoyment and productivity of people in other parts of a nation. But those same sorts of effects should also appear within parts of a city or of a firm. So why don’t cities and firms work harder to limit local choices of who can enter them?

You might claim that cities and firms don’t need to attend to limiting entry because nations already do it for them. But most nations already have a lot of internal variation; why is none of that variation of interest to cities and firms, when the variation between nations would supposedly be of huge interest if nations stopped handling it? And within the nations that hold all those bad people you think good nations are focused on excluding, firms and cities don’t have firm- or city-wide exclusion policies either.

Furthermore, many multinational firms today already have employees who are spread across a great many nations, nations that vary a lot in wealth, development, etc. Yet such multinationals usually don’t have much in the way of centralized limits on who can be hired by their divisions in different nations, nor on who can be transferred between such divisions. These firms may face limits imposed by nations, but they seem to mostly lament such limits, and aren’t eager to add more.

Firms and cities live in more competitive environments than do nations. So we should expect their behavior to be shaped more by competitive pressures. Thus we can conclude that competitive entities tend not to create entity-wide limits on who can enter them; they are mostly content to let smaller parts of them make those decisions.

So if nations act differently from firms and cities, that should be because either:
1) there are big important effects that are quite different at the national level, than at firm and city levels, or
2) nations are failing to adopt policies that competition would induce, if they faced more competition.

My bet is on the latter. In that case, the key question is: is there a market failure that makes the entry policies that competition pushes lamentable? If not, we should lament that competition isn’t inducing more free entry into nations. That is, we should lament that competition isn’t inducing national open borders, like we mostly have for cities and firms.
