
The One Ruler Obsession

I often teach undergraduate law & economics. Sometimes the first paper I assign asks students to suggest property rules to deal with conflicts regarding asteroids, orbits, and sunlight in the solar system, in a future when there’s substantial activity out there. This feels to students like a complex and unfamiliar situation, and in fact few of them understand even the basic issues.

Given just two pages to make their case, a large fraction of students (~1/3?) express fear that one person or organization will take over the entire solar system, unless property rules are designed to explicitly prevent that. And a similar fraction suggest the “property rule” of having a single government agency answer all questions. Whatever question or dispute you have, fill out a form, and the agency will decide.

Yet in my lectures I talk a lot about concepts and issues of property rights, but never mention government agency issues or scenarios, nor the scenario of one power taking over everything. And econ undergrads at my school are famous for being relatively libertarian.

I conclude that most people have a strong innate fear of power concentrations, and yet also see the creation of a single central power as an attractive general solution to complicated problems. I’ve seen the same sort of thing with a great many futuristic tech and policy issues. Whatever the question, if it seems complicated, most people are concerned about inequality, especially that it might be taken to the max, and yet they also like the idea of creating a central government-like power to deal with it.

I’ve certainly seen this in concerns about future rampaging robots (= “AI risk”). Many, perhaps most, people express concerns that one AI could take over everything, and many also like the “solution” of one good AI taking over everything.

I recently came across similar reasoning by Friedrich Engels back in 1844, in his Outlines of a Critique of Political Economy. Having seen the early industrial revolution, not understanding it well, but fearing where it might lead, Engels claims that the natural outcome is extreme concentration of power. And his solution is to create a different central power (e.g., communism). Of course while there was some increase in inequality and concentration, it wasn’t remotely as bad as Engels feared, except where his words inspired the creation of such concentration. Here is Engels:

Thus, competition sets capital against capital, labour against labour, landed property against landed property; and likewise each of these elements against the other two. In the struggle the stronger wins; and in order to predict the outcome of the struggle, we shall have to investigate the strength of the contestants. First of all, labour is weaker than either landed property or capital, for the worker must work to live, whilst the landowner can live on his rent, and the capitalist on his interest, or, if the need arises, on his capital or on capitalised property in land. The result is that only the very barest necessities, the mere means of subsistence, fall to the lot of labour; whilst the largest part of the products is shared between capital and landed property. Moreover, the stronger worker drives the weaker out of the market, just as larger capital drives out smaller capital, and larger landed property drives out smaller landed property. Practice confirms this conclusion. The advantages which the larger manufacturer and merchant enjoy over the smaller, and the big landowner over the owner of a single acre, are well known. The result is that already under ordinary conditions, in accordance with the law of the stronger, large capital and large landed property swallow small capital and small landed property – i.e., centralisation of property. In crises of trade and agriculture, this centralisation proceeds much more rapidly.

In general large property increases much more rapidly than small property, since a much smaller portion is deducted from its proceeds as property-expenses. This law of the centralisation of private property is as immanent in private property as all the others. The middle classes must increasingly disappear until the world is divided into millionaires and paupers, into large landowners and poor farm labourers. All the laws, all the dividing of landed property, all the possible splitting-up of capital, are of no avail: this result must and will come, unless it is anticipated by a total transformation of social conditions, a fusion of opposed interests, an abolition of private property.

Free competition, the keyword of our present-day economists, is an impossibility. Monopoly at least intended to protect the consumer against fraud, even if it could not in fact do so. The abolition of monopoly, however, opens the door wide to fraud. You say that competition carries with it the remedy for fraud, since no one will buy bad articles. But that means that everyone has to be an expert in every article, which is impossible. Hence the necessity for monopoly, which many articles in fact reveal. Pharmacies, etc., must have a monopoly. And the most important article – money – requires a monopoly most of all. Whenever the circulating medium has ceased to be a state monopoly it has invariably produced a trade crisis; and the English economists, Dr. Wade among them, do concede in this case the necessity for monopoly. But monopoly is no protection against counterfeit money. One can take one’s stand on either side of the question: the one is as difficult as the other. Monopoly produces free competition, and the latter, in turn, produces monopoly. Therefore both must fall, and these difficulties must be resolved through the transcendence of the principle which gives rise to them. (more)


Authentic Signals

Many people (including me) claim that we eat food and drink water because without nutrition and fluids we would starve and dehydrate. Imagine this response:

No, people eat food because they are hungry, and drink water because they are thirsty. We don’t need abstract concepts like nutrition and dehydration to explain something so elemental as following our authentic feelings and desires.

Yes hunger and thirst are direct proximate causes of eating and drinking. But we are often interested in finding more distal explanations of such proximate causes. So almost no one objects to the nutrition and dehydration explanations of eating and drinking.

However, one of the most common criticisms I get about signaling explanations of human behavior is that we are instead just following authentic feelings and desires. As in this exchange:

Yes, people don’t need to consciously force themselves to express opinions on many topics. That habit comes quite naturally. Even so, we might want to explain that habit in terms of more basic distal forces.

I’m an economics professor, and the vast majority of economic papers and books that offer explanations for human behaviors don’t bother to distinguish if their explanations are mediated by conscious intentions or not. (In fact, most papers on any topic don’t take a stance on most possible distinctions related to their topic.) Economists are in fact famously wary (too wary I’d say) of survey data, as they fear conscious thoughts can mislead about economic behaviors.

Yet I’ve had even economics colleagues tell me that I should take more care, when I point out possible signaling explanations, to say if I am claiming that such signaling effects are consciously intended. But why would it be more important to distinguish conscious intentions in this context, compared to the rest of economics and social science?

My best guess is that what is going on here is that our social norms disapprove mildly of consciously intended signaling. Just as we aren’t supposed to brag, we also aren’t supposed to do things on purpose to make ourselves look good. It is okay to look good, but only as a side effect of doing things for other reasons. And as we usually claim other reasons for these behaviors, if we are actually doing them for signaling reasons we could also be accused of lying, which is also a norm violation.

Thus many see my signaling explanation proposals as accusing them personally of norm violations. At which point, they become vastly more interested in defending themselves against this accusation than in evaluating my general claims about human behavior. Perhaps if I were a higher status professor publishing in a prestigious journal, they might be reluctant to publicly challenge my claimed focus on distal explanations of general behavior patterns. But for mere tweets or blog posts by someone like me, they feel quite entitled to read me as accusing them of being bad people, unless I explicitly say otherwise. (And perhaps even then.) Sigh.

For the record, the degree of conscious intent of any behavior is a mildly interesting facet, but I’m less interested in it than are most people. This is in part because I’m inclined to give people less of a moral or legal pass on the harms resulting from behaviors if people do not consciously intend such consequences. It is just too easy for people to not notice such consequences, when they find it in their interest to not notice.


Villain Markets

Imagine that you have a large pool of cases, where in each case you weakly suspect some sort of villainous stink. But you have a limited investigative resource, which you can only apply to one case, to sniff for stink there.

For example, you might have one reporter, who you could assign for one month to investigate the finances of any one member of Congress. Or you might have one undercover actor, whom you could assign to offer a bribe to one member of the police force of a particular city. Or you might assign a pretty actress to meet with a Hollywood producer, to check for harassment.

Imagine further that you are willing to invite the world to weigh in, to advise you on where to apply your investigative resource. You are willing to say, “Hey world, which of these cases looks stinky to you?” If this is you, then I offer you villain markets.

In a villain market, some investigative resource will be applied at random to one case out of a set of cases. It will report back a verdict, which in the simplest case will be “stinky” or “not stinky”. And before that case is selected for investigation, we will invite everyone to bet anonymously on the chances of stinkiness in each case. That is, anyone can bet on the probability that the verdict of case C will be found stinky, given that case C is selected for investigation. So if you have reason to suspect a particular member of Congress, a particular police officer, or a particular Hollywood producer, you might expect to gain by anonymously betting against them.

Imagine that we were sure to investigate case C87, and that the market chance of C87 being found stinky was 2%, but that you believed C87’s stinkiness chances were more like 5%. In this situation, you might expect to profit from paying $3 for the asset “Pays $100 if C87 found stinky”. After your bet, the new market chance might be 4%, reflecting the information you had provided the market via your bet.

Now since we are not sure to investigate case C87, what you’d really do is give up “Pays $3 if C87 investigated” for “Pays $100 if C87 investigated and found stinky.” And you could obtain the asset “Pays $3 if C87 investigated” by paying $3 cash and getting a version of this “Pays $3 if C investigated” investigation asset for every possible case C.
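
To make the conditional-bet arithmetic concrete, here is a minimal sketch in Python, using the illustrative numbers above; the helper function names are my own, not part of any proposed market design.

```python
# Illustrative arithmetic for the bets described above (numbers from the text).

def expected_profit(stake, payout, believed_prob):
    """Expected profit of paying `stake` for an asset that pays `payout`
    if the event occurs, given your believed probability of the event."""
    return believed_prob * payout - stake

# The market prices C87's stinkiness at 2%, so the $100-if-stinky asset
# costs about $2. If you believe the true chance is 5%, paying $3 still
# profits in expectation: 0.05 * 100 - 3 = $2.
profit = expected_profit(stake=3.0, payout=100.0, believed_prob=0.05)

# The conditional version: both the $3 stake and the $100 payout apply
# only if C87 is investigated, so the expected profit is simply scaled
# by that chance, leaving the sign (and hence your incentive) unchanged.
def conditional_expected_profit(stake, payout, believed_prob, p_investigated):
    return p_investigated * (believed_prob * payout - stake)
```

Note that because the investigation chance scales stake and payout alike, traders can evaluate each bet as if the case were certain to be investigated.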

So you could reuse the same $3 to weigh in on the chances of stinkiness in every possible case from the set of possible cases. And not only could you bet for and against particular cases, but you could bet on whole categories of cases. For example, you might bet on the average stinkiness of men, or people older than 60, or people born in Virginia.

To get people to bet on all possible cases C, there needs to be at least some chance of picking every case C in the set of possible cases. But these choice chances do not need to be equal, and they can even depend on the market prices. The random process that picks a case to investigate could set the choice chance to be a strongly increasing function of the market stinkiness chance of each case. As a result, the overall chance of the investigation finding stink could be far above the average market chance across the cases C, and it might even be close to the maximum stinkiness chance.
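
Here is one way (my own illustrative construction, not a detail of the proposal) such a price-dependent selection rule could look, with each case's pick-chance a strongly increasing function of its market stinkiness chance:

```python
def selection_chances(stink_probs, sharpness=4.0):
    """Pick-chance for each case, proportional to stink_prob ** sharpness.
    Every case keeps a nonzero chance, as required for betting incentives."""
    weights = [p ** sharpness for p in stink_probs]
    total = sum(weights)
    return [w / total for w in weights]

def overall_stink_chance(stink_probs, sharpness=4.0):
    """Chance the single investigation finds stink, under those pick-chances."""
    chances = selection_chances(stink_probs, sharpness)
    return sum(c * p for c, p in zip(chances, stink_probs))

probs = [0.01, 0.02, 0.05, 0.30]
# Uniform selection would find stink with the plain average chance, 0.095;
# with sharpness=4 selection concentrates on the 0.30 case, so the overall
# find-stink chance rises toward the maximum of 0.30.
overall = overall_stink_chance(probs)
```

The `sharpness` parameter is a free design choice: higher values push the overall find-stink chance closer to the maximum market chance, at the cost of rarely checking low-priced cases.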

So far I’ve described a simple version of villain markets, but many variations are possible. For example, the investigation verdict might choose from several possible levels of stink or villainy. If the investigation could look at several possible areas A, but would have to choose one area from the start, then we might have markets trading assets like “Pays $100 if stink found, and area A of case C is investigated.” The markets would now estimate a chance of stink for each area and case combination, and the random process for choosing cases and areas could depend on the market stinkiness chance of each such combination.

Imagine that a continuing investigative resource were available. For example, a reporter could be assigned each month to a new case and area. A new set of markets could be started again each month over the same set of cases. If an automated market maker were set up to encourage trading in these markets, it could be started each month at the chances in the previous month’s markets just before the randomization was announced.

Once some villain markets had been demonstrated to give well-calibrated market chances, other official bodies who investigate villainy might rightly feel some pressure to take the market stinkiness chances into account when choosing what cases to investigate. Eventually villain markets might become our standard method for allocating investigation resources for uncovering stinking villainy. Which might just make for a much less stinky world.


More Prediction Market Criticism

Back in August I commented on a paper by Mike Thicke that criticized prediction markets:

With each of his reasons, Thicke compares prediction markets to some ideal of perfection, instead of to the actual current institutions it is intended to supplement.

Now Saana Jukola and Henrik Roeland Visser weigh in:

We largely agree on the worry about inaccuracy. .. An alternative worry, which Thicke does not elaborate on, is the fact that peer review .. is also valued for its deliberative nature, which allows it to provide reasons to those affected by the decisions made in research funding or the use of scientific knowledge in politics. .. By pointing out defects and weaknesses in manuscripts or proposals, and by suggesting new ways of approaching the phenomena of interest, peer reviewers are expected to help authors improve the quality of their work. .. peer review .. guards against the biases and blind spots that individual researchers may have. .. Criticism of evidence, methods and reasoning is essential to science, and necessary for arriving at trustworthy results. ..

The severity of the potential obstacles that Thicke and we identify depends on whether science prediction markets would replace traditional methods such as peer review, or would rather serve as addition or even complement to traditional methods. .. Prediction markets do not provide reasons in the way that peer review does, and if the only information that is available are probabilistic predictions, something essential to science is lost. ..

As someone who has often experienced the business end of peer review, I can assure you that peer review is far from the most useful channel of criticism for scientists today. And I know of no one who proposes forbidding scientists to talk with or criticize each other! Such talk and criticism was common long before peer review became common in science, and if allowed it should remain common. (Peer review only became common in the last century.) Even in the extreme case (which I have not advocated) where prediction markets were our only channel of research funding, and our only source of scientific consensus, scientists could still talk with and criticize each other.

Jukola and Visser cite my blog post on how markets might pick a best qualitative explanation, but complain:

We could also imagine that there are cases in which science prediction markets are used to select the right answer or at least narrow down the range of alternatives, after which a qualitative report is produced which provides a justification of the chosen answer(s). Perhaps it is possible to infer from trading behavior which investors possess the most reliable information, a possibility explored by Hanson. Contrary to Hanson, we are skeptical of the viability of this strategy. Firstly, the problem of the underdetermination of theory by data suggests that different competing justifications might be compatible with the observed trading behavior. Secondly, such justifications would be post-hoc rationalizations, which sound plausible but might lack power to discriminate among alternative predictions.

Again with comparing an alternative to perfection, and ignoring how existing institutions can also fail such a perfection standard. The underdetermination of theory by data, and a temptation toward post-hoc rationalization, can exist in all other institutions one might use to elicit explanations. Jukola and Visser make no attempt to argue that prediction markets do worse by such criteria.


How Big Future Change?

The world has seen a lot of very big changes over the last few centuries. Many of these changes seem so large, in fact, that it is hard to see how changes over the next few centuries could be remotely as large. For example, many “big swing” parameters have moved from one extreme to the other, changing by more than half of the total range possible for that parameter. So the only way future changes could be as large in such a parameter is if it completely reversed direction to move back to the opposite extreme.

For example, once only a small percentage of people lived in cities; now more than half do. Once only a few nations were democratic, now more than half are. Once many people were slaves, now there are very few slaves. Once people worked nearly as many hours a week as possible, now they work less than half of their waking hours. Once nations were frequently at war, now war is rare. Once lifespans were near 30 years, now they are near 80, and some say 120 is the max possible. Once few people could read, now most can. Once genders and races were treated quite unequally, now treatment is more equal than unequal. Once engines and solar cells had low efficiency, now efficiency is half or more of the theoretical maximum. And so on.

If these big-swing parameters encompassed most of what we cared about in change, and if it is in fact implausible for such parameters to reverse back to their opposite extremes, then the conclusion seems inescapable: future change must be less than past change.

But pause to ask: how sure can we be that these big-swing parameters encompass a large fraction of what matters within what can change? And notice a big selection effect: even when rates of change are constant overall, the particular parameters that happened to change the most in the recent past will in general not be the ones that change the most in the near future. So for those parameters that recently changed the most, future change will be less, even though overall rates of change stay steady. Maybe we spend so much time focusing on the parameters that have recently changed most, that we forget how many other parameters remain available to change in the future.
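
A toy simulation (my own construction, not from the post) illustrates this selection effect: even when every parameter's change is drawn from the same steady process in each period, the parameters that changed most in one period show much smaller changes in the next, while matching the overall average.

```python
import random

# Toy model: 1000 parameters, each period's change drawn independently
# from the same steady process (|N(0,1)|), so overall rates of change
# are constant across periods by construction.
random.seed(0)
n_params = 1000
past = [abs(random.gauss(0, 1)) for _ in range(n_params)]
future = [abs(random.gauss(0, 1)) for _ in range(n_params)]

# The 5% of parameters that changed most in the past period.
cutoff = sorted(past, reverse=True)[n_params // 20]
big = [i for i in range(n_params) if past[i] > cutoff]

avg_past_big = sum(past[i] for i in big) / len(big)
avg_future_big = sum(future[i] for i in big) / len(big)
avg_future_all = sum(future) / n_params
# Past big movers change far less in the future than they did before
# (avg_future_big is well below avg_past_big), and about the same as the
# average parameter, even though total change stays steady.
```

This is only a regression-to-the-mean illustration under an assumed iid process; real parameters need not satisfy that assumption.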

My book Age of Em might be taken as a demonstration that big future change remains possible. And we might also test this selection effect via a historical analysis. We might, for example, look at parameters that changed the most from the year 500 to the year 1000, at least as people in the year 1000 would have seen them, and then ask if those particular parameters changed more or less during the period from 1000 to 1500. Repeat for many different times and places.


All Pay Liability

We could raise government revenue much more efficiently than we now do, with less damage to the economy for any given amount of revenue raised. For example, we could tax fixed characteristics like height instead of income, we could tax traffic congestion a lot more, and we could do better at taxing pollution, including carbon. Recently I posted on a more efficient system of property taxes, that allows more revenue to be raised at a lower cost. Today, I’ll post on a more efficient system of accident liability, which similarly raises more revenue at a lower cost.

Some don’t want me to talk about these things. They hope to “starve the beast” by drying up government revenue sources. That seems to me a lost cause, the sort of logic that pushed radicals toward generic destruction, hoping that eventually the masses would get fed up and revolt. I instead expect a better world overall if governments adopt more efficient policies, including more efficient tax policies.

Regarding accident liability, we want a system that will encourage good levels of care and activity by all who can influence accident rates. For example, regarding car accidents we want drivers to pick good car models, speeds, sleep, and maintenance frequencies. We also want them to take into account the possibility of hurting others via accidents when they choose how often they drive. In addition, we want a system that induces fewer actual court cases, which are expensive, and that asks courts to make fewer judgements, in which they might err.

The simplest system is no liability. Courts just don’t get involved. This has the lowest possible rate of court cases, namely zero. It creates good incentives for accident victims to set their care and activity levels well, but gives rather poor incentives for others to set such things well.

The next simplest system is strict liability. This induces good care and activity by potential injurers, but not from potential victims. It also induces a high rate of court cases; nearly every accident results in a lawsuit. While the parties might settle out of court, if a case goes to trial the court must determine responsibility, i.e., who caused the accident, and how much damages the victim suffered as a result.

Relative to strict liability, systems of negligence cut the rate of court cases, but at the cost of asking courts to make more judgements. As with strict liability, courts must judge who is responsible and victim damage levels. But in addition, courts must also ask whether the injurer took enough care to prevent the accident. For each visible parameter, the courts must judge both the actual level of care taken and the optimal level of care. If the injurer took enough care overall, that injurer does not owe damages. And if that no-damages situation is the usual case, there are fewer court cases, as there are fewer lawsuits.

In practice, however, courts can only look at a small number of injurer choice parameters visible enough to them, such as driving speed. Far more parameters, including all injurer activity level parameters, remain invisible, and so are not considered. Negligence doesn’t create good incentives to set all those less visible parameters.

There are standard variations on these systems, such as allowing contributory negligence on the part of the victim. But all of these systems fail to induce optimal levels of care and activity in someone. We have long known, however, of a simple system that gets pretty much all of these things right, and in addition only asks courts to judge who is responsible for an accident and victim damage levels. (I didn’t invent this system; it is mentioned in many law & econ texts.) In this simple system, courts do not need to consider anyone’s actual or ideal levels of care or activity.

This simple system is to make all responsible parties pay the damage levels of all other parties hurt by the accident. The trick is that they pay all of these amounts to the government, instead of to each other. As each party now internalizes all of the damage suffered by all of the parties, they should choose all their private care and activity levels well. And the government gets more revenue to boot.
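
As a sketch, the all-pay rule is simple to state in code; the party names and damage amounts below are illustrative, not from the post:

```python
# All-pay liability: each responsible party pays the government the
# damages suffered by all *other* parties in the accident.

def all_pay_payments(damages):
    """damages: dict of party -> harm that party suffered in the accident.
    Returns dict of party -> amount that party owes to the government."""
    total = sum(damages.values())
    return {party: total - own for party, own in damages.items()}

# Two-car crash: driver A suffers $10k of damage, driver B suffers $4k.
payments = all_pay_payments({"A": 10_000, "B": 4_000})
# A pays $4k (B's harm) and B pays $10k (A's harm), so each party
# internalizes all harm done to others, and the government collects $14k.
```

Because each party's payment equals the total harm to everyone else, each has the right marginal incentive over every private care and activity choice, which is the key property claimed above.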

The big problem with this all-pay liability system is that none of these responsible parties, including the victims, want to report the accident to the government. They’d all rather pretend it didn’t happen. So the government needs some other way to find out about accidents. In dense areas where the government already has access to mass surveillance systems, it can just use those systems. In other areas, governments might offer bounties to third parties who report accidents, and put strong penalties on those who fail to report their own accidents. Or the system might revert to other liability rules in contexts where governments might otherwise detect accidents too infrequently.

With all-pay liability, we expect a lawsuit for every accident. But in that suit the courts only need to judge who is responsible and victim damage levels. No other judgements need be made. So if we could find simple streamlined ways to make these judgements, this system might not be that expensive to administer. And then we’d have both better accident prevention and more available government revenue.

(Yes, people might want to buy insurance against the risk of making these payments. Yes, if multiple parties could coordinate to prevent accidents together, this system might induce them to spend too much on prevention. Hopefully we could identify such efforts and treat them differently.)


Hypocrisy As Key To Class

Two examples of how a key to achieving higher social class is to learn the right kinds of hypocrisy:

Working-class students are more likely to enter college with the notion that the purpose of higher education is learning in the classroom and invest their time and energies accordingly. … This type of academically focused script clashes with the “party” and social cultures of many US colleges. It isolates working and lower middle-class students from peer networks that can provide them with valuable information about how to navigate the social landscape of college as well as future job opportunities. The resulting feelings of isolation and alienation adversely affect these students’ grades, levels of happiness, and likelihood of graduation. … [This] also adversely affects their job prospects. (p.13 Pedigree: How Elite Students Get Elite Jobs)

“There is this automatic assumption in any legal environment that Asians will have a particular talent for bitter labor. … There was this weird self-selection where the Asians would migrate toward the most brutal part of the labor.” By contrast, the white lawyers he encountered had a knack for portraying themselves as above all that. “White people have this instinct that is really important: to give off the impression that they’re only going to do the really important work. You’re a quarterback. It’s a kind of arrogance that Asians are trained not to have.

Someone told me not long after I moved to New York that in order to succeed, you have to understand which rules you’re supposed to break. If you break the wrong rules, you’re finished. And so the easiest thing to do is follow all the rules. But then you consign yourself to a lower status. The real trick is understanding what rules are not meant for you.” This idea of a kind of rule-governed rule-breaking—where the rule book was unwritten but passed along in an innate cultural sense—is perhaps the best explanation I have heard of how the Bamboo Ceiling functions in practice. (more)


Yay Stability Rents

Six years ago I posted on the idea of using combinatorial auctions as a substitute for zoning. Since then, news on how badly zoning has been messing up our economy has only gotten worse. I included the zoning combo auction idea in my book The Age of Em, I’ve continued to think about the idea, and last week I talked about it to several LA-based experts in combinatorial auctions.

I’ve been pondering one key design problem, and the solution I’ve been playing with is similar to a solution that also seems to help with patents. I asked Alex Tabarrok, whose office is next door, if he knew of any general discussion of such things, and he pointed me to a long (110 page) 2016 paper called “Property is another name for monopoly” by Eric Posner and Glen Weyl. (See also this technical paper.) And that turned out to be a relatively general argument for using the specific mechanism that I was considering using in zoning combo auctions, get this, as a new standard kind of property right for most everything! Looking for web discussion, I find a few critical responses, and one excellent 2014 Interfluidity post on the basic idea. In this post I’ll go over the basic idea and some of its issues, including two that Posner and Weyl didn’t consider. Continue reading "Yay Stability Rents" »


Markets That Explain, Via Markets To Pick A Best

I recently heard someone say “A disadvantage of prediction markets is that they don’t explain their estimates.” I responded: “But why couldn’t they?” That feature may cost you more, and it hasn’t been explored much in research or development. But I can see how to do it; in this post, I’ll outline a concept.

Previously, I’ve worked on a type of combinatorial prediction market built on a Bayes-Net structure. And there are standard ways to use such a structure to “explain” the estimates of any one variable in terms of the estimates of other variables. So obviously one could just apply those methods directly to get explanations for particular estimates in Bayes-Net based prediction markets. But I suspect that many would see such explanations as inadequate.

Here I’m instead going to try to solve the explanation problem by solving a more general problem: how to cheaply get a single good thing, if you have access to many people willing to submit and evaluate distinguishable things, and you have access to at least one possibly expensive judge who can rank these things. With access to this general pick-a-best mechanism, you can just ask people to submit explanations of a market estimate, and then deliver a single best explanation that you expect to be rated highly by your judge.

In more detail, you need five things:

  1. a prize Z you can pay to whomever submits the winning item,
  2. a community of people willing to submit candidate items to be evaluated for this prize, and to post bonds in the amount B supporting their submissions,
  3. an expensive (cost J) and trustworthy “gold standard” judge who has an error-prone tendency to pick the “better” item out of two items submitted,
  4. a community of people who think that they can guess on average how the judge will rate items, with some of these people being right about this belief, and
  5. a costly (amount B) and only mildly error-prone way to decide if one submission is overly derivative of another.

With these five things, you can get a pretty good thing if you pay Z+J. The more Z you offer, the better will be your good thing. Here is the procedure. First, anyone in a large community may submit candidates c, if they post a bond B for each submission. Each candidate c is publicly posted as it becomes available.

A prediction market is open on all candidates submitted so far, with assets of the form “Pays $1 if c wins.” We somehow define prices p_c for such assets which satisfy 1 = p_Y + Σ_c p_c, where p_Y is the price of the asset “The winner is not yet submitted.” Submissions are not accepted after some deadline, and at that point I recommend the candidate c with the highest price p_c; that will be a good choice. But to make it a good choice, the procedure has to continue.
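
As a sketch of this price bookkeeping (my own illustrative code, with made-up candidate names and prices), renormalizing away p_Y and recommending the highest-priced candidate looks like:

```python
# Candidate prices p_c; the remaining mass, p_Y = 0.10 here, is the price
# of "the winner is not yet submitted", so all prices sum to 1.

def renormalize(candidate_prices):
    """Drop p_Y and rescale the candidate prices to sum to 1, giving the
    chances used when drawing candidates to show the judge."""
    total = sum(candidate_prices.values())
    return {c: p / total for c, p in candidate_prices.items()}

prices = {"c1": 0.30, "c2": 0.45, "c3": 0.15}
chances = renormalize(prices)

# The recommended candidate at the deadline: the one with the highest price.
best = max(prices, key=prices.get)
```

Here `best` is "c2", and its renormalized chance of being drawn for judging is 0.45 / 0.90 = 0.5.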

A time is chosen randomly from a final time window (such as a final day) after the deadline. We use the market prices pc at that random time to pick a pair of candidates to show the judge. We draw twice randomly (with replacement) using the price pc as the random chance of picking each c. The judge then picks a single tentative winning candidate w out of this pair.
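Here is one way that weighted pair draw could look, again with hypothetical prices; Python’s random.choices renormalizes the weights (which also handles a nonzero pY) and samples with replacement, which is just what this step needs:

```python
import random

# Sketch of the judging draw (illustrative names and prices): pick two
# candidates with replacement, each chosen with probability
# proportional to its market price pc at the randomly chosen time.
prices = {"c1": 0.15, "c2": 0.40, "c3": 0.20}  # hypothetical prices
candidates = list(prices)
weights = [prices[c] for c in candidates]  # renormalized automatically

# Two independent draws, so the same candidate can appear twice.
pair = random.choices(candidates, weights=weights, k=2)
print(pair)  # e.g. ['c2', 'c1']
```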

Anyone who submitted a candidate before w can challenge it within a limited challenge time window, claiming that the tentative winner w is overly derivative of their earlier submission e. An amount B is then spent to judge if w is derivative of e. If w is not judged derivative, then the challenger forfeits their bond B, and w remains the tentative winner. If w is judged derivative, then the tentative winner forfeits their bond B, and the challenger becomes a new tentative winner. We need potential challengers to expect a less than B/Z chance of a mistaken judgement regarding something being derivative.
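One way to read that B/Z condition: a frivolous challenge wins the prize Z only when the derivative judgement errs, and forfeits the bond B otherwise, so its expected value is negative exactly when the error chance is below B/(Z+B), which is roughly B/Z when Z is much larger than B. A toy calculation (my arithmetic, with made-up numbers):

```python
# Expected value of a *frivolous* challenge (my reading of the B/Z
# condition, with hypothetical numbers): the challenger wins prize Z
# only if the derivative judgement errs (probability eps), and
# forfeits bond B otherwise.
def frivolous_challenge_ev(Z, B, eps):
    return eps * Z - (1 - eps) * B

Z, B = 10_000.0, 500.0  # B/Z = 0.05; exact break-even is B/(Z+B) ~ 0.048
print(frivolous_challenge_ev(Z, B, eps=0.01))  # negative: challenge deterred
print(frivolous_challenge_ev(Z, B, eps=0.10))  # positive: bond too small to deter
```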

Once all challenges are resolved, the tentative winner becomes the official winner, the person who submitted it is given a large prize Z, and prediction market betting assets are paid off. The end.

This process can easily be generalized in many ways. There could be more than one judge, each judge could be given more than two items to rank, the prediction markets could be subsidized, the chances of picking candidates c to show judges might be non-linear in market prices pc, and when setting such chances prices could be averaged over a time period. If pY is not zero when choosing candidates to evaluate, the prices pc could be renormalized. We might add prediction markets in whether any given challenge would be successful, and allow submissions to be withdrawn before a formal challenge is made.

Now I haven’t proven a theorem that this all works well, but I’m pretty sure that it does. By offering a prize for submissions, and allowing bets on which submissions will win, you need only make one expensive judgement between a pair of items, and have access to an expensive way to decide if one submission is overly derivative of another.

I suspect this mechanism may be superior to many of the systems we now use to choose winners. Many existing systems frequently invoke low-quality judges, instead of less frequently invoking higher-quality judges. I suspect that market estimates of high-quality judgements may often be better than direct applications of low-quality judgements.

A Wonk’s First Question

Imagine you are considering a career as a paid policy wonk. You wonder which policy area to work in, and which institutions to affiliate with. If you want to influence actual policy, rather than just enjoying money and status, you should ask yourself: do those who sponsor or cite efforts in this area do so more to get support for pre-determined conclusions, or more to get info to help them make choices?

Sometimes sponsors and other consumers of a type of policy analysis know what policies they prefer, but seek prestigious cover to help them do it. So they pay and search for policy analyses in the hope of finding the support they need. With enough policy analyses to choose from, they can find ones to support their predetermined conclusions. But they need these prestigious cover analyses to appear, at least to distant observers, to be honest open attempts at discovery. It can’t be obvious that they were designed to support pre-determined conclusions.

At other times, however, sponsors and consumers are actually uncertain, and seek analyses with unpredictable-to-them conclusions to influence their choices. And these are the only cases where your being a policy analyst has a chance of changing real policy outcomes. Such audiences may see your analysis, or be influenced by someone else who has seen it. So for each analysis that you might produce, you should wonder: what are my chances of influencing such an open-minded chooser?

Here are a few clues to consider:

  1. How predictable are the policy conclusions of the most popular policy analysts in this area? High predictability suggests that sponsors reward such consistency, as it aids their efforts to collect support for predetermined conclusions.
  2. How interested are sponsors and other policy consumers in policy analyses done by very prestigious people and institutions, relative to others? The more open they are to products of low prestige analysts, the better the chance they seek information, instead of just prestigious backing for pre-existing conclusions.
  3. How open is this area to funding and otherwise supporting large relevant experiments (or prediction markets)? Or to applying a strong standard theory with standard assumptions, which together often imply specific conclusions? The more that people are willing to endorse the policy implications of such things before their results become known, the more open that area is to unpredictable new information.

It should be possible to collect evidence on how these factors vary across policy areas. Perhaps a simple survey would be sufficient. Might that publicly reveal to all the relative sincerity of different kinds of sponsors and consumers of policy analysis?
