Automation As Colonization Wave

Our automation data analysis found a few surprising results. We found that labor demand is inversely correlated with education, as if, when facing a labor shortage for a particular kind of worker, employers respond in part by lowering education requirements. And even though more automation directly lowers demand for a job, it seems that labor demand changes, relative to labor supply changes, are smaller for jobs in which automation rises more.

But the most interesting surprise, I think, is that while, over the last twenty years, we’ve seen no noticeable change in the factors that predict which jobs get more automated, we have seen job features change to become more suitable for automation. On average, jobs have moved by about a third of a standard deviation, relative to the distribution of job automation. This is actually quite a lot. Why do jobs change this way?

Consider the example of a wave of human colonization moving over a big land area. Instead of all the land becoming more densely colonized at the same rate everywhere, what you instead see is new colonization happening much more near old colonization. In the U.S., dense concentrations started in the east and slowly spread to the west. There was little point in clearing land to grow stuff if there weren’t enough other folks nearby to whom to sell your crops, and from whom to buy supplies.

If you looked at any particular plot of land and asked what factors predict if it will be colonized soon, you might see those factors stay pretty constant over time. But some key factors would depend on what other land nearby had been colonized recently. In a spatial colonization wave, there can be growth without much change in the underlying tech. Instead, the key dynamic can be that there are big time delays to allow an initial tech potential to become realized via spreading across a large landscape. A colonization wave can be growth without much tech change.

Now think about the space of job tasks as a similar sort of landscape. Tasks are adjacent to other tasks when info or objects are passed from one to the other, when they take place close in place and time, and when their details gain from being coordinated. The ease of automating each task depends on how regular and standardized are its inputs, how easy it is to formalize the info on which key choices depend, how easy it is to evaluate and judge outputs, and how simple, stable, and mild are the physical environments in which this task is done.

When the tasks near a particular task get more automated, those tasks tend to happen in more controlled, stable environments, the relevant info tends to be more formalized, and related info and objects get simpler, more standardized, and more reliably available. All of this tends to make it easier to automate tasks nearby, much like how land is easier to colonize when nearby land has recently been more intensely colonized.
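To see how far the wave logic alone can go, here is a toy simulation, with all parameters invented: tasks sit on a grid, and each period a task’s chance of becoming automated rises with how many of its neighbors are already automated. The rule itself never changes, yet automation keeps spreading.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy wave simulation (all parameters invented): tasks sit on a grid, and
# each period an unautomated task's chance of becoming automated rises with
# the number of automated neighbors. The rule never changes, yet automation
# keeps spreading: growth without tech change.
N, steps = 60, 80
auto = np.zeros((N, N), dtype=bool)
auto[0, 0] = True  # seed: one initially automated task

for _ in range(steps):
    nbrs = sum(np.roll(np.roll(auto, di, 0), dj, 1)
               for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
    auto |= rng.random((N, N)) < 0.02 * nbrs  # fixed rule, no "tech change"

print(f"automated share after {steps} steps: {auto.mean():.0%}")
```

Starting from a single seed, the automated share keeps growing period after period under an unchanging rule: a long delay between initial potential and full realization.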

Among the job features that predict automation most strongly in our analysis are Pace Determined By Speed Of Equipment and Importance of Repeating Same Tasks. These features clearly fit my story here. Many others do as well; here is more from our paper:

Pace Determined By Speed Of Equipment picks out jobs that coordinate closely with machinery, while Importance of Repeating Same Tasks picks out jobs with many similar and independent small tasks. Variety picks out an opposite case of dissimilar tasks. The job features Wear Common Safety Equipment and Indoors Environmentally Controlled pick out tasks done in calm stable environments, where machines function better, while Hearing Sensitivity picks out less suitable complex subtle environments. In jobs with frequent Letters and Memos, such memos tend to be short and standardized. Jobs with more Advancement are “results oriented”, with more clearly measurable results. Simple machines tend to be bad at Thinking Creatively, Innovation and Mathematics. Physical Proximity picks out jobs done close to humans, usually because of needed human interactions, which tend to be complex, and where active machines could risk hurting them.

We have long been experiencing a wave of automation passing across the space of job tasks. Some of this increase in automation has been due to falling computer tech costs, improving algorithms and tools, etc. But much of it may simply be the general potential of this tech being realized via a slow steady process with a long delay: the automation of tasks near other recently automated tasks, slowly spreading across the space of tasks.

Why Not RFID Tag Humans?

Today, across a wide range of contexts, we consistently have rules that say that if you have a thing out there in the world that can move around and do stuff, you need to give it a visible identifier so that folks near that thing can see that identifier, look it up in a registry, and find out who owns it. That identifier might be a visible tag or ID number, it might be an RFID that responds to radio signals with its ID, or it might be capable of more complex talk protocols. We have such rules for pets, cars, trucks, boats, planes, and most recently have added such rules for drones. Most phones and tablets and other devices that communicate electronically also have such identifiers. And few seem to object to more systematic collection of ID info, such as via tag readers.

The reasoning is simple and robust. When a thing gets lost, identifiers help us get it back to its owner. If a thing might bother or hurt someone around it, we want the owner to know that we can hold them responsible for such effects. Yes, there are costs to creating and maintaining IDs and registries (RFID tags today cost ~$0.15). Also, such IDs can empower those who are hostile to you and your things (including governments) to find them and you, and to hurt you both. But we have consistently seen these costs as worth the benefits, especially as device costs have fallen dramatically over the decades.

But when it comes to your personal body, public opinion seems quite strongly opposed:

My 14 law&econ undergrads all agreed when I assigned this topic on their final exam today. People oppose requiring identifiers, and as face readers are now on the verge of making a new ID system, many want to legally ensure a right to wear masks to thwart it.

Yet the tradeoffs seem quite similar to me; it is just the scale of the stakes that rises. When we are talking about your body, as opposed to your car, pet, or drone, you can do more to hurt others, and folks hostile to you might try to do more to you via knowing where you are. But if the ratio of these costs and benefits favors IDs in the other cases, I find it hard to see why that ratio would switch when we get to bodies.

Automation: So Far, Business As Usual

Since at least 2013, many have claimed that we are entering a big automation revolution, and so should soon expect to see large trend-deviating increases in job automation levels, in related job losses, and in patterns of which jobs are more automated.

For example, in the October 15 Democratic debate between 12 U.S. presidential candidates, 6 addressed automation concerns introduced via this moderator’s statement:

According to a recent study, about a quarter of American jobs could be lost to automation in just the next ten years.

Most revolutions do not appear suddenly or fully-formed, but instead grow from precursor trends. Thus we might hope to test this claim of an automation revolution via a broad study of recent automation.

My coauthor Keller Scholl and I have just released such a study. We use data on 1505 expert reports regarding the degree of automation of 832 U.S. job types over the period 1999-2019, and similar reports on 153 other job features, to try to address these questions:

  1. Is automation predicted by two features suggested by basic theory: pay and employment?
  2. Do expert judgements on which particular jobs are vulnerable to future automation predict which jobs were how automated in the recent past?
  3. How well can we predict each job’s recent degree of automation from all available features?
  4. Have the predictors of job automation changed noticeably over the last two decades?
  5. On average, how much have levels of job automation changed in the last two decades?
  6. Do changes in job automation over the last two decades predict changes in pay or employment for those jobs?
  7. Do other features, when interacted with automation, predict changes in pay or employment?

Bottom line: we see no signs of an automation revolution. From our paper's conclusion:

We find that both wages and employment predict automation in the direction predicted by simple theory. We also find that expert judgements on which jobs are more vulnerable to future automation predict which jobs have been how automated recently. Controlling for such factors, education does not seem to predict automation.

However, aside perhaps from education, these factors no longer help predict automation when we add (interpolated extensions of) the top 25 O*NET variables, which together predict over half the variance in reported automation. The strongest O*NET predictor is Pace Determined By Speed Of Equipment and most predictors seem understandable in terms of traditional mechanical styles of job automation.

We see no significant change over our time period in the average reported automation levels, or in which factors best predict those levels. However, we can’t exclude the possibility of drifting standards in expert reports; if so, automation may have increased greatly during this period. The main change that we can see is that job factors have become significantly more suitable for automation, by enough to raise automation by roughly one third of a standard deviation.

Changes in pay and employment tend to predict each other, suggesting that labor market changes tend more to be demand than supply changes. These changes seem weaker when automation increases. Changes in job automation do not predict changes in pay or employment; the only significant term out of six suggests that employment increases with more automation. Falling labor demand correlates with rising job education levels.

None of these results seem to offer much support for claims that we are in the midst of a trend-deviating revolution in levels of job automation, related job losses, or in the factors that predict job automation. If such a revolution has begun, it has not yet noticeably influenced this sort of data, though continued tracking of such data may later reveal such a revolution. Our results also offer little support for claims that a trend-deviating increase in automation would be accompanied by large net declines in pay or employment. Instead, we estimate that more automation mainly predicts weaker demand fluctuations, relative to supply fluctuations, in labor markets.
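For intuition on the “predict over half the variance” figure above, here is a sketch of that kind of fit on synthetic stand-in data; the features, weights, and noise level below are all invented, chosen only so that the features explain roughly half the variance:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Stand-in for the fit behind "predict over half the variance": 832 jobs and
# 25 features (mimicking the top O*NET variables), with feature weights and
# noise level invented so features explain about half the variance.
n_jobs, n_features = 832, 25
X = rng.normal(size=(n_jobs, n_features))
beta = rng.normal(size=n_features)
y = X @ beta + rng.normal(scale=np.linalg.norm(beta), size=n_jobs)

r2 = LinearRegression().fit(X, y).score(X, y)
print(f"share of variance explained: {r2:.2f}")  # about 0.5 by construction
```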

Unending Winter Is Coming

Toward the end of the TV series Game of Thrones, a big, long (multi-year) winter was coming, and while everyone should have been saving up for it, they were instead spending lots to fight wars; when others spend on war, that forces you to spend on war too, and then to suffer a terrible winter. The long term future of the universe may be much like this, except that future winter will never end! Let me explain.

The key universal resource is negentropy (and time), from which all others can be gained. For a very long time almost all life has run on the negentropy in sunshine landing on Earth, but almost all of that has been spent in the fierce competition to live. The things that do accumulate, such as innovations embodied in genomes, can’t really be spent to survive. However, as sunlight varies by day and season, life does sometimes save up resources during one part of a cycle, to spend in the other part of a cycle.

Humans have been growing much more rapidly than nature, but we also have had strong competition, and have also mostly only accumulated the resources that can’t directly be spent to win our competitions. We do tend to accumulate capital in peacetime, but every so often we have a big war that burns most of that up. It is mainly our remaining people and innovations that let us rebuild.

Over the long future, our descendants will gradually get better at gaining faster and cheaper access to more resources. Instead of drawing on just the sunlight coming to Earth, we’ll take all light from the Sun, and then we’ll take apart the Sun to make engines that we better control. And so on. Some of us may even gain long term views, that prioritize the very long run.

However, it seems likely that our descendants will be unable to coordinate on universal scales to prevent war and theft. If so, then every so often we will have a huge war, at which point we may burn up most of the resources that can be easily accessed on the timescale of that war. Between such wars, we’d work to increase the rate at which we could access resources during a war. And our need to watch out for possible war will force us to continually spend a non-trivial fraction of our accessible resources watching and staying prepared for war.

The big problem is: the accessible universe is finite, and so we will only ever be able to access a finite amount of negentropy. No matter how much we innovate. While so far we’ve mainly been drawing on a small steady flow of negentropy, eventually we will get better and faster access to the entire stock. The period when we use most of that stock is our universe’s one and only “summer”, after which we face an unending winter. This implies that when a total war shows up, we are at risk of burning up large fractions of all the resources that we can quickly access. So the larger a fraction of the universe’s negentropy that we can quickly access, the larger a fraction of all resources that we will ever have that we will burn up in each total war.

And even between the wars, we will need to watch out and stay prepared for war. If one uses negentropy to do stuff slowly and carefully, then the work that one can do with a given amount of negentropy is typically proportional to the inverse of the rate at which one does that work. This is true for computers, factories, pipes, drag, and much else. So ideally, the way to do the most with a fixed pot of negentropy is to do it all very slowly. And if the universe will last forever, that seems to put no bound on how much we can eventually do.

Alas, given random errors due to cosmic rays and other fluctuations, there is probably a minimum speed for doing the most with some negentropy. So the amount we can eventually do may be big, but it remains finite. However, that optimal pace is probably many orders of magnitude slower than our current speeds, letting our descendants do a lot.
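Here is a toy version of this tradeoff, with all constants invented: dissipation costs per operation grow with speed, error-correction costs shrink with it, and a fixed negentropy budget buys the most operations at an interior optimal speed, slow but not arbitrarily slow.

```python
import numpy as np

# Toy model (all constants invented): running at speed r costs negentropy
# per operation a*r from dissipation (so work done with a fixed budget goes
# as 1/rate), plus b/r from correcting errors that accumulate while each
# slow operation sits exposed to cosmic rays. A fixed budget S then buys
# the most operations at an interior optimum.
a, b, S = 1e-6, 1e-12, 1.0

r = np.logspace(-6, 3, 400)      # candidate speeds
ops = S / (a * r + b / r)        # total operations the budget buys

best = r[np.argmax(ops)]
print(f"optimal speed ~ {best:.1e} (analytic: {np.sqrt(b / a):.1e})")
# Much faster wastes negentropy on dissipation; much slower, on errors.
```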

The problem is, descendants who go maximally slow will make themselves very vulnerable to invasion and theft. For an analogy, imagine how severe our site security problems would be today if any one person could temporarily “grow” and become as powerful as a thousand people, but only after a one hour delay. Any one intruder who grew while onsite could wreak havoc and then be gone within an hour, before local security forces could grow to respond. Similarly, when most future descendants run very slow, one who suddenly chose to run very fast might have a huge outsized influence before the others could effectively respond.

So the bottom line is that if war and theft remain possible for our descendants, the rate at which they do things will be much faster than the far slower speed that would be most efficient. In order to adequately watch out for and respond to attacks, they will have to run fast, and thus more quickly use up their available stocks of resources, such as stars. And when their stocks run out, the future will have run out for them. Like in a Game of Thrones scenario after a long winter war, they would then starve.

Now it is possible that there will be future resources that simply cannot be exploited quickly. Such as perhaps big black holes. In this case some of our descendants could last for a very long time slowly sipping on such supplies. But their activity levels at that point would be much lower than their rates before they used up all the other faster-access resources.

Okay, let’s put this all together into a picture of the long term future. Today we are growing fast, and getting better at accessing more kinds of resources faster. Eventually our growth in resource use will reach a peak. At that point we will use resources much faster than today, and also much faster than what would be the most efficient rate if we could all coordinate to prevent war and theft. Maybe a billion times faster or more. Fearing war, we will keep spending to watch and prepare for war, and every once in a while we will burn up most accessible resources in a big war. After using up faster-access resources, we will switch to lower activity levels using resources that we just can’t extract as fast, no matter how clever we are. Then we will use up each of those much faster than optimal, with activity levels falling after each source is used up.

That is, unless we can prevent war and theft, our long term future is an unending winter, wherein we use up most of our resources in early winter wars, and then slowly die and shrink and slow and war as the winter continues, on to infinity. And as a result we will do much less than we could have otherwise; perhaps a billion times less or more. (Though still vastly more than we have done so far.) And this is all if we are lucky enough to avoid existential risk, which might destroy it all prematurely, leading instead to a fully-dead empty eternity.

Happy holidays.

Designing Crime Bounties

I’ve been thinking about how to design a bounty system for enforcing criminal law. It is turning out to be a bit more complex than I’d anticipated, so I thought I’d try to open up this design process, by telling you of key design considerations, and inviting your suggestions.

The basic idea is to post bounties, paid to the first hunter to convince a court that a particular party is guilty of a particular crime. In general that bounty might be paid by many parties, including the government, though I have in mind a vouching system, wherein the criminal’s voucher pays a fine, and part of that goes to pay a bounty. 

Here are some key concerns:

  1. There needs to be a budget to pay bounties to hunters.
  2. We don’t want criminals to secretly pay hunters to not prosecute their crimes.
  3. We may not want the chance of catching each crime to depend a lot on one hunter’s random ability.
  4. We want incentives to adapt, i.e., use the most cost-effective hunter for each particular case. 
  5. We want incentives to innovate, i.e., develop more cost-effective ways to hunt over time. 
  6. The first hunter allowed to see a crime scene, or do an autopsy, etc., may mess it up for other hunters.
  7. We may want suspects to have a right against double jeopardy, so they can only be prosecuted once.
  8. Giving many hunters extra rights to penetrate privacy shields may greatly reduce effective privacy.
  9. It may be a waste of time and money for several hunters to simultaneously pursue the same crime. 
  10. Witnesses may chafe at having to be interviewed by several hunters re the same events.

In typical ancient legal systems, a case would start with a victim complaint. The victim, with help from associates, would then pick a hunter, and pay that hunter to find and convict the guilty. The ability to sell the convicted into slavery and to get payment from their families helped with 1, but we no longer allow these, making this system problematic. Which is part of why we’ve added our current system. Victims have incentives to address 2-4, though they might not have sufficient expertise to choose well. Good victim choices give hunters incentive to address 5. The fact that victims picked particular hunters helped with 6-10. 

The usual current solution is to have a centrally-run government organization. Cases start via citizen complaints and employee patrols. Detectives are then assigned mostly at random to particular local cases. If an investigation succeeds enough, the case is given to a random local prosecutor. Using government funds helps with 1, and selecting high quality personnel helps somewhat with 3. Assigning particular people to particular cases helps with 6-10.  Choosing people at random, heavy monitoring, and strong penalties for corruption can help with 2. This system doesn’t do so well on issues 4-5. 

The simplest way to create a bounty system is to just authorize a free-for-all, allowing many hunters to pursue each crime. The competition helps with 2-5, but having many possible hunters per crime hurts on issues 6-10. One way to address this is to make one hunter the primary hunter for each crime, the only one allowed any special access and the only one who can prosecute it. But there needs to be a competition for this role, if we are to deal well with 3-5.

One simple way to have a competition for the role of primary hunter of a crime is an initial auction; the hunter who pays the most gets it. At least this makes sense when a crime is reported by some other party. If a hunter is the one to notice a crime, it may make more sense for that hunter to get that primary role. The primary hunter might then sell that role to some other hunter, at which time they’d transfer the relevant evidence they’ve collected. (Harberger taxes might ease such transfers.)
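Since Harberger taxes came up just above, here is a minimal sketch of how such a rule might ease role transfers; the class, names, and numbers are my own illustration, not a worked-out proposal:

```python
# Sketch of a Harberger-style transfer rule for the primary-hunter role
# (class, names, and numbers invented): the holder declares a value, pays a
# periodic tax on that declaration, and must sell the role to anyone
# offering the declared price.
class PrimaryRole:
    def __init__(self, holder, declared_value, tax_rate=0.05):
        self.holder = holder
        self.declared_value = declared_value
        self.tax_rate = tax_rate

    def tax_due(self):
        # Taxing the declaration discourages inflated valuations...
        return self.tax_rate * self.declared_value

    def try_buy(self, buyer, offer):
        # ...while the forced sale discourages lowball declarations.
        if offer >= self.declared_value:
            self.holder, self.declared_value = buyer, offer
            return True
        return False

role = PrimaryRole("hunter_A", 10_000)
print(role.tax_due())                    # 500.0 per period
print(role.try_buy("hunter_B", 12_000))  # True: role transfers to hunter_B
```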

Profit-driven hunters help deal with 3-5, but problem 2 is big if selling out to the criminal becomes the profit-maximizing strategy. That gets especially tempting when the fine that the criminal pays (or the equivalent punishment) is much more than the bounty that the hunter receives. One obvious solution is to make such payoffs a crime, and to reduce hunter privacy in order to allow other hunters to find and prosecute violations. But is that enough?

Another possible solution is to have the primary hunter role expire after a time limit, if that hunter has not formally prosecuted someone by then. The role could then be re-auctioned. This might need to be paired with penalties for making overly weak prosecutions, such as loser-pays on court costs. And the time delay might make the case much harder to pursue.

I worry enough about issue 2 that I’m still looking for other solutions. One quite different solution is to use decision markets to assign the role of primary hunter for a case. Using decision markets that estimate expected fines recovered would push hunters to accumulate track records showing high fine recovery rates. 

Being paid by criminals to ignore crimes would hurt such track records, and thus such corruption would be discouraged. This approach could rely less on making such payoffs illegal and on reduced hunter privacy. 

The initial hunter assignment could be made via decision markets, and at any later time that primary role might be transferred if a challenger could show a higher expected fine recovery rate, conditional on their becoming primary. It might make sense to require the old hunter to give this new primary hunter access to the evidence they’ve collected so far. 
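Here is a minimal sketch of that decision-market assignment and challenge process, with hunter names and dollar figures invented:

```python
# Sketch of decision-market assignment (hunter names and dollar figures
# invented): each market prices the fine expected to be recovered
# *conditional on* that hunter being primary; trades in the losing
# conditions are called off.
estimates = {"hunter_A": 9_000, "hunter_B": 12_500, "hunter_C": 7_200}
primary = max(estimates, key=estimates.get)
print(primary)  # hunter_B

def challenge(current, challenger, estimates):
    # A challenger displaces the primary only by showing a strictly higher
    # expected recovery, conditional on the challenger becoming primary.
    return challenger if estimates[challenger] > estimates[current] else current

print(challenge("hunter_B", "hunter_C", estimates))  # hunter_B keeps the role
```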

This is as far as my thoughts have gone at the moment. The available approaches seem okay, and probably better than what we are doing now. But maybe there’s something even better that you can suggest, or that I will think of later. 

Social Roles Make Sense

The modern world relies greatly on a vast division of labor, wherein we each do quite different tasks. Partially as a result, we live in different places, have different lifestyles, and associate with different people. The ancient world also had a division of labor, but in addition to doing different tasks, people tended to have expectations about what kinds of people would tend to do what kinds of tasks, live where, and associate with whom. Often strong expectations. Such expectations can be called social “roles”.

For example, in a society with “gender roles”, there are widely shared expectations regarding the kinds of tasks that women do, relative to men. In some societies these expectations have been so strong that all women were strongly and directly prevented from doing any other tasks. But more commonly, expectations could often be violated, if one paid a sufficient price. Similarly, ancient societies often had roles related to family, ethnicity, class, age, body plan, personality, and geographic location. People who started life with particular values of these parameters were channeled into particular tasks, places, training regimes, and associations, choices that tended to support their doing particular future tasks, with matching lifestyles, associations, etc.

When there is an existing pattern of what sorts of people tend to do what tasks and fill what social slots, then it is natural and cost-reducing to at least weakly use those patterns to predict what sorts of people will do well at what tasks in the near future. Furthermore, it is natural and cost-reducing to at least weakly use future task expectations to decide the locations, training, associations, etc., of people earlier in life.

It seems obvious to me that it is possible to have both overly weak and overly strong social roles. With overly strong social roles, we rely too much on initial expectations, experiment too little with alternate allocations, and act too little on any info we acquire about people as their lives progress. But with overly weak social roles, we rely too little on easily accessible info on what sorts of people are likely to end up well-suited to particular roles.

For example, consider climate roles. If you grow up in a particular climate, there’s a better than random chance that you will live in a similar climate when you are older. So it makes sense early in life for you to adapt to that climate in your habits and attitudes. When people are looking later for someone to live or work in that climate, it makes sense for them to prefer people already experienced with that climate. Part of this could be genetic, in that people with genes well suited to a climate may have been previously preferentially selected to live there. But it mostly doesn’t matter the cause; it just makes sense to respond to these patterns in the obvious way.

(Yes, sometimes one will want to pick people who seem especially badly-matched to certain tasks or context, just to experiment and check one’s assumptions about matching. But such experiments are unusual as choices.)

Of course the world may sometimes stumble into inefficient equilibria, wherein we keep tending to assign certain sorts of people to certain tasks, when we’d be even better off with some other pattern of who does what. In such cases we might try to break out of previous patterns, in part via discouraging people from using some features as cues to assigning some aspects of tasks, locations, associations, etc. This is one possible justification for “anti-discrimination” rules and laws.

But this certainly doesn’t justify a general prohibition on any sorts of social roles whatsoever. And any decisions based on theories saying that we were in inefficient equilibria should be periodically re-examined, to see if observed patterns of who seems to be good at what support such theories. We might have been mistaken. And unless there is some market failure that we must continually fight against, we should expect to need anti-discrimination rules only for a limited time, until new and better equilibria can be reached.

Yes, among the features that we can use to estimate who is fit for what roles, some of those features are easier for individuals to change, while others are harder to change. However, it isn’t clear why this distinction matters that much re the suitability of such features for task assignment. Even when features can change, there will be a cost of such changes, and so it will often be more cost-effective to use people who already have the suitable features, instead of getting other people to change to become suitable.

From a conversation with John Nye.

What Info Is Verifiable?

For econ topics where info is relevant, including key areas of mechanism design, and law & econ, we often make use of a key distinction: verifiable versus unverifiable info. For example, we might say that whether it rains in your city tomorrow is verifiable, but whether you feel discouraged tomorrow is not verifiable. 

Verifiable info can much more easily be the basis of a contract or a legal decision. You can insure yourself against rain, but not discouragement, because insurance contracts can refer to the rain, and courts can enforce those contract terms. And as courts can also enforce bets about rain, prediction markets can incentivize accurate forecasts on rain. Without that, you have to resort to the sort of mechanisms I discussed in my last post. 

Often, traffic police can officially pull over a car only if they have a verifiable reason to think some wrong has been done, but not if they just have a hunch. In the blockchain world, things that are directly visible on the blockchain are seen as verifiable, and thus can be included in smart contracts. However, blockchain folks struggle to make “oracles” that might allow other info to be verifiable, including most info that ordinary courts now consider to be verifiable. 

Wikipedia is a powerful source of organized info, but only info that is pretty directly verifiable, via cites to other sources. The larger world of media and academia can say many more things, via its looser and more inclusive concepts of “verifiable”. Of course once something is said in those worlds, it can then be said on Wikipedia via citing those other sources.

I’m eager to reform many social institutions more in the direction of paying for results. But these efforts are limited by the kinds of results that can be verified, and thus become the basis of pay-for-results contracts. In mechanism design, it is well known that it is much easier to design mechanisms that get people to reveal and act on verifiable info. So the long term potential for dramatic institution gains may depend crucially on how much info can be made verifiable. The coming hypocralypse may result from the potential to make widely available info into verifiable info. More direct mind-reading tech might have a similar effect. 

Given all this reliance on the concept of verifiability, it is worth noting that verifiability seems to be a social construct. Info exists in the universe, and the universe may even be made out of info, but this concept of verifiability seems to be more about when you can get people to agree on a piece of info. When you can reliably ask many different sources and they will all confidently tell you the same answer, we tend to treat that as verifiable. (Verifiability is related to whether info is “common knowledge” or “common belief”, but the concepts don’t seem to be quite the same.)
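One crude way to operationalize that reading, as a sketch rather than a theory: treat info as verifiable when enough independently queried sources confidently agree.

```python
# One operational reading of the paragraph above (a sketch, not a theory):
# treat info as verifiable when enough independently queried sources
# confidently give the same answer.
def is_verifiable(reports, min_sources=3, min_agreement=0.95):
    """reports: answers from independently queried sources."""
    if len(reports) < min_sources:
        return False
    top = max(set(reports), key=reports.count)
    return reports.count(top) / len(reports) >= min_agreement

print(is_verifiable(["rain", "rain", "rain", "rain"]))   # True
print(is_verifiable(["discouraged", "fine", "unsure"]))  # False
```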

It is a deep and difficult question what actually makes info verifiable. Sometimes when we ask the same question to many people, they will coordinate to tell us the answer that we or someone wants to hear, or will punish them for contradicting. But at other times when we ask many people the same question, it seems like their best strategy is just to look directly at the “truth” and report that. Perhaps because they find it too hard to coordinate, or because implicit threats are weak or ambiguous. 

The question of what is verifiable opens an important meta question: how can we verify claims of verifiability? For example, a totalitarian regime might well insist not only that everyone agree that the regime is fair and kind, a force for good, but that they agree that these facts are clear and verifiable. Most any community with a dogma may be tempted to claim not only that their dogma is true, but also that it is verifiable. This can allow such dogma to be the basis for settling contract disputes or other court rulings, such as re crimes of sedition or treason.

I don’t have a clear theory or hypothesis to offer here, but while this was in my head I wanted to highlight the importance of this topic, and its apparent openness to investigation. While I have no current plans to study this, it seems quite amenable to study now, at least by folks who understand enough of both game theory and a wide range of social phenomena.  

Added 3Dec: Here is a recent paper on how easy mechanisms get when info is verifiable.

A New Truth Mechanism

Early in 2017 I reported:

This week Nature published some empirical data on a surprising-popularity consensus mechanism. The idea is to ask people to pick from several options, and also to have each person forecast the distribution of opinion among others. … Compared to prediction markets, this mechanism doesn’t require that those who run the mechanism actually know the truth later. … The big problem … however, is that it requires that learning the truth be the cheapest way to coordinate opinion. …. I can see variations on [this method] being used much more widely to generate standard safe answers that people can adopt with less fear of seeming strange or ignorant. But those who actually want to find true answers even when such answers are contrarian, they will need something closer to prediction markets.
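For concreteness, here is a minimal sketch of that surprising-popularity rule on invented data: pick the answer whose actual frequency most exceeds the frequency that respondents predicted.

```python
import numpy as np

# Minimal sketch of the surprisingly-popular rule on invented data: each
# respondent gives an answer plus a predicted distribution of others'
# answers; pick the answer whose actual frequency most exceeds its mean
# predicted frequency.
options = ["yes", "no"]
answers = ["yes", "no", "no", "yes", "yes"]
predictions = np.array([   # each row: predicted shares for ["yes", "no"]
    [0.3, 0.7],
    [0.4, 0.6],
    [0.2, 0.8],
    [0.5, 0.5],
    [0.3, 0.7],
])

actual = np.array([answers.count(o) / len(answers) for o in options])
predicted = predictions.mean(axis=0)
winner = options[int(np.argmax(actual - predicted))]
print(winner)  # "yes": 60% actual vs 34% predicted
```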

In a new mechanism by Yuqing Kong, N agents simultaneously and without communication give answers to T questions, each of which has C possible answers. The clues that agents have about each question can be arbitrarily correlated, and agents can have differing priors about that clue distribution. However, clues must be identically and independently distributed (IID) across questions. If T ≥ 2C and N ≥ 2, then in this new mechanism telling the “truth” (i.e., the answer indicated by one’s clue) is a dominant strategy, with a strictly higher payoff if anyone else also tells the truth!
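As I understand the construction, the payments are determinant-based and pairwise; here is a simplified sketch on invented reports, omitting normalization and payment signs, so see the paper for the exact mechanism.

```python
import numpy as np

# Simplified sketch of determinant-based pairwise scoring, as I understand
# the construction (see the paper for the exact mechanism): split the T
# questions into two disjoint halves, build the C x C joint answer-count
# matrices for a pair of agents on each half, and pay in proportion to the
# product of the determinants.
def pairwise_payment(ans_i, ans_j, C):
    T = len(ans_i)
    half = T // 2
    def joint_counts(lo, hi):
        M = np.zeros((C, C))
        for t in range(lo, hi):
            M[ans_i[t], ans_j[t]] += 1
        return M
    return np.linalg.det(joint_counts(0, half)) * np.linalg.det(joint_counts(half, T))

# Tiny example with C = 2 answers and T = 4 >= 2C questions:
print(pairwise_payment([0, 1, 0, 1], [0, 1, 0, 1], C=2))  # 1.0: informative
print(pairwise_payment([0, 1, 0, 1], [0, 0, 0, 0], C=2))  # 0.0: constant reports
```

With both halves informative the product is positive, while constant or uncorrelated reporting pushes it toward zero; this is the sense in which telling the truth pays.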

This is a substantial advance over the prior literature, and I expect future mechanisms to weaken the IID-across-questions constraint. Alas, even so, this seems to suffer from the same key problem of needing truth to be the cheapest way for respondents to coordinate answers. I expect this problem to be much harder to overcome.

Of course if you add “truth speakers” as some of the agents, and wait for those speakers’ input before paying the other participants, you get something much closer to a prediction market.

Occam’s Policy Razor

Nine experiments provide support for promiscuous condemnation: the general tendency to assume that ambiguous actions are immoral. Both cognitive and functional arguments support the idea of promiscuous condemnation. (More)

The world is full of inefficient policies. But why, when many can simply and clearly explain why such policies are inefficient? The following concrete example suggests a simple explanation:

Logically, it doesn’t seem cruel to offer someone an extra option, if you don’t thereby change their other options. Two thirds of poll respondents agree re this prisoner case. However, 94% also think that the world media would roast any nation that did this, and would get away with it. And I agree with these poll respondents in both cases.

Most of the audience of that world media would not be paying close attention, and would not care greatly about truth. They would instead make a quick and shallow calculation: will many find this accusation innately plausible and incendiary enough to stick, and would I like that? If the answer is yes, they add their pitchforks to the mob. That’s the sort of thing I’ve seen with internet mobs lately, and also with prior media mobs.

As most of the world is eager to call the United States an evil empire driven by evil intent, any concrete U.S. support for torture might plausibly be taken as evidence for such evil intent, at least to observers who aren’t paying much attention. So even those who know that in such cases allowing torture can be better policy would avoid supporting it. Add in large U.S. mobs who are also not paying attention, and who might like to accuse U.S. powers of ill intent, and we get our situation where almost no one is willing to seriously suggest that we offer torture substitutes for prison. Even though that would help.

Similar theories can explain many other inefficient policies, such as laws against prostitution, gambling, and recreational drugs. We might know that such policies are ineffective and harmful, and yet not be able to bring ourselves to publicly support ending such bans, for fear of being accused of bad intent. This account might even explain policies to punish the rich, big business, and foreigners. The more that contrary policies could be spun to distracted observers as showing evil intent, the more likely such inefficient policies are to be adopted.

Is there any solution? Consider the example of Congress creating a commission to recommend which U.S. military bases to close, where afterward Congress could only approve or reject the whole list, without making changes. While bills to close individual bases would have been met with fierce difficult-to-overcome opposition, this way to package base closings into a bundle allowed Congress to actually close many inefficient bases.

Also consider how a nation can resist international pressure to imprison one disliked person, or to censor one disliked book. In the first case the nation may plead “we follow a general rule of law, and our law has not yet convicted this person”, while in the second case the nation may plead “We have adopted a general policy of free speech, which limits our ability to ban individual books.”

I see a pattern here: simpler policy spaces, with fewer degrees of freedom, are safer from bias, corruption, special-pleading, and selfish lobbying. A political system choosing from a smaller space of possible policies that will then apply to a large range of situations seems to make more efficient choices.

Think of this as Occam’s Policy Razor. In science, Occam’s Theory Razor says to pick the simplest theory that can fit the data. Doing this can help fractious scientific communities to avoid bias and favoritism in theory choice. Similarly, Occam’s Policy Razor says to limit policy choices to the smallest space of policies which can address the key problems for which policies are needed. More complexity to address complex situation details is mostly not worth the risk. This policy razor may help fractious political communities to avoid bias and favoritism in policy choice.

Yes, I haven’t formalized this much, and this is still a pretty sloppy analysis. And yes, there are in fact many strong criticisms of Occam’s Razor in science. Even so, it feels like there may be something to this. And futarchy seems to me a good example of this principle. In a futarchy with a simple value function based on basic outcomes like population, health, and wealth, voting on values but betting on beliefs would probably mostly legalize things like prostitution, gambling, recreational drugs, immigration, and big business. It would probably even let prisoners pick torture.

Today we resist world mob disapproval regarding particular people we don’t jail, or particular books we don’t ban, by saying “Look we have worked out general systems to deal with such things, and it isn’t safe for us to give some folks discretion to make exceptions just because a mob somewhere yells”. Under futarchy, we might similarly resist world disapproval of our prostitution, etc. legalization by saying:

Look, we have chosen a simple general system to deal with such things, and we can’t trust giving folks discretion to make policy exceptions just because talking heads somewhere scowl. So far our system hasn’t banned those things, and if you don’t like that outcome then participate in our simple general system, to see if you can get your desired changes by working through channels.

By limiting ourselves to simple general choices, we might also tend to make more efficient choices, to our overall benefit.

Prestige Blocks Reform

At several recent conferences, I suggested to the organizers that I talk about social institution innovation, but they preferred I talk about my tech related work (or not talk at all). At those events they did have other people talk about social reforms and innovations, and all those speakers were relatively high status people with backgrounds in “hard” sciences (e.g., physics or computer science). And to my eyes, their suggestions and analysis were amateurish.

Curious about this pattern, I did these Twitter polls:

So while more of us would rather hear about social analysis from a social expert, more of us would rather hear about social reform proposals from prestigious hard scientists. This makes sense if we see reform as a social coordination game: if we only want to support reforms that we expect to be supported by many high status folks, we need high status advocates to be our focal points to get the ball rolling.

Alas, since hard scientists tend to know little social science and to think little of social scientists, the reforms they suggest tend to be low quality, at least by social scientist standards. Furthermore, since prestige-driven social systems have done well for them personally, and are said to do well in running their hard science world, they will tend to promote such systems as reforms. Yet I think replacing such systems should be one of our main social reform priorities.
