Author Archives: Robin Hanson

Automation: So Far, Business As Usual

Since at least 2013, many have claimed that we are entering a big automation revolution, and so should soon expect to see large trend-deviating increases in job automation levels, in related job losses, and in patterns of which jobs are more automated.

For example, in the October 15 Democratic debate between 12 U.S. presidential candidates, 6 of them addressed automation concerns introduced via this moderator’s statement:

According to a recent study, about a quarter of American jobs could be lost to automation in just the next ten years.

Most revolutions do not appear suddenly or fully-formed, but instead grow from precursor trends. Thus we might hope to test this claim of an automation revolution via a broad study of recent automation.

My coauthor Keller Scholl and I have just released such a study. We use data on 1505 expert reports regarding the degree of automation of 832 U.S. job types over the period 1999-2019, and similar reports on 153 other job features, to try to address these questions:

  1. Is automation predicted by two features suggested by basic theory: pay and employment?
  2. Do expert judgements on which particular jobs are vulnerable to future automation predict which jobs were how automated in the recent past?
  3. How well can we predict each job’s recent degree of automation from all available features?
  4. Have the predictors of job automation changed noticeably over the last two decades?
  5. On average, how much have levels of job automation changed in the last two decades?
  6. Do changes in job automation over the last two decades predict changes in pay or employment for those jobs?
  7. Do other features, when interacted with automation, predict changes in pay or employment?
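Question 3 above amounts to a regression of reported automation levels on job features. As a minimal sketch of that kind of analysis, here is an ordinary-least-squares fit on synthetic data; the feature names echo those discussed below, but every number here is invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic job-level data (illustrative only; not the paper's data).
n_jobs = 500
pay = rng.normal(0, 1, n_jobs)                # standardized log wage
employment = rng.normal(0, 1, n_jobs)         # standardized log employment
pace_by_equipment = rng.normal(0, 1, n_jobs)  # an O*NET-style job feature

# Suppose reported automation loads mostly on the O*NET-style feature.
automation = (0.7 * pace_by_equipment + 0.1 * pay + 0.1 * employment
              + rng.normal(0, 0.5, n_jobs))

# Ordinary least squares: intercept plus three predictors.
X = np.column_stack([np.ones(n_jobs), pay, employment, pace_by_equipment])
coef, *_ = np.linalg.lstsq(X, automation, rcond=None)

# Fraction of variance in reported automation explained by the features.
r2 = 1 - np.var(automation - X @ coef) / np.var(automation)
print(coef.round(2), round(r2, 2))
```

With data built this way, the fitted coefficient on the equipment-pace feature dominates, mirroring the paper's finding that such mechanical-style features predict over half the variance.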

Bottom line: we see no signs of an automation revolution. From our paper’s conclusion:

We find that both wages and employment predict automation in the direction predicted by simple theory. We also find that expert judgements on which jobs are more vulnerable to future automation predict which jobs have been how automated recently. Controlling for such factors, education does not seem to predict automation.

However, aside perhaps from education, these factors no longer help predict automation when we add (interpolated extensions of) the top 25 O*NET variables, which together predict over half the variance in reported automation. The strongest O*NET predictor is Pace Determined By Speed Of Equipment and most predictors seem understandable in terms of traditional mechanical styles of job automation.

We see no significant change over our time period in the average reported automation levels, or in which factors best predict those levels. However, we can’t exclude the possibility of drifting standards in expert reports; if so, automation may have increased greatly during this period. The main change that we can see is that job factors have become significantly more suitable for automation, by enough to raise automation by roughly one third of a standard deviation.

Changes in pay and employment tend to predict each other, suggesting that labor market changes tend to be demand rather than supply changes. These changes seem weaker when automation increases. Changes in job automation do not predict changes in pay or employment; the only significant term out of six suggests that employment increases with more automation. Falling labor demand correlates with rising job education levels.

None of these results seem to offer much support for claims that we are in the midst of a trend-deviating revolution in levels of job automation, related job losses, or in the factors that predict job automation. If such a revolution has begun, it has not yet noticeably influenced this sort of data, though continued tracking of such data may later reveal such a revolution. Our results also offer little support for claims that a trend-deviating increase in automation would be accompanied by large net declines in pay or employment. Instead, we estimate that more automation mainly predicts weaker demand, relative to supply, fluctuations in labor markets.


Unending Winter Is Coming

Toward the end of the TV series Game of Thrones, a big long (multi-year) winter was coming, and while everyone should have been saving up for it, they were instead spending lots to fight wars. Because when others spend on war, that forces you to spend on war too, and then everyone suffers a terrible winter. The long term future of the universe may be much like this, except that future winter will never end! Let me explain.

The key universal resource is negentropy (and time), from which all others can be gained. For a very long time almost all life has run on the negentropy in sunshine landing on Earth, but almost all of that has been spent in the fierce competition to live. The things that do accumulate, such as innovations embodied in genomes, can’t really be spent to survive. However, as sunlight varies by day and season, life does sometimes save up resources during one part of a cycle, to spend in the other part of a cycle.

Humans have been growing much more rapidly than nature, but we also have had strong competition, and have also mostly only accumulated the resources that can’t directly be spent to win our competitions. We do tend to accumulate capital in peacetime, but every so often we have a big war that burns most of that up. It is mainly our remaining people and innovations that let us rebuild.

Over the long future, our descendants will gradually get better at gaining faster and cheaper access to more resources. Instead of drawing on just the sunlight coming to Earth, we’ll take all light from the Sun, and then we’ll take apart the Sun to make engines that we better control. And so on. Some of us may even gain long term views, that prioritize the very long run.

However, it seems likely that our descendants will be unable to coordinate on universal scales to prevent war and theft. If so, then every so often we will have a huge war, at which point we may burn up most of the resources that can be easily accessed on the timescale of that war. Between such wars, we’d work to increase the rate at which we could access resources during a war. And our need to watch out for possible war will force us to continually spend a non-trivial fraction of our accessible resources watching and staying prepared for war.

The big problem is: the accessible universe is finite, and so we will only ever be able to access a finite amount of negentropy. No matter how much we innovate. While so far we’ve mainly been drawing on a small steady flow of negentropy, eventually we will get better and faster access to the entire stock. The period when we use most of that stock is our universe’s one and only “summer”, after which we face an unending winter. This implies that when a total war shows up, we are at risk of burning up large fractions of all the resources that we can quickly access. So the larger a fraction of the universe’s negentropy that we can quickly access, the larger a fraction of all resources that we will ever have that we will burn up in each total war.
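A toy depletion model makes the point above concrete: if each total war burns a fixed fraction of the currently accessible stock, then the more of the total stock that is quickly accessible, the larger the share of all resources ever available that each war destroys. The particular fractions below are arbitrary, chosen only to illustrate the contrast.

```python
def stock_after_wars(total_stock, accessible_fraction, burn_fraction, n_wars):
    """Toy model: each war burns burn_fraction of the then-accessible stock."""
    remaining = total_stock
    for _ in range(n_wars):
        accessible = accessible_fraction * remaining
        remaining -= burn_fraction * accessible
    return remaining

# Early era: only 1% of the stock is quickly accessible during a war.
early = stock_after_wars(1.0, accessible_fraction=0.01, burn_fraction=0.9, n_wars=5)
# Late era ("summer"): 90% of the stock is quickly accessible during a war.
late = stock_after_wars(1.0, accessible_fraction=0.9, burn_fraction=0.9, n_wars=5)
print(round(early, 3), round(late, 6))
```

With low accessibility, five wars barely dent the stock; with high accessibility, the same five wars consume nearly all of it.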

And even between the wars, we will need to watch out and stay prepared for war. If one uses negentropy to do stuff slowly and carefully, then the work that one can do with a given amount of negentropy is typically proportional to the inverse of the rate at which one does that work. This is true for computers, factories, pipes, drag, and much else. So ideally, the way to do the most with a fixed pot of negentropy is to do it all very slowly. And if the universe will last forever, that seems to put no bound on how much we can eventually do.
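The inverse-proportionality claim above can be written as a tiny model: if dissipation per unit of work grows linearly with rate, then the total work extractable from a fixed negentropy stock S at rate r scales as W = k·S/r. A minimal sketch, with an arbitrary constant k:

```python
def total_work(stock, rate, k=1.0):
    """Toy model: total work extractable from a fixed negentropy stock,
    when work per unit negentropy is proportional to 1/rate."""
    return k * stock / rate

fast = total_work(stock=100.0, rate=10.0)
slow = total_work(stock=100.0, rate=0.1)
print(fast, slow)  # running 100x slower yields 100x the total work
```

So with no lower bound on speed, total achievable work would be unbounded; the next paragraph explains why errors impose such a bound.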

Alas, given random errors due to cosmic rays and other fluctuations, there is probably a minimum speed for doing the most with some negentropy. So the amount we can eventually do may be big, but it remains finite. However, that optimal pace is probably many orders of magnitude slower than our current speeds, letting our descendants do a lot.

The problem is, descendants who go maximally slow will make themselves very vulnerable to invasion and theft. For an analogy, imagine how severe our site security problems would be today if any one person could temporarily “grow” and become as powerful as a thousand people, but only after a one hour delay. Any one intruder who grew while onsite could wreak havoc and then be gone within an hour, before local security forces could grow to respond. Similarly, when most future descendants run very slow, one who suddenly chose to run very fast might have an outsized influence before the others could effectively respond.

So the bottom line is that if war and theft remain possible for our descendants, the rate at which they do things will be much faster than the much slower most efficient speed. In order to adequately watch out for and respond to attacks, they will have to run fast, and thus more quickly use up their available stocks of resources, such as stars. And when their stocks run out, the future will have run out for them. Like in a Game of Thrones scenario after a long winter war, they would then starve.

Now it is possible that there will be future resources that simply cannot be exploited quickly. Such as perhaps big black holes. In this case some of our descendants could last for a very long time slowly sipping on such supplies. But their activity levels at that point would be much lower than their rates before they used up all the other faster-access resources.

Okay, let’s put this all together into a picture of the long term future. Today we are growing fast, and getting better at accessing more kinds of resources faster. Eventually our growth in resource use will reach a peak. At that point we will use resources much faster than today, and also much faster than what would be the most efficient rate if we could all coordinate to prevent war and theft. Maybe a billion times faster or more. Fearing war, we will keep spending to watch and prepare for war, and then every once in a while we would burn up most accessible resources in a big war. After using up faster access resources, we then switch to lower activity levels using resources that we just can’t extract as fast, no matter how clever we are. Then we use up each one of those much faster than optimal, with activity levels falling after each source is used up.

That is, unless we can prevent war and theft, our long term future is an unending winter, wherein we use up most of our resources in early winter wars, and then slowly die and shrink and slow and war as the winter continues, on to infinity. And as a result do much less than we could have otherwise; perhaps a billion times less or more. (Though still vastly more than we have done so far.) And this is all if we are lucky enough to avoid existential risk, which might destroy it all prematurely, leading instead to a fully-dead empty eternity.

Happy holidays.


Designing Crime Bounties

I’ve been thinking about how to design a bounty system for enforcing criminal law. It is turning out to be a bit more complex than I’d anticipated, so I thought I’d try to open up this design process, by telling you of key design considerations, and inviting your suggestions.

The basic idea is to post bounties, paid to the first hunter to convince a court that a particular party is guilty of a particular crime. In general that bounty might be paid by many parties, including the government, though I have in mind a vouching system, wherein the criminal’s voucher pays a fine, and part of that goes to pay a bounty. 

Here are some key concerns:

  1. There needs to be a budget to pay bounties to hunters.
  2. We don’t want criminals to secretly pay hunters to not prosecute their crimes.
  3. We may not want the chance of catching each crime to depend lots on one hunter’s random ability. 
  4. We want incentives to adapt, i.e., use the most cost-effective hunter for each particular case. 
  5. We want incentives to innovate, i.e., develop more cost-effective ways to hunt over time. 
  6. First hunter allowed to see a crime scene, or do an autopsy, etc., may mess it up for other hunters. 
  7. We may want suspects to have a right against double jeopardy, so they can only be prosecuted once.
  8. Giving many hunters extra rights to penetrate privacy shields may greatly reduce effective privacy.
  9. It may be a waste of time and money for several hunters to simultaneously pursue the same crime. 
  10. Witnesses may chafe at having to be interviewed by several hunters re the same events.

In typical ancient legal systems, a case would start with a victim complaint. The victim, with help from associates, would then pick a hunter, and pay that hunter to find and convict the guilty. The ability to sell the convicted into slavery and to get payment from their families helped with 1, but we no longer allow these, making this system problematic. Which is part of why we’ve added our current system. Victims have incentives to address 2-4, though they might not have sufficient expertise to choose well. Good victim choices give hunters incentive to address 5. The fact that victims picked particular hunters helped with 6-10. 

The usual current solution is to have a centrally-run government organization. Cases start via citizen complaints and employee patrols. Detectives are then assigned mostly at random to particular local cases. If an investigation succeeds enough, the case is given to a random local prosecutor. Using government funds helps with 1, and selecting high quality personnel helps somewhat with 3. Assigning particular people to particular cases helps with 6-10.  Choosing people at random, heavy monitoring, and strong penalties for corruption can help with 2. This system doesn’t do so well on issues 4-5. 

The simplest way to create a bounty system is to just authorize a free-for-all, allowing many hunters to pursue each crime. The competition helps with 2-5, but having many possible hunters per crime hurts on issues 6-10. One way to address this is to make one hunter the primary hunter for each crime, the only one allowed any special access and the only one who can prosecute it. But there needs to be a competition for this role, if we are to deal well with 3-5.

One simple way to have a competition for the role of primary hunter of a crime is an initial auction; the hunter who pays the most gets it. At least this makes sense when a crime is reported by some other party. If a hunter is the one to notice a crime, it may make more sense for that hunter to get that primary role. The primary hunter might then sell that role to some other hunter, at which time they’d transfer the relevant evidence they’ve collected. (Harberger taxes might ease such transfers.)
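The Harberger-tax idea mentioned above could look roughly like this: the current primary hunter self-declares a value for the role, pays a periodic tax proportional to that declaration, and must sell the role to anyone offering the declared price. This is a generic sketch of the standard mechanism, not a worked-out proposal; the names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class PrimaryHunterRole:
    holder: str
    declared_value: float   # self-assessed price at which the holder must sell
    tax_rate: float = 0.05  # periodic tax as a fraction of declared value

    def tax_due(self) -> float:
        # The tax discourages over-declaring just to block transfers.
        return self.tax_rate * self.declared_value

    def try_buy(self, buyer: str, offer: float, new_declared_value: float) -> bool:
        # Anyone may take over the role by paying the declared price;
        # evidence collected so far would transfer with the role.
        if offer >= self.declared_value:
            self.holder = buyer
            self.declared_value = new_declared_value
            return True
        return False

role = PrimaryHunterRole(holder="hunter_a", declared_value=1000.0)
print(role.tax_due())  # periodic tax owed by the current holder
role.try_buy("hunter_b", offer=1000.0, new_declared_value=1500.0)
print(role.holder)
```

The self-assessment keeps the role flowing toward whoever values it most, since under-declaring invites a forced sale and over-declaring raises the tax bill.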

Profit-driven hunters help deal with 3-5, but problem 2 is big if selling out to the criminal becomes the profit-maximizing strategy. That gets especially tempting when the fine that the criminal pays (or the equivalent punishment) is much more than the bounty that the hunter receives. One obvious solution is to make such payoffs a crime, and to reduce hunter privacy in order to allow other hunters to find and prosecute violations. But is that enough?

Another possible solution is to have the primary hunter role expire after a time limit, if that hunter has not formally prosecuted someone by then. The role could then be re-auctioned. This might need to be paired with penalties for making overly weak prosecutions, such as loser-pays on court costs. And the time delay might make the case much harder to pursue.

I worry enough about issue 2 that I’m still looking for other solutions. One quite different solution is to use decision markets to assign the role of primary hunter for a case. Using decision markets that estimate expected fines recovered would push hunters to accumulate track records showing high fine recovery rates. 
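A minimal sketch of that decision-market assignment: for each candidate hunter, a market estimates expected fine recovery conditional on that hunter becoming primary, and the role goes to the highest estimate, with trades in the markets for non-chosen hunters called off. The hunters and dollar figures here are invented.

```python
# Hypothetical conditional market estimates:
# E[fine recovered | hunter i is made primary], in dollars.
conditional_estimates = {
    "hunter_a": 40_000.0,
    "hunter_b": 55_000.0,
    "hunter_c": 30_000.0,
}

def assign_primary(estimates: dict) -> str:
    """Pick the hunter whose conditional market predicts the highest
    fine recovery; markets conditional on the others would be called off."""
    return max(estimates, key=estimates.get)

print(assign_primary(conditional_estimates))
```

A hunter known to take payoffs from criminals would attract low conditional estimates, and so would rarely win the role.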

Being paid by criminals to ignore crimes would hurt such track records, and thus such corruption would be discouraged. This approach could rely less on making such payoffs illegal and on reduced hunter privacy. 

The initial hunter assignment could be made via decision markets, and at any later time that primary role might be transferred if a challenger could show a higher expected fine recovery rate, conditional on their becoming primary. It might make sense to require the old hunter to give this new primary hunter access to the evidence they’ve collected so far. 

This is as far as my thoughts have gone at the moment. The available approaches seem okay, and probably better than what we are doing now. But maybe there’s something even better that you can suggest, or that I will think of later. 


Social Roles Make Sense

The modern world relies greatly on a vast division of labor, wherein we each do quite different tasks. Partially as a result, we live in different places, have different lifestyles, and associate with different people. The ancient world also had a division of labor, but in addition to doing different tasks, people tended to have expectations about what kinds of people would tend to do what kinds of tasks, live where, and associate with whom. Often strong expectations. Such expectations can be called social “roles”.

For example, in a society with “gender roles”, there are widely shared expectations regarding the kinds of tasks that women do, relative to men. In some societies these expectations have been so strong that all women were strongly and directly prevented from doing any other tasks. But more commonly, expectations could often be violated, if one paid a sufficient price. Similarly, ancient societies often had roles related to family, ethnicity, class, age, body plan, personality, and geographic location. People who started life with particular values of these parameters were channeled into particular tasks, places, training regimes, and associations, choices that tended to support their doing particular future tasks, with matching lifestyles, associations, etc.

When there is an existing pattern of what sorts of people tend to do what tasks and fill what social slots, then it is natural and cost-reducing to at least weakly use those patterns to predict what sorts of people will do well at what tasks in the near future. Furthermore, it is natural and cost-reducing to at least weakly use future task expectations to decide the locations, training, associations, etc., of people earlier in life.

It seems obvious to me that it is possible to have both overly weak and overly strong social roles. With overly strong social roles, we rely too much on initial expectations, experiment too little with alternate allocations, and act too little on any info we acquire about people as their lives progress. But with overly weak social roles, we rely too little on easily accessible info on what sorts of people are likely to end up well-suited to particular roles.

For example, consider climate roles. If you grow up in a particular climate, there’s a better than random chance that you will live in a similar climate when you are older. So it makes sense early in life for you to adapt to that climate in your habits and attitudes. When people are looking later for someone to live or work in that climate, it makes sense for them to prefer people already experienced with that climate. Part of this could be genetic, in that people with genes well suited to a climate may have been previously preferentially selected to live there. But it mostly doesn’t matter the cause; it just makes sense to respond to these patterns in the obvious way.

(Yes, sometimes one will want to pick people who seem especially badly-matched to certain tasks or context, just to experiment and check one’s assumptions about matching. But such experiments are unusual as choices.)

Of course the world may sometimes stumble into inefficient equilibria, wherein we keep tending to assign certain sorts of people to certain tasks, when we’d be even better off with some other pattern of who does what. In such cases we might try to break out of previous patterns, in part via discouraging people from using some features as cues to assigning some aspects of tasks, locations, associations, etc. This is one possible justification for “anti-discrimination” rules and laws.

But this certainly doesn’t justify a general prohibition on any sorts of social roles whatsoever. And any decisions based on theories saying that we were in inefficient equilibria should be periodically re-examined, to see if observed patterns of who seems to be good at what support such theories. We might have been mistaken. And unless there is some market failure that we must continually fight against, we should expect to need anti-discrimination rules only for a limited time, until new and better equilibria can be reached.

Yes, among the features that we can use to estimate who is fit for what roles, some of those features are easier for individuals to change, while others are harder to change. However, it isn’t clear why this distinction matters that much re the suitability of such features for task assignment. Even when features can change, there will be a cost of such changes, and so it will often be more cost-effective to use people who already have the suitable features, instead of getting other people to change to become suitable.

From a conversation with John Nye.


What Info Is Verifiable?

For econ topics where info is relevant, including key areas of mechanism design, and law & econ, we often make use of a key distinction: verifiable versus unverifiable info. For example, we might say that whether it rains in your city tomorrow is verifiable, but whether you feel discouraged tomorrow is not verifiable. 

Verifiable info can much more easily be the basis of a contract or a legal decision. You can insure yourself against rain, but not discouragement, because insurance contracts can refer to the rain, and courts can enforce those contract terms. And as courts can also enforce bets about rain, prediction markets can incentivize accurate forecasts on rain. Without that, you have to resort to the sort of mechanisms I discussed in my last post. 

Often, traffic police can officially pull over a car only if they have a verifiable reason to think some wrong has been done, but not if they just have a hunch. In the blockchain world, things that are directly visible on the blockchain are seen as verifiable, and thus can be included in smart contracts. However, blockchain folks struggle to make “oracles” that might allow other info to be verifiable, including most info that ordinary courts now consider to be verifiable. 

Wikipedia is a powerful source of organized info, but only info that is pretty directly verifiable, via cites to other sources. The larger world of media and academia can say many more things, via its looser and more inclusive concepts of “verifiable”. Of course once something is said in those worlds, it can then be said on Wikipedia via citing those other sources.

I’m eager to reform many social institutions more in the direction of paying for results. But these efforts are limited by the kinds of results that can be verified, and thus become the basis of pay-for-results contracts. In mechanism design, it is well known that it is much easier to design mechanisms that get people to reveal and act on verifiable info. So the long term potential for dramatic institution gains may depend crucially on how much info can be made verifiable. The coming hypocralypse may result from the potential to make widely available info into verifiable info. More direct mind-reading tech might have a similar effect. 

Given all this reliance on the concept of verifiability, it is worth noting that verifiability seems to be a social construct. Info exists in the universe, and the universe may even be made out of info, but this concept of verifiability seems to be more about when you can get people to agree on a piece of info. When you can reliably ask many different sources and they will all confidently tell you the same answer, we tend to treat that as verifiable. (Verifiability is related to whether info is “common knowledge” or “common belief”, but the concepts don’t seem to be quite the same.)

It is a deep and difficult question what actually makes info verifiable. Sometimes when we ask the same question to many people, they will coordinate to tell us the answer that we or someone wants to hear, or will punish them for contradicting. But at other times when we ask many people the same question, it seems like their best strategy is just to look directly at the “truth” and report that. Perhaps because they find it too hard to coordinate, or because implicit threats are weak or ambiguous. 

The question of what is verifiable opens an important meta question: how can we verify claims of verifiability? For example, a totalitarian regime might well insist not only that everyone agree that the regime is fair and kind, a force for good, but that they agree that these facts are clear and verifiable. Most any community with a dogma may be tempted to claim not only that their dogma is true, but also that it is verifiable. This can allow such dogma to be the basis for settling contract disputes or other court rulings, such as re crimes of sedition or treason.

I don’t have a clear theory or hypothesis to offer here, but while this was in my head I wanted to highlight the importance of this topic, and its apparent openness to investigation. While I have no current plans to study this, it seems quite amenable to study now, at least by folks who understand enough of both game theory and a wide range of social phenomena.  

Added 3Dec: Here is a recent paper on how easy mechanisms get when info is verifiable.


A New Truth Mechanism

Early in 2017 I reported:

This week Nature published some empirical data on a surprising-popularity consensus mechanism. The idea is to ask people to pick from several options, and also to have each person forecast the distribution of opinion among others. … Compared to prediction markets, this mechanism doesn’t require that those who run the mechanism actually know the truth later. … The big problem … however, is that it requires that learning the truth be the cheapest way to coordinate opinion. … I can see variations on [this method] being used much more widely to generate standard safe answers that people can adopt with less fear of seeming strange or ignorant. But those who actually want to find true answers even when such answers are contrarian, they will need something closer to prediction markets.
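The surprisingly-popular rule from that Nature paper can be stated compactly: pick the answer whose actual vote share most exceeds its average predicted vote share. A minimal sketch, with made-up numbers patterned on the paper's capital-of-Pennsylvania example:

```python
def surprisingly_popular(votes: dict, predicted: dict) -> str:
    """Return the answer whose actual popularity most exceeds
    the crowd's average prediction of its popularity."""
    return max(votes, key=lambda a: votes[a] - predicted[a])

# "Is Philadelphia the capital of Pennsylvania?": most respondents say yes,
# but those who know better (it is Harrisburg) also predict most will say yes.
votes     = {"yes": 0.65, "no": 0.35}   # actual vote shares
predicted = {"yes": 0.75, "no": 0.25}   # average predicted vote shares

print(surprisingly_popular(votes, predicted))
```

Here "no" wins: it is less popular than "yes", but more popular than the crowd predicted, which is the mechanism's signal that the minority holds extra information.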

In a new mechanism by Yuqing Kong, N agents simultaneously and without communication give answers to T questions, each of which has C possible answers. The clues that agents have about each question can be arbitrarily correlated, and agents can have differing priors about that clue distribution. However, clues must be identically and independently distributed (IID) across questions. If T ≥ 2C and N ≥ 2, then in this new mechanism telling the “truth” (i.e., answer indicated by clue) is a dominant strategy, with a strictly higher payoff if anyone else also tells the truth!

This is a substantial advance over the prior literature, and I expect future mechanisms to weaken the IID across questions constraint. Alas, even so this seems to suffer from the same key problem of needing truth to be the cheapest way for respondents to coordinate answers. I expect this problem to be much harder to overcome.

Of course if you add “truth speakers” as some of the agents, and wait for those speakers’ input before paying the other participants, you get something much closer to a prediction market.


Occam’s Policy Razor

Nine experiments provide support for promiscuous condemnation: the general tendency to assume that ambiguous actions are immoral. Both cognitive and functional arguments support the idea of promiscuous condemnation. (More)

The world is full of inefficient policies. But why, when many can simply and clearly explain why such policies are inefficient? The following concrete example suggests a simple explanation:

Logically, it doesn’t seem cruel to offer someone an extra option, if you don’t thereby change their other options. Two thirds of poll respondents agree re this prisoner case. However, 94% also think that the world media would roast any nation that did this, and they’d get away with it. And I agree with these poll respondents in both cases.

Most of the audience of that world media would not be paying close attention, and would not care greatly about truth. They would instead make a quick and shallow calculation: will many find this accusation innately plausible and incendiary enough to stick, and would I like that? If the answer is yes, they add their pitchforks to the mob. That’s the sort of thing I’ve seen with internet mobs lately, and also with prior media mobs.

As most of the world is eager to call the United States an evil empire driven by evil intent, any concrete U.S. support for torture might plausibly be taken as evidence for such evil intent, at least to observers who aren’t paying much attention. So even those who know that in such cases allowing torture can be better policy would avoid supporting it. Add in large U.S. mobs who are also not paying attention, and who might like to accuse U.S. powers of ill intent, and we get our situation where almost no one is willing to seriously suggest that we offer torture substitutes for prison. Even though that would help.

Similar theories can explain many other inefficient policies, such as laws against prostitution, gambling, and recreational drugs. We might know that such policies are ineffective and harmful, and yet not be able to bring ourselves to publicly support ending such bans, for fear of being accused of bad intent. This account might even explain policies to punish the rich, big business, and foreigners. The more that contrary policies could be spun to distracted observers as showing evil intent, the more likely such inefficient policies are to be adopted.

Is there any solution? Consider the example of Congress creating a commission to recommend which U.S. military bases to close, where afterward Congress could only approve or reject the whole list, without making changes. While bills to close individual bases would have been met with fierce difficult-to-overcome opposition, this way to package base closings into a bundle allowed Congress to actually close many inefficient bases.

Also consider how a nation can resist international pressure to imprison one disliked person, or to censor one disliked book. In the first case the nation may plead “we follow a general rule of law, and our law has not yet convicted this person”, while in the second case the nation may plead “We have adopted a general policy of free speech, which limits our ability to ban individual books.”

I see a pattern here: simpler policy spaces, with fewer degrees of freedom, are safer from bias, corruption, special-pleading, and selfish lobbying. A political system choosing from a smaller space of possible policies that will then apply to a large range of situations seems to make more efficient choices.

Think of this as Occam’s Policy Razor. In science, Occam’s Theory Razor says to pick the simplest theory that can fit the data. Doing this can help fractious scientific communities to avoid bias and favoritism in theory choice. Similarly, Occam’s Policy Razor says to limit policy choices to the smallest space of policies which can address the key problems for which policies are needed. More complexity to address complex situation details is mostly not worth the risk. This policy razor may help fractious political communities to avoid bias and favoritism in policy choice.

Yes, I haven’t formalized this much, and this is still a pretty sloppy analysis. And yes, there are in fact many strong criticisms of Occam’s Razor in science. Even so, it feels like there may be something to this. And futarchy seems to me a good example of this principle. In a futarchy with a simple value function based on basic outcomes like population, health, and wealth, voting on values but betting on beliefs would probably mostly legalize things like prostitution, gambling, recreational drugs, immigration, and big business. It would probably even let prisoners pick torture.

Today we resist world mob disapproval regarding particular people we don’t jail, or particular books we don’t ban, by saying “Look we have worked out general systems to deal with such things, and it isn’t safe for us to give some folks discretion to make exceptions just because a mob somewhere yells”. Under futarchy, we might similarly resist world disapproval of our prostitution, etc. legalization by saying:

Look, we have chosen a simple general system to deal with such things, and we can’t trust giving folks discretion to make policy exceptions just because talking heads somewhere scowl. So far our system hasn’t banned those things, and if you don’t like that outcome then participate in our simple general system, to see if you can get your desired changes by working through channels.

By limiting ourselves to simple general choices, we might also tend to make more efficient choices, to our overall benefit.


Prestige Blocks Reform

At several recent conferences, I suggested to the organizers that I talk about social institution innovation, but they preferred I talk about my tech related work (or not talk at all). At those events they did have other people talk about social reforms and innovations, and all those speakers were relatively high status people with a background in “hard” sciences (e.g., physics or computer science). And to my eyes, their suggestions and analysis were amateurish.

Curious about this pattern, I did these Twitter polls:

So while more of us would rather hear about social analysis from a social expert, more of us would rather hear about social reform proposals from prestigious hard scientists. This makes sense if we see reform as a social coordination game: if we only want to support reforms that we expect to be supported by many high status folks, we need high status advocates to be our focal points to get the ball rolling.

Alas, since hard scientists tend to know little social science and to think little of social scientists, the reforms they suggest tend to be low quality, at least by social scientist standards. Furthermore, since prestige-driven social systems have done well for them personally, and are said to do well in running their hard science world, they will tend to promote such systems as reforms. Alas, I think replacing such systems should be one of our main social reform priorities.


Firms & Cities Have Open Borders

Cities usually don’t much limit who can move there. If you can find someone in a city to give you a job, to rent you an apartment, to sell you food and other stuff, and to be your friends, etc. and if you can pay for your move, then you can move to that city from anywhere in a much larger region. Of course individual employers, landlords, and stores are mostly free to reject you, but the city doesn’t add much in the way of additional requirements. Same for other units smaller than a nation, such as counties and states.

Large firms also don’t usually much limit who can work there. Oh, each small work group is usually choosy about who works there, but the larger firm will mostly defer to local decisions about hires. Yes, if the larger firm has committed to trying never to fire anyone, and to always find someone another place in the firm when they are no longer wanted in any one part, then that larger firm may put more limits on who and how many folks any one small group can hire. But when the larger firm has few obligations to local workers, local groups are mostly free to hire whom they want.

The obvious analogies between cities, firms and nations make it somewhat puzzling that nations are much more eager to limit who can enter them. The analogy is strongest when those who enter nations can only do so in practice if they can find local employers, landlords, suppliers, friends, etc. willing to deal with them. And when the nation assigns itself few obligations to anyone who happens to live there.

Yes, in principle there can be externalities whereby the people who enter one part of a nation affect the enjoyment and productivity of people in other parts of that nation. But those same sorts of effects should also appear within parts of a city or of a firm. So why don’t cities and firms work harder to limit local choices of who can enter them?

You might claim that cities and firms don’t need to attend to limiting entry because nations already do it for them. But most nations already have a lot of internal variation; why is none of that variation of interest to cities and firms, when the variation between nations would supposedly be of huge interest if nations were not handling it? And the firms and cities within the nations that supposedly hold all those bad people, the ones you think good nations are focused on excluding, don’t have firm- or city-wide exclusion policies either.

Furthermore, many multinational firms today already have employees who are spread across a great many nations, nations that vary a lot in wealth, development, etc. Yet such multinationals usually don’t have much in the way of centralized limits on who can be hired by their divisions in different nations, nor on who can be transferred between such divisions. These firms may face limits imposed by nations, but they seem to mostly lament such limits, and aren’t eager to add more.

Firms and cities live in more competitive environments than do nations. So we should expect their behavior to be shaped more by competitive pressures. Thus we can conclude that competitive entities tend not to create entity-wide limits on who can enter them; they are mostly content to let smaller parts of them make those decisions.

So if nations act differently from firms and cities, that should be because either:
1) there are big important effects that are quite different at the national level, than at firm and city levels, or
2) nations are failing to adopt policies that competition would induce, if they faced more competition.

My bet is on the latter. In that case, the key question is: is there a market failure that makes the entry policies that competition pushes lamentable? If not, we should lament that competition isn’t inducing more free entry into nations. That is, we should lament that competition isn’t inducing national open borders, like we mostly have for cities and firms.


Why Not Also Punish False Praise?

I recently read on social media praise for someone I know, someone about whom I know some negative things. I realized that if I posted my negative comments, those would be held to much higher standards than are positive comments. I might be sued for defamation, and many would apply a social norm to me which demands that one defend negative comments with concrete supporting evidence. We don’t have such a norm regarding positive comments.

While the Romans allowed one to sue for damages when someone defamed you even by saying true things, we today only allow such suits when someone says false negative things, although at common law the burden of proof is on the person accused of defamation to prove their negative claim true. The message is: don’t say negative things about others in public if you can’t prove them in court.

Presumably the reason we now allow suits for false defamation is that we see a net social harm there; others are liable to be misled, causing misallocations of resources and relations. In addition, resources may be wasted in back-and-forth defamation battles. But it seems to me that we should also expect similar social harms to result from false positive comments, not just false negative comments. So maybe we should consider having law discourage those as well.

With negative comments it is the defamer who pays the person defamed, even though it is the larger society that in fact suffers the net social harm. The person defamed is just a convenient party we give an incentive to sue. But defamation law would serve a similar social function if we turned it into a bounty, where anyone could sue and collect it. So an obvious option for false positive comments would be to make that into a bounty.

It seems counterproductive to expect the person who is falsely praised to sue over that praise. Their incentive is weak, and if they win they gain twice, from the false claim and from the suit. So my proposal is: let anyone sue over a false positive claim, with the first person to succeed gaining a bounty equal to the court’s estimate of the gain that resulted from the false praise. Again, put the burden of proof on the person who made the claim. So just as with defamation today, the bounty hunter would have to show some substantial net monetary-equivalent gain to the person who was falsely praised, and that amount could be awarded to that hunter.

Yes, in our world where false praise isn’t punished there’s a lot of it, which isn’t believed so much, and thus each instance causes less harm. But that would also be true if we didn’t allow suing for defamation; a lot more criticism would happen, which would be believed less. If this isn’t a reason to allow defamation, it isn’t a reason not to allow suits against false praise.

Of course, I don’t expect people to leap to implement my proposal. I offer it as a thought experiment, to help us think about *why* we don’t like it, even though its justification seems about as strong as our usual justification for allowing defamation lawsuits. Why is false praise seen as so much less harmful than false criticism?
