Author Archives: Robin Hanson

For Fast Escaped Pandemic, Max Infection Date Variance, Not Average

In an open column, … to provide greater dispersion, the vehicle distance varies from 50 to 100 meters, … distance between dismounted soldiers varies from 2 to 5 meters to allow for dispersion and space for marching comfort. (More)

The troop density has decreased through military history in proportion to the increase in lethality of weapons being used in combat. (More)

Armies moving in hostile areas usually spread out, as concentrations create attractive targets for enemy fire. For soldiers on foot, it might be possible to try to induce such dispersion by having a vicious wild animal chase them. After all, in the process of running fast to escape, they might spread out more than they otherwise might. But this would be crazy – there’s no reason to think this would induce just the right level of dispersion, and it would have many bad side effects. Better just to order soldiers to deliberately space the right distance. 

For a very infectious pandemic like COVID-19, clearly not contained and with no strong treatment likely soon, the fact that medical resources get overwhelmed toward a pandemic peak creates a big value in dispersion – spreading out infection dates. But, alas, our main method is that crazy “chased by a wild animal” approach, in this case chased by the virus itself. 

That is, each person tries to delay their infection as long as possible, in part via socially destructive acts like staying home instead of working. Like soldiers running from a wild animal, our varying efforts at delay do create some variance as a side effect. But probably less than optimal variance, and at great cost. 
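To see why variance in infection dates matters here, consider a minimal numerical sketch (all parameters are invented for illustration, not fitted to COVID-19): infection dates follow a bell curve, daily treatment capacity is fixed, and we compute the fraction of infections that land on days when capacity is exceeded. Holding the mean infection date fixed, more variance means fewer untreated cases.

```python
import math

def untreated_fraction(sigma, mean=100.0, capacity=0.01, days=365):
    """Fraction of all infections exceeding a fixed daily treatment
    capacity, when infection dates are normally distributed with the
    given mean and standard deviation (both in days)."""
    excess = 0.0
    for d in range(days):
        # density of infections landing on day d
        p = math.exp(-0.5 * ((d - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        excess += max(0.0, p - capacity)
    return excess

for sigma in (10, 20, 40):
    print(f"std dev {sigma:3d} days -> untreated fraction ~ {untreated_fraction(sigma):.2f}")
```

With these illustrative numbers, a standard deviation of 40 days keeps the daily caseload under capacity entirely, while a standard deviation of 10 days leaves over half of infections untreated; the mean infection date is the same in every case.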

Yes, delay has some value in allowing more stockpiling. For example, we should be (but apparently aren’t) mass training more medical personnel who can function in makeshift ICU tents. But increasing average delay can be less valuable than increasing delay variance. Even if we can’t just tell each person when to get infected, like telling soldiers where to walk, we have several relevant policy levers. 

First, as I’ve discussed before, we might pay people to be deliberately exposed, and cover the cost of their medical treatment and quarantine until recovery. Yes, if their immunity has a limited duration, then we might want to not start deliberate exposure until there’s less than that duration before the pandemic peak. But there’s still big potential value here, especially via targeting medical and critical infrastructure workers. 

Second, this is a situation where inequality of wealth, health, and social connections is good. In the last few years, many have loudly lamented the many kinds of social inequalities that make the low feel ashamed and unloved, resulting in their more often becoming lonely and sick. Some have enough friends and money that they can afford to go to all the parties, while others suffer in poverty alone. And no doubt many will cry loudly when such inequality makes the low get infected before the high.

But however bad such inequality might usually be, in a pandemic it is exactly what the doctor should order, if he could. Among a community close enough to share the same medical resources, the more that individuals vary in their likeliness of catching and passing on the pandemic, the better! Those who catch it early or late will do better than those who catch it just at the peak.  So for this pandemic, let’s maybe back off on whatever we now do to cut inequality, and maybe even open up more to whatever we are not doing that could increase inequality. 
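The claim that heterogeneity helps can be checked with a toy two-group SIR model, a sketch with made-up parameters: both scenarios have the same average susceptibility, but in one scenario susceptibility varies across the two groups. The more-susceptible group burns out early, which lowers the later peak.

```python
def peak_infected(suscept, beta=0.3, gamma=0.1, days=600):
    """Peak total infected fraction in a multi-group SIR model where
    equal-sized groups differ only in susceptibility (mean = 1)."""
    n = len(suscept)
    s = [1.0 / n - 1e-4 / n] * n   # susceptible fraction per group
    i = [1e-4 / n] * n             # infected fraction per group
    peak = 0.0
    for _ in range(days):
        i_total = sum(i)
        new = [beta * sig * sk * i_total for sig, sk in zip(suscept, s)]
        s = [sk - nk for sk, nk in zip(s, new)]
        i = [ik + nk - gamma * ik for ik, nk in zip(i, new)]
        peak = max(peak, sum(i))
    return peak

print(f"homogeneous:   peak = {peak_infected([1.0, 1.0]):.3f}")
print(f"heterogeneous: peak = {peak_infected([0.5, 1.5]):.3f}")
```

The direction of the effect, not the particular numbers, is the point: holding mean susceptibility fixed, more variance across individuals means a lower peak demand on shared medical resources.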

In my next post, I’ll describe some simple concrete sim models supporting these claims.


Why Big Implicit Deals?

It takes some effort to formally write, review, and sign contracts. And to have courts enforce them. So it makes sense to not bother for deals that are too small, or that are observed and repeated enough for reputation or repeated play incentives to be sufficient. But we have a few big deals in life where these don’t apply, and yet we still don’t tend to write explicit formal contracts, nor allow negotiated exceptions. 

For example, when getting married, joining a religion, joining a profession, or becoming a citizen. We tend to talk about such things as if they were deals, especially when criticizing folks who seem to have reneged. But we aren’t very formal or clear about what exactly one is agreeing to in such cases, and we discourage the negotiation of variations on standard deals. Marriage prenups are frowned on, and often not enforced by courts. Even though people sometimes pray “God, please, if you’ll do this, then I’ll do that,” theologians offer little hope for such deals. And professions and states almost never allow negotiated alterations to their standard deals.

Yes, these can be complex relations, and it is hard for explicit contracts to cover all relevant cases or details. But that is also true for business deals, where we do typically make explicit, if far from complete, contracts. Yes, by forgoing formal contracts we can signal confidence in our shared good will and emotional inclinations to make good on our promises. But that is also true about business deals. 

Yes, implicit deals better support hypocrisy, wherein we pretend to promise things that we probably won’t deliver. But there is much hypocrisy in business too. Yes, by preferring standardized conformist deals, we tend to avoid nonconformists, who on average are more error-prone and less capable. But that is also true in business.

The biggest difference I see between typical business and other deals is that business relations often have a lot more contextual variation that can be usefully addressed via explicit negotiated contract terms. There are so very many kinds of business deals for so many different situations. In contrast, marriages are more alike; the couples I see making explicit marriage contracts are those with unusual tastes or situations. Religions, professions, and states have less need of differing deals for differing members, and they fear some secretly getting better deals than others.

Thus it makes sense that it is mainly in business relations that we usually pay the many real costs to create formal explicit contracts. Thus people who want to disrespect, hurt and tax business can more safely achieve that via adjusting contract law, without risking much harm to these other types of deals, for which they have more respect. 


Plot Holes & Blame Holes

We love stories, and the stories we love the most tend to support our cherished norms and morals. But our most popular stories also tend to have many gaping plot holes. These are acts which characters could have done instead of what they did do, to better achieve their goals. Not all such holes undermine the morals of these stories, but many do.

Logically, learning of a plot hole that undermines a story’s key morals should make us like that story less. And for a hole that most everyone actually sees, that would in fact happen. This also tends to happen when we notice plot holes in obscure unpopular stories.

But this happens much less often for widely beloved stories, such as Star Wars, if only a small fraction of fans are aware of the holes. While the popularity of the story should make it easier to tell most fans about holes, fans in fact try not to hear, and punish those who tell them. (I’ve noticed this re my sf reviews; fans are displeased to hear beloved stories don’t make sense.)

So most fans remain ignorant of holes, and even fans who know mostly remain fans. They simply forget about the holes, or tell themselves that there probably exist easy hole fixes – variations on the story that lack the holes yet support the same norms and morals. Of course such fans don’t usually actually search for such fixes, they just presume they exist.

Note how this behavior contrasts with typical reactions to real world plans. Consider when someone points out a flaw in our tentative plan for how to drive from A to B, how to get food for dinner, how to remodel the bathroom, or how to apply for a job. If the flaw seems likely to make our plan fail, we seek alternate plans, and are typically grateful to those who point out the flaw. At least if they point out flaws privately, and we haven’t made a big public commitment to plans.

Yes, we might continue with our basic plan if we had good reasons to think that modest plan variations could fix the found flaws. But we wouldn’t simply presume that such variations exist, regardless of flaws. Yet this is mostly what we do for popular story plot holes. Why the different treatment?

A plausible explanation is that we like to love the same stories as others; loving stories is a coordination game. Which is why 34% of movie budgets were spent on marketing in ’07, compared to 1% for the average product. As long as we don’t expect a plot hole to put off most fans, we don’t let it put us off either. And a plausible partial reason to coordinate to love the same stories is that we use stories to declare our allegiance to shared norms and morals. By loving the same stories, we together reaffirm our shared support for such morals, as well as other shared cultural elements.

Now, another way we show our allegiance to shared norms and morals is when we blame each other. We accuse someone of being blameworthy when their behavior fits a shared blame template. Well, unless that person is so allied to us or prestigious that blaming them would come back to hurt us.

These blame templates tend to correlate with destructive behavior that makes for a worse (local) world overall. For example, we blame murder and murder tends to be destructive. But blame templates are not exactly and precisely targeted at making better outcomes. For example, murderers are blamed even when their act makes a better world overall, and we also fail to blame those who fail to murder in such situations.

These deviations make sense if blame templates must have limited complexity, due to being socially shared. To support shared norms and morals, blame templates must be simple enough so most everyone knows what they are, and can agree on if they match particular cases. If the reality of which behaviors are actually helpful versus destructive is more complex than that, well then good behavior in some detailed “hole” cases must be sacrificed, to allow functioning norms/morals.

These deviations between what blame templates actually target, and what they should target to make a better (local) world, can be seen as “blame holes”. Just as a plot may seem to make sense on a quick first pass, with thought and attention required to notice its holes, blame holes are typically not noticed by most who only work hard enough to try to see if a particular behavior fits a blame template. While many are capable of understanding an explanation of where such holes lie, they are not eager to hear about them, and they still usually apply hole-plagued blame templates even when they see their holes. Just like they don’t like to hear about plot holes in their favorite stories, and don’t let such holes keep them from loving those stories.

For example, a year ago I asked a Twitter poll on the chances that the world would have been better off overall had Nazis won WWII. 44% said that chance was over 10% (the highest category offered). My point was that history is too uncertain to be very sure of the long term aggregate consequences of such big events, even when we are relatively sure about which acts tend to promote good.

Many then said I was evil, apparently seeing me as fitting the blame template of “says something positive about Nazis, or enables/encourages others to do so.” I soon after asked a poll that found only 20% guessing it was more likely than not that the author of such a poll actually wishes Nazis had won WWII. But the other 80% might still feel justified in loudly blaming me, if they saw my behavior as fitting a widely accepted blame template. I could be blamed regardless of the factual truth of what I said or intended.

Recently many called Richard Dawkins evil for apparently fitting the template “says something positive about eugenics” when he said that eugenics on humans would “work in practice” because “it works for cows, horses, pigs, dogs & roses”. To many, he was blameworthy regardless of the factual nature or truth of his statement. Yes, we might do better to instead use the blame template “endorses eugenics”, but perhaps too few are capable in practice of distinguishing “endorses” from “says something positive about”. At least maybe most can’t reliably do that in their usual gossip mode of quickly reading and judging something someone said.

On reflection, I think a great deal of our inefficient behavior and policies can be explained via limited-complexity blame templates. For example, consider the template:

Blame X if X interacts with Y on dimension D, Y suffers on D, no one should suffer on D, and X “could have” interacted so as to reduce that suffering more.

So, blame X who hires Y for a low wage, risky, or unpleasant job. Blame X who rents a high price or peeling paint room to Y. Blame food cart X that sells unsavory or unsafe food to Y. Blame nation X that lets in immigrant Y who stays poor afterward. Blame emergency room X who failed to help arriving penniless sick Y. Blame drug dealer X who sells drugs to poor, sick, or addicted Y. Blame client X who buys sex, an organ, or a child from Y who would not sell it if they were much richer.

So a simple blame template can help explain laws on min wages, max rents, job & room quality regs, food quality rules, hospital care rules, and laws prohibiting drugs, organ sales, and prostitution. Yes, by learning simple economics many are capable of seeing that these rules can actually make targets Y worse off, via limiting their options. But if they don’t expect others to see this, they still tend to apply the usual blame templates. Because blame templates are socially shared, we each tend to be punished for deviating from them, either by violating them or by failing to disapprove of violators.

In another post soon I hope to say more about the role of, and limits on, simplified blame templates. For this post, I’m content to just note their central causal roles.

Added 8am: Another key blame template happens in hierarchical organizations. When something bad seems to happen to a division, the current leader takes all the blame, even if they only recently replaced a prior leader. Rising stars gain by pushing short term gains at the expense of long term losses, and by being promoted fast enough so as not to be blamed for those losses.

Re my deliberate exposure proposal, many endorse a norm that those who propose policies intended to combine good and bad effects should immediately cause themselves to suffer the worst possible bad effects personally, even in the absence of implementing their proposal. Poll majorities, however, don’t support such norms.


Deliberate Exposure Intuition

Many have expressed skepticism re my last post on controlled exposure. So let me see if I can’t communicate my intuition more clearly, so we can all examine it more carefully.

Assume we have a virus like COVID-19, highly infectious and substantially deadly, not blocked or cured by any known or soon-coming treatment. It takes up to 2+ weeks from exposure to death or recovery, and advanced medical resources like ICUs can cut death rates. Even with unusually strong quarantine efforts, COVID-19 currently seems to be escaping from its initial region and nation, doubling roughly every week. Even if that growth rate falls by a factor of three on average, it will reach most of the world within a year.

At which point roughly half of the world who isn’t immune gets infected over a perhaps two week period. Medical resources are completely overwhelmed, so ICUs save only a few. And the world economy takes a huge hit; for perhaps months before that point most workers have stayed home from work in an eventually futile effort to avoid exposure. At the worst possible moment, food, trash, cleaning, heating, and cooling may be scarce, increasing the fraction of sick who die.

To deal with this crisis, there are two key kinds of resources: medicine and isolation. With limited medical resources, including medical workers, we can treat the sick, and cut their chance of dying. We also have a limited set of quarantine resources, i.e., places where we can try to isolate people, places that vary in their health support and in their rate of infection leakage in and out. If we put the more likely infected into stronger isolation, that slows the disease spread.

Consider three different policy scenarios, based on three different policy priorities.

First, consider a policy that prioritizes immediate-treatment. This is a common priority in our medical systems today. Each day, medical and quarantine resources are devoted to the individuals for whom they seem most effective in keeping that person alive over the next few days. So hospital ICUs hold the patients whom ICUs can most help now. And the best quarantine locations are allocated to the apparently not-infected at most risk of dying if infected. (Such as the old.) Workers are allowed to stay home from work if they think that will increase personal safety.

In this scenario, medical and critical infrastructure workers may not be given priority in quarantines or medical treatment. So medical workers are culled earlier than others due to their extra contact with the sick, and most medical workers may be sick or staying home near the peak of the epidemic, which is a pretty sharp peak. Most workers in critical infrastructure may be home then too, and may have been there a while. Worse, by allocating isolation resources according to risk of dying if infected, a treatment-focused policy does little to slow the disease spread.

Next, consider a policy that more prioritizes containment. This is the usual priority of public health today facing a new contagious disease. Here more people become more isolated, and the best isolation resources are allocated much more to those most likely to be recently infected, not to those most likely to die if infected. Efforts may be made here to isolate medical workers, even if that results in worse individual treatment.

This priority can make sense given a substantial chance that the disease can be stopped from spreading beyond an initial area. Even if spread seems inevitable eventually, a containment priority also makes sense if that policy makes an effective treatment substantially more likely to be found before this disease spreads to most everyone. Or if more medical or isolation resources can be created in the extra time. Hope springs eternal, and it feels good to assume the best and act on hope.

But what if there is little hope of containing or treating the disease before most everyone is exposed? And what if getting sick and then recovering often gives someone substantial immunity to the disease for a period? After all, if everyone is constantly exposed, the recovered quickly get sick again, and this infection has high mortality, then death is coming soon no matter what. So we must hope for some immunity.

For this situation, consider a policy that prioritizes long-term treatment-resources. Most everyone will be exposed within a year or so, and unless they are immune they will get sick, at which point their chance of recovery instead of death should depend on medical resources, and critical infrastructure, at that time. So this policy seeks to create a pattern of isolation, and possibly deliberate exposure, to increase the average resources available to help people recover when they are sick.

The obvious problem here is that the above scenarios can have a pretty sharp peak in infection rates, overwhelming medical resources at that point in time. And workers who stay home also threaten the availability of other critical infrastructure resources. Yes, if containment slows the rate of growth of the disease, it also spreads out the time period of peak infection by a similar factor. But that could still be a pretty short period.

Relative to the containment policy, this long term resource policy would seek to move the time of infection of many people from near the peak, to substantially earlier than the peak. Moving to later than the peak is not possible, if we’ve been containing as much as possible. And the obvious way to infect people earlier is to directly expose them, on purpose.

Of course directly exposing people won’t help spread out the peak if the people exposed are isolated to the same average degree as people are in the containment scenario. That would instead just move the peak to an earlier point in time, and perhaps even make it sharper, by making the disease spread faster. So this long-term treatment policy would have to involve infecting some people deliberately, while giving them much higher than average quality isolation. If their isolation were very good, then they’d use medical resources at an earlier point in time when such resources are more available, without adding much to the overall growth of the disease.
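Here is a toy discrete-time SIR sketch of that logic (parameters are invented for illustration, not fitted to COVID-19): deliberately exposing a fraction of the population early, under isolation strong enough that they infect no one else, is modeled as moving that fraction straight to the recovered pool before the main wave. This shrinks the susceptible pool and lowers the later peak.

```python
def peak_infected(pre_immune=0.0, beta=0.3, gamma=0.1, days=400):
    """Peak infected fraction in a simple SIR model, where pre_immune
    is the fraction deliberately exposed and recovered (in strong
    isolation) before the main wave begins."""
    s, i, r = 1.0 - pre_immune - 1e-4, 1e-4, pre_immune
    peak = 0.0
    for _ in range(days):
        new = beta * s * i          # new infections this day
        s, i, r = s - new, i + new - gamma * i, r + gamma * i
        peak = max(peak, i)
    return peak

print(f"peak infected, no early exposure: {peak_infected(0.0):.3f}")
print(f"peak infected, 10% exposed early: {peak_infected(0.1):.3f}")
```

This sketch only captures the peak-load effect; it ignores the medical resources the early-exposed themselves use, which is why the text stresses exposing them early, when such resources are still plentiful.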

Now, if good isolation resources, and medical resources, were already strained dealing with a flux of likely infected from outside, then there might be little point in adding new infected on purpose. But what if there are many good isolated places not being fully used to deal with folks very likely to have been exposed, and medical resources also not fully used? What if recovered folks had little risk of infecting others for a period, and we were closer in time to the peak than that average period between reinfections? Well, that’s when we might be tempted to deliberately expose some, and then to strongly isolate them.

One key idea here is to create a stronger correlation between the strength of isolation of a place and the likelihood that people there are infected. Such a strong correlation allows us to create a population of already recovered folks who are at least temporarily immune. And that can spread out the period of peak infection, so that more medical resources are available to treat the sick. And that can cut the average mortality rate, which means that more people don’t die.

People who work in medicine and critical infrastructure seem especially promising candidates for early deliberate exposure. This is because after recovery they become more available to work during the peak infection period. They are not sick then, and are less afraid of being exposed then, making it easier to persuade them to go to work. The other set of promising candidates are those most likely to die without sufficient help, which seems to be men and especially the old.

And that’s the intuition behind deliberate exposure. Its wisdom depends on some parameters of which we are unsure, and may learn more about soon. So it seems clearer that we should think more about such options than that we should pull the trigger to start one now. And there are substantial challenges in organizing such a policy fast enough, and in gaining sufficient public and elite support to allow it. Maybe this can’t work this time, and must wait until another big pandemic.

But contrary to many loud and rude commenters lately, this option isn’t crazy. And within the next year we may come to see and suffer the full consequences of not working harder to spread out a pandemic peak of maximum infection and medical need.

Added 2p: The obvious easy win policy solution (given that key assumptions hold) here is just to make it easy for people to volunteer for (1) exposure to virus, (2) strong 24 day quarantine, (3) medical help while there, (4) regular checkups afterward. Create a place where people can go to do this, an easy way to sign up legally, and pay to expose and house them there. Maybe even pay them extra if they work in medicine or critical infrastructure. (Btw, as such an option isn’t now available, and I don’t work in critical infrastructure, it wouldn’t help society much for me to just “go infect yourself”, as many have suggested in so many colorful ways. And as I don’t own my family, I can’t volunteer them.)


Consider Controlled Infection

In many places long ago, in families with many kids, as soon as one kid caught an illness, parents would put the other kids in close contact, so they could all catch it at once. Because it was less trouble to care for all the kids in a family at once than to care for them one at a time.

Should we also consider controlled infection to deal with our current pandemic? Like controlled burns that prevent later larger fires, it might be a good idea to expose some people early on purpose.

Today a coronavirus is spreading rapidly across China, and the world, and many are trying hard to resist that spread. One obvious reason to resist is the hope that the spread can be completely stopped, limiting how many are exposed. However, once a contagious enough virus has spread to enough people and places, this scenario becomes quite unlikely; the virus will soon spread to most everywhere that isn’t highly isolated.

Unfortunately, we are probably already past this point of no return with coronavirus. It seems to spread easily, apparently including via people who are contagious but don’t show symptoms. It already seems to have spread from its initial region to infect many people in a great many other Chinese cities and regions (thousands infected, dozens dead). And that’s with keeping everyone home from work, which can’t last much longer. Once this virus comes to infect most of China, it seems hard to imagine a strong enough China wall (a 24-day quarantine for everyone leaving) to keep it from spreading further. Especially since China & WHO are arguing against such a wall, and we already have confirmed a few hundred cases outside China; they’ve doubled every week for four weeks.

Another reason to resist virus spread is in the hope that a vaccine (or other effective treatment) will be available before it spreads everywhere, stopping the spread at that point. There’s some hope for a drug soon to prevent infections, but the odds are poor and if that doesn’t work prospects are dim. Alas “typically, making a new vaccine takes a decade or longer”, and estimates for this case are at least 18 months. That doesn’t include time to manufacture and distribute it, once we know how to make it.

As of yesterday, total known deaths were 1384, a number that’s had a 6 day doubling time lately. (A very different method estimates 7 day doubling.) At that rate, in four months deaths go up by a factor of a million, which is basically the whole planet. So unless long-term growth rates slow by more than a factor of four, there’s probably not time for a vaccine to save us.
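The arithmetic behind that factor-of-a-million claim, as a quick check:

```python
# With a 6-day doubling time, four months (~120 days) gives
# 120 / 6 = 20 doublings, i.e. growth by a factor of 2**20,
# which is about 1.05 million.
days = 120
doubling_time = 6
factor = 2 ** (days / doubling_time)
print(f"Growth factor over {days} days: {factor:,.0f}")
```

At a 7-day doubling time the same four months give about 2**17, still a factor of roughly 130,000, so the qualitative conclusion is not sensitive to the exact doubling estimate.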

If the virus spreads to most of the world, so most everyone is exposed, then the fraction of the world that dies depends on how deadly the virus is, which we just don’t know and can’t control. Maybe we’ll get lucky, and this one isn’t much worse than influenza. But we are probably not so lucky. The fraction of the world that dies also depends on our systems of social support, which we can do more to influence.

I’m not a medical professional, so I can’t speak much to medical issues. But I am an economist, so I can speak to social support issues. I see two big potential problems. One is that our medical systems have limited capacities, especially for intensive care. So if everyone gets sick in the same week or two, not only won’t the vast majority get much help from hospitals, they may not even be able to get much help from each other, such as via cleaning and feeding. Perhaps greatly increasing death rates. This problem might be cut if we spread the infection out over time, so that different people were sick at different times.

The other related problem is where many non-sick people stay away from work to avoid getting sick. If enough people do this, especially at critical infrastructure jobs, then the whole economy may collapse. And not only is a collapsed economy bad for most everyone, sick people do much worse there. Not only can’t they get to a doctor or hospital, they might not even be able to get food or heating/cooling. Infected surfaces don’t get cleaned, and maybe even dead bodies don’t get removed. Thieves don’t get stopped. And so on. We can already see social support partially collapsing in Wuhan now, and it’s not pretty.

There’s an obvious, if disturbing, solution here: controlled exposure. We could not only insist that critical workers go to work, but we might also choose on purpose who gets exposed when. We can’t slow down infection very much, but we can speed it up a lot, via deliberately exposing particular people at particular times, according to a plan.

Such a plan shouldn’t just expose random people early, as they’d be likely to infect others around them. Instead, groups might be taken together to isolated places to be exposed, or maybe whole city blocks could be isolated and then exposed at once. Exposed groups should be kept strongly isolated from others until they are no longer very infectious.

Those who work in critical infrastructure, especially medicine, are ideal candidates to go early. Such a plan should only expose a small fraction of each critical workforce at any one time, so that most of them remain available to keep the lights on. If critical workers could be moved around fast enough, perhaps different cities could be exposed at different times, with critical workers moving to each new city to be ready to keep services working there.

Such plans can help even if some people who are infected and recover can get reinfected later. As long as being infected gives enough people enough immunity for a long enough time period, that is enough for this plan to spread out the infections over a time period of similar duration, so medical service needs don’t all appear together. Even an immunity of only two months, which is extremely short compared to most diseases, would allow a lot of spreading.

People selected to be exposed earlier might be paid extra cash, to compensate for perceived extra risk. (Maybe X days worth of their usual wages, so as not to especially select the poor.) Or perhaps they could be paid in extra priority for sick associates if medical help is rationed later. (I’d seriously consider both kinds of offers.) We might even be able to implement a whole plan like this entirely via volunteers, though adding that constraint may make a strong plan harder to design. A compromise might be to let city blocks vote on if to be paid to go early together. I’m willing to help in design work on this, if that could help make the difference.

I don’t have a detailed plan to offer, and obviously any such plans should be considered very carefully. Also obviously, such plans might face strong opposition, which could undermine them. If they were designed or implemented badly, they might even make things worse. But the alternative is to risk having large fractions of the population get sick at once, while the economy collapses due to critical workers staying home to avoid getting sick. A scenario which could end up a lot worse.

So authorities, and the rest of us, should at least consider controlled infection as a future option. I’m not saying we should start such a plan now; maybe that drug will work, and it will all be over soon. But if not, we should start to ask when we might learn what could help us decide, what might be a good time to pull the trigger on such a plan, and how to prepare earlier for the possibility of wanting to pull such a trigger later.

Added 17Feb: See also my next post elaborating the intuition behind why and when deliberate exposure could make sense.

Added 03Mar: See also my spreadsheet model, and further discussion.

Added 15Mar: See also elaborations of spreadsheet model.


Defrock Deregulation Economists?

Recent economics Nobel prize winner Paul Romer is furious that economists have sometimes argued for deregulation; he wants them “defrocked”, and cast from the profession: 

New generation of economists argued that tweaks … would enable the market to regulate itself, obviating the need for stringent government oversight. … To regain the public’s trust, economists should … emphasize the limits of their knowledge … even if it requires them to publicly expel from their ranks any member of the community who habitually overreaches. …

Consider the rapid spread of cost-benefit analysis … Lacking clear guidance from voters, legislators, regulators, and judges turned to economists, who resolved the uncertainty by [estimating] … the amount that society should spend to save a life. … [This] seems to have worked out surprisingly well … The trouble arose when the stakes were higher … it is all too easy for a firm … to arrange for a pliant pretend economist to … [defend them] with a veneer of objectivity and scientific expertise. …

Imagine making the following proposal in the 1950s: Give for-profit firms the freedom to develop highly addictive painkillers and to promote them via … marketing campaigns targeted at doctors. Had one made this pitch to [non-economists] back then, they would have rejected it outright. If pressed to justify their decision, they [would have said] … it is morally wrong to let a company make a profit by killing people … By the 1990s, … language and elaborate concepts of economists left no opening for more practically minded people to express their values plainly. …

Until the 1980s, the overarching [regulatory] trend was toward restrictions that reined in these abuses. … United States [has since been] going backward, and in many cases, economists—even those acting in good faith—have provided the intellectual cover for this retreat. …

In their attempt to answer normative questions that the science of economics could not address, economists opened the door to economic ideologues who lacked any commitment to scientific integrity. Among these pretend economists, the ones who prized supposed freedom (especially freedom from regulation) over all other concerns proved most useful …  When the stakes were high, firms sought out these ideologues to act as their representatives and further their agenda. And just like their more reputable peers, these pretend economists used the unfamiliar language of economics to obscure the moral judgments that undergirded their advice. …

Throughout his entire career, Greenspan worked to give financial institutions more leeway … If economists continue to let people like him define their discipline, the public will send them back to the basement, and for good reason. …

The alternative is to make honesty and humility prerequisites for membership in the community of economists. The easy part is to challenge the pretenders. The hard part is to say no when government officials look to economists for an answer to a normative question. Scientific authority never conveys moral authority. No economist has a privileged insight into questions of right and wrong, and none deserves a special say in fundamental decisions about how society should operate. Economists who argue otherwise and exert undue influence in public debates about right and wrong should be exposed for what they are: frauds. (more)

Oddly, Romer is famous for advocating “charter city” experiments, which can be seen as a big way to escape from the usual regulations.

So how does Romer suggest we identify “pretend” economists who are to be “exposed as frauds” and “publicly expelled from economists’ ranks”? He seems to say they are problematic on big but not small issues because firms bribe them, but he admits some are well-meaning, and doesn’t accuse Greenspan of taking bribes. So I doubt he’d settle for expelling only those who are clearly bribed. 

That seems to leave only the fact that they argue for less regulation when common moral intuitions call for more. (Especially when they mention “freedom”.) Perhaps he wants economists to be expelled when they argue for deregulation, or perhaps when they offer economic analysis contrary to moral intuitions. Both sound terrible to me as intellectual standards.

Look, people quite often express “moral” opinions that are combinations of simple moral intuitions together with intuitions about how social systems work. If they are mistaken about that second part, and if we can gain separate estimates on their moral intuitions, then economic analysis has the potential to produce superior combinations.

This is exactly what economists try to do when applying value of life estimates, and this can also be done regarding deregulation. The key point is that when people act on their moral intuitions, then we can use their actions to estimate their morals, and thus include their moral weights in our analysis.
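For instance, the standard revealed-preference logic behind value-of-life estimates divides a risk premium that people actually accept by the extra risk they accept in exchange. The numbers below are purely illustrative, not from any particular study:

```python
# Revealed-preference "value of a statistical life" (VSL) estimate:
# if workers accept an extra fatal risk of 1-in-10,000 per year in
# exchange for a $900 annual wage premium, their actions imply a
# moral weight on risk that we can fold into policy analysis.
wage_premium = 900.0        # extra annual pay accepted ($), illustrative
extra_fatal_risk = 1e-4     # extra annual death probability, illustrative

vsl = wage_premium / extra_fatal_risk
print(vsl)  # 9000000.0, i.e. a $9M implied value of statistical life
```

The point is not the particular number but the method: actions taken under known tradeoffs reveal moral weights, which analysis can then combine with better estimates of how social systems actually work.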

In particular, I don’t find it obviously wrong to let for-profit firms market drugs to doctors, nor do I think it remotely obvious that this is the main cause of a consistent four-decade rise in drug deaths.

Yes of course, it is a problem if professionals can be bribed to give particular recommendations. But in most of these disputes parties on many sides are willing to offer such distorting rewards. My long-standing recommendation is to use conditional betting markets to induce more honest advice from such professionals, but so far few support that.


Respectable Rants

I’m not very impressed with most political arguments, especially those targeted at mass audiences. I don’t mind such things being informal, passionate, rude, speculative, rambling, or redundant. But I need them to address what I see as key issues. Yes, my tastes may be unusual, but there are many others like me. So let me explain what I want to hear in a good political “rant”.

Don’t Exaggerate – You know who you are, and you know what I mean. There is plenty enough at stake in most areas to motivate me without your exaggerating. At least pretend toward honesty. All of history isn’t at stake, and no this by itself won’t decide between freedom and despotism. Yes, I can roughly correct for your exaggerations, so this item does the least harm. But it still bugs me.

Admit Tradeoffs – We usually can’t get more of something good without also getting less of something else good. Or more of something bad. I might be willing to go for the package, but don’t pretend there won’t be costs. If this choice isn’t new, tell me why we made the wrong tradeoff before. If this used to be a private choice, explain why private choices about this tend to go wrong.

Show Search – The world is complex, our systems in it have many parts, and things keep changing. So much of finding better policy consists of searching in a vast space of possible system-situation combos. Don’t pretend that the best combo is obvious, or that you are sure what will happen under your favored option. Tell me about what options we’ve tried, what we’ve seen there, and about new promising combinations. Tell me about key design principles, and how you may have found a rare design option that happens to embody many good design principles at once.

Prepare To Learn – This is the most important, and most neglected, item. Don’t just tell me you have a plan, with details on request. Tell me how we will learn to adapt and improve your plan. What size experiments do we start with, where, and measured how? How will we change our designs in response, in new iterations? Don’t tell me we will all make those decisions together; that just won’t work. Instead, tell me who will make those decisions, and especially, what will be their incentives to do this well.

If you want to just copy something that’s worked out pretty well elsewhere, okay, maybe I mainly want to hear about tradeoffs seen there. Data. But if you want to do something new, then I need to hear a lot more about your learning plan. Especially when your proposal has a wide scope, and its outcomes are hard to measure and take a long time to be revealed.

Look, our main social problem is how to organize activity so that we can learn together how to be productive and useful to each other. There are other problems, but they are minor by comparison. Somehow each of us must react to the signals our world sends us, and send our own signals in response, to induce all the stuff that needs to happen, and efficiently and well. It is all terribly complex, but also terribly important.

Every policy proposal is of some way to change this huge system. We need some theory not only to estimate consequences of your proposal, but also to deal with its many unanticipated consequences later. Please give me some indication of what theories you’ll rely on. The weaker the theories you need, the better of course, but you’ll need something.

For example, if you propose to nationalize US medicine, tell me which other nationalized system you plan to copy. How does it decide on which treatments are covered, and where new facilities are built? How are doctors evaluated and if needed disciplined? How do patients express their differing individual preferences within this system? And since those other systems don’t contribute much to global medical innovation, tell me that you are okay with a big reduction in global medical innovation, or tell me how your system will be different enough to promote a lot more innovation.

For example, if you propose to regulate social media to be less addictive, stressful, and fake-news-promoting, tell us exactly what is the scope of powers you propose to grant regulators, what standards they will use to measure such things, and how the rest of us are to judge if they do a good job. Is this new proposed feedback process plausibly more effective than each of us individually switching our social media platforms when we feel addicted, stressed, or faked?

As you know, most political discourse purposely avoids most of what I’ve asked for here. Advocates instead tend to frame each dispute as a simple and fundamental moral choice. Details are avoided, dangers are exaggerated, and tradeoffs, search, and learning are rarely acknowledged as issues. Politicians refer to goals and avoid talking about difficulties of implementation, incentives, measurement, or learning.

And that’s a big reason to be wary of letting political systems manage complex things of wide scope. When I buy something from a private source, they tend to say more about details, about how to measure payoffs, and about how they and I will learn about what works best. Maybe not an ideal amount, but definitely more. They tell me more of what I want to hear in a rant, or an ad. If you want to make a to-my-ears good political rant, learn a bit from them.


Explainable Governance

My once bumper sticker: “Question Authority, But Raise Your Hand First”.

In families and small groups, we can usually challenge and question our leaders. When they declare an official policy, we can often ask “why?”, and get a moderately coherent response. If we notice inconsistencies between explanations for related policies, we can point them out, and pressure leaders to reduce them. And to a limited extent, we can challenge such explanations, giving counter arguments to official reasons, and offering reasons for alternate policies.

Of course sometimes busy or exasperated parents, and other authorities, retreat to “because I said so”. But if they go there too often or quickly, we think less of them, share that opinion via gossip, and undermine their authority and tenure.

However, as our social systems get larger, we tend to lose this crucial human option, to see and challenge justifications for the policies we live under. Oh sure, some justifications are offered, but such things tend to be more rare, shallow, inconsistent, and unresponsive to criticism.

Today when someone tells you that you can’t do that, there’s a rule, and you ask why, you mostly get shoulder shrugs or vague platitudes. When you hear reasons, there usually seems little point in pointing out contradictions. As a result, you feel disrespected, and these rules feel less legitimate. All of which probably contributes to our living under a less justified and coherent total set of policies.

Can we do better? I can imagine a legal requirement that all laws and agency rules have an explicit justification text, but I doubt that would create substantially better justifications than we see now.

I’ve recently tried to think about how to make our systems of governance better at offering persuasive justifications. To explain my ideas, I will make some simplifying assumptions. Not because I’m sure I need them, but because they seem to make this initial concept exploration easier.

First, let us assume that each policy has a text description, and applies to a subset of the space of policy applications. Each policy also has a text justification, and an author. Assume none of these subsets overlap partially; that is, if there’s any overlap, then one is a strict subset of the other. For each such set, we can (somehow) create a rough dollar estimate of the annual value of good policy in that set. These dollar values add up in the obvious way across sets.

My idea has two levels. The first level just tries to create good policy justifications for fixed policies, while the second allows policies to be changed to get better justifications. Let’s start with the first level.

Let the author of the current justification for a policy be continually paid X% (1%?) of the estimated value of its policy set (minus the values for subset policies). At any time, anyone can challenge that author in a justification court, by offering an alternate justification, and by paying for a jury trial. At the trial, both sides can make arguments beyond the texts of their justifications. If jurors decide that this is a better justification, then that becomes the official justification, and its author now gets paid for its value.
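The payment rule can be sketched in a few lines, assuming the nested policy sets described above. The policy names, scopes, and dollar values here are all hypothetical:

```python
# Hypothetical policies: name -> (applications covered, est. annual value $).
# Per the post's assumption, scopes either nest or are disjoint.
policies = {
    "road-rules":  ({"cars", "trucks", "bikes"}, 9_000_000),
    "truck-rules": ({"trucks"},                  4_000_000),
    "bike-rules":  ({"bikes"},                   1_500_000),
}

PAY_RATE = 0.01  # the X% = 1% continual payment rate

def author_pay(name):
    """Annual pay to the author of a policy's current justification."""
    scope, value = policies[name]
    # Subtract values of strict-subset policies, so each part of the
    # application space pays out only once, to its most specific author.
    net = value - sum(v for n, (s, v) in policies.items()
                      if n != name and s < scope)
    return PAY_RATE * net

print(author_pay("road-rules"))   # 1% of (9M - 4M - 1.5M) = 35000.0
print(author_pay("truck-rules"))  # 1% of 4M = 40000.0
```

This also shows why a challenger who carves out a strict subset reduces the incumbent author's pay: the subset's value moves to the new, more specific justification.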

When policies change, policy makers either offer a justification, or it gets an empty one that should be easy for a challenger to beat. To allow better targeting of justifications, I’d let challengers offer a justification for a strict subset of an existing policy. If the jury likes it, that would become the official justification for that subset, and its author would be paid the value for that subset. To help deal with inconsistency, I’d also let challengers offer a new justification to replace (the union of) any set of existing justifications. The challenger can argue that these different prior justifications are inconsistent, and that this new justification is overall more coherent, and better.

At least that’s my simple first-cut design. I can imagine doing better via betting markets on who would win if a jury were invoked. I can also imagine starting with a small jury and then moving to larger juries only after small jury wins, and only changing the policy justification after a large enough jury win. But these issues seem like distractions from our main concerns, so I’ll set them aside for now.

My first level proposal seems to create incentives to make policy justifications that ordinary people would accept. At least to the extent that there’s any reasonable way to justify such policies. But what if the policies are just stupid and incoherent, or at least seem so to ordinary people?

My second level tries to address this by moving further in the direction of governance by jury. Now allow challengers to also specify new policies, as well as new justifications for those new policies. As with my first level proposal, they can do this not only for particular existing policies, but also for sets of such policies, and for strict subsets. Juries are now empowered to approve such new policies along with their justifications.

As before, we might be able to improve on this via betting markets, small then larger juries, etc., but as before it seems premature to go into those details. For now, the main question must be: does this whole approach make sense? Would it be good to make policy more coherent and justified, in the eyes of ordinary citizens, even if this may come at the expense of making it less coherent and justified in the eyes of elites who might otherwise decide such things? To whom exactly should policy seem justified, if anyone?


Decision Theory Remains Neglected

Back in ’84, when I first started to work at Lockheed Missiles & Space Company, I recall a manager complaining that their US government customer would not accept using decision theory to estimate the optimal thickness of missile walls; they insisted instead on using a crude heuristic expressed in terms of standard deviations of noise. Complex decision theory methods were okay to use for more detailed choices, but not for the biggest ones.
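The kind of analysis the customer rejected can be sketched in a few lines: choose the thickness that minimizes expected total cost, i.e. material cost plus failure probability times failure cost, rather than picking a fixed number of standard deviations of margin. Every number below is invented for illustration; the real problem was surely far more detailed:

```python
import math

# Illustrative decision-theory version of the wall-thickness choice.
# All parameters are made up for this sketch.
FAILURE_COST = 50_000_000   # cost if the wall fails ($)
MATERIAL_COST = 2_000_000   # cost per cm of added thickness ($)
MU, SIGMA = 3.0, 0.8        # stress, expressed as required thickness (cm)

def p_fail(t):
    # P(required thickness exceeds t), under a normal stress model
    return 1 - 0.5 * (1 + math.erf((t - MU) / (SIGMA * math.sqrt(2))))

def expected_cost(t):
    return MATERIAL_COST * t + FAILURE_COST * p_fail(t)

# Grid search over thicknesses from 2.00 to 8.00 cm.
best = min((t / 100 for t in range(200, 801)), key=expected_cost)
print(round(best, 2))  # around 4.8 cm under these assumed numbers
```

Note that the answer here sits near three standard deviations above the mean stress, but only because of these particular costs; change the failure cost and the optimal margin changes, which is exactly what a fixed standard-deviation heuristic ignores.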

In his excellent 2010 book How to Measure Anything, Douglas W. Hubbard reports that this pattern is common:

Many organizations employ fairly sophisticated risk analysis methods on particular problems; … But those very same organizations do not routinely apply those same sophisticated risk analysis methods to much bigger decisions with more uncertainty and more potential for loss. …

If an organization uses quantitative risk analysis at all, it is usually for routine operational decisions. The largest, most risky decisions get the least amount of proper risk analysis. … Almost all of the most sophisticated risk analysis is applied to less risky operational decisions while the riskiest decisions—mergers, IT portfolios, big research and development initiatives, and the like—receive virtually none.

In fact, while standard decision theory has long been extremely well understood and accepted by academics, most orgs find a wide array of excuses to avoid using it to make key decisions:

For many decision makers, it is simply a habit to default to labeling something as intangible [=unmeasurable] … committees were categorically rejecting any investment where the benefits were “soft.” … In some cases decision makers effectively treat this alleged intangible as a “must have” … I have known managers who simply presume the superiority of their intuition over any quantitative model …

What they seem to take away from these experiences is that to use the methods from statistics one needs a lot of data, that the precise equations don’t deal with messy real-world decisions where we don’t have all of the data, or that one needs a PhD in statistics to use any statistics at all. … I have at times heard that “more advanced” measurements like controlled experiments should be avoided because upper management won’t understand them. … they opt not to engage in a smaller study—even though the costs might be very reasonable—because such a study would have more error than a larger one. …

Measurements can even be perceived as “dehumanizing” an issue. There is often a sense of righteous indignation when someone attempts to measure touchy topics, such as the value of an endangered species or even a human life. … has spent much time refuting objections he encounters—like the alleged “ethical” concerns of “treating a patient like a number” or that statistics aren’t “holistic” enough or the belief that their years of experience are preferable to simple statistical abstractions. … I’ve heard the same objections—sometimes word-for-word—from some managers and policy makers. …

There is a tendency among professionals in every field to perceive their field as unique in terms of the burden of uncertainty. The conversation generally goes something like this: “Unlike other industries, in our industry every problem is unique and unpredictable,” or “Problems in my field have too many factors to allow for quantification,” and so on. …

Resistance to valuing a human life may be part of a fear of numbers in general. Perhaps for these people, a show of righteous indignation is part of a defense mechanism. Perhaps they feel their “innumeracy” doesn’t matter as much if quantification itself is unimportant, or even offensive, especially on issues like these.

Apparently most for-profit firms could make substantially more profits if only they’d use simple decision theory to analyze key decisions. Execs’ usual excuse is that key parameters are unmeasurable, but Hubbard argues convincingly that this is just not true. He suggests that execs seek to excuse poor math abilities, but that seems implausible as an explanation to me.

I say that their motives are more political: execs and their allies gain more by using other more flexible decision making frameworks for key decisions, frameworks with more wiggle room to help them justify whatever decision happens to favor them politically. Decision theory, in contrast, threatens to more strongly recommend a particular hard-to-predict decision in each case. As execs gain when the orgs under them are more efficient, they don’t mind decision theory being used down there. But they don’t want it up at their level and above, for decisions that determine whether they and their allies win or lose.

I think I saw the same sort of effect when trying to get firms to consider prediction markets; those were okay for small decisions, but for big ones they preferred estimates made by more flexible methods. This overall view is, I think, also strongly supported by the excellent book Moral Mazes by Robert Jackall, which goes into great detail on the many ways that execs play political games while pretending to promote overall org efficiency.

If I ever did a book on The Elephant At The Office: Hidden Motives At Work, this would be a chapter.

Below the fold are many quotes from How to Measure Anything:

Continue reading "Decision Theory Remains Neglected" »


Socialism Via Futarchy

On Bryan’s recommendation, I just read Niemietz’s Socialism: The Failed Idea That Never Dies, which credibly argues that two dozen socialism experiments over the last century have consistently failed, with roughly this pattern:

The not-real-socialism defence is only ever invoked retrospectively, namely, when a socialist experiment has already been widely discredited. As long as a socialist experiment is in its prime, almost nobody disputes its socialist credentials. On the contrary: practically all socialist regimes have gone through honeymoon periods, during which they were enthusiastically praised and held up as role models by plenty of prominent Western intellectuals. (More)

Noteworthy results from the latest experiment:

The number of worker-run cooperatives increased from fewer than 1,000 when Chávez was first elected to well over 30,000 in less than a decade. By the end of Chávez’s second term, cooperatives accounted for about 8% of Venezuela’s GDP and 14% of its workforce … It soon became clear … that many cooperatives were behaving like capitalist enterprises, seeking to maximize their net revenue … For example, rather than supplying their products to local markets … export them to other countries where they can sell them at higher prices … Also, many cooperatives have refrained from accepting new members. … As Chávez himself said: … if we are 20 in a cooperative, we are going to work for the benefit of us 20, and that is merely capitalism. Cooperatives need to be impelled towards socialism.’ (More)

Even after so many very expensive experiments, they still apparently have only the vaguest idea of what detailed arrangements might actually achieve what they want. It seems they have mainly waited until an allied group gained control somewhere, and then tried a few random variations that resonate with local supporters.

There still seems to be great passion in the world for further socialism experiments, but it seems hard to hold much hope if they continue with this pattern. While I’m not personally very inspired by the socialist vision, I do like for people to get what they want, and that includes people who want socialism. So I’m taking the time to think about how to help them get it.

Which induces me to consider variations on futarchy to help to achieve socialism. If you recall, futarchy is a form of governance wherein market speculators choose policies to maximize an ex-post-measured welfare measure. The thicker are these markets (perhaps via subsidies), the stronger are the incentives for speculators to learn what is actually effective in achieving that welfare. This seems a good match, if what socialism most needs now is less a good system and more a good learning environment in which to search for good systems.
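A toy version of this setup, using a logarithmic market scoring rule (LMSR) market maker as the subsidy mechanism: two conditional markets price the claim that welfare will exceed some threshold, one per branch, and the branch whose market implies higher expected welfare is adopted, with trades in the other branch voided. The liquidity parameter, trade sizes, and binary framing are all illustrative simplifications:

```python
import math

B = 100.0  # LMSR liquidity parameter; subsidy is at most B*ln(2) per market

class BinaryLMSR:
    """Prices a claim 'welfare will exceed threshold W' in one branch."""
    def __init__(self):
        self.q_yes = 0.0
        self.q_no = 0.0
    def _cost(self):
        return B * math.log(math.exp(self.q_yes / B) + math.exp(self.q_no / B))
    def price_yes(self):
        ey, en = math.exp(self.q_yes / B), math.exp(self.q_no / B)
        return ey / (ey + en)
    def buy_yes(self, shares):
        # Trader pays the cost-function difference; the gap between total
        # payments and payouts is the sponsor's subsidy for thick markets.
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

adopt, reject = BinaryLMSR(), BinaryLMSR()
adopt.buy_yes(40)   # speculators bet welfare is likelier high if adopted
reject.buy_yes(10)

decision = "adopt" if adopt.price_yes() > reject.price_yes() else "reject"
print(decision, round(adopt.price_yes(), 3), round(reject.price_yes(), 3))
```

The stronger the subsidy (larger B), the cheaper it is for an informed speculator to move prices, which is the sense in which thicker markets strengthen incentives to learn what actually raises measured welfare.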

The big question for futarchy-based socialism is: what are the ex-post-measurable outcomes that indicate a successful socialism? That is, how would you know one when you saw it? Obviously you’d want to include some basic consumption measures, like GDP, but if that’s all you maximize there’s no obvious reason why the result will be especially socialist. You might include risk-aversion over consumption, which punishes inequality to some degree, but again it isn’t obvious that risk-aversion greatly favors socialism. Even more directly and strongly punishing inequality and emphasizing the poor doesn’t obviously favor any more socialism than we see in high-redistribution low-regulation capitalist Nordic “social democracies”.

Consider:

Socialism is … characterised by social ownership of the means of production and workers’ self-management of enterprise … Social ownership can be public, collective or cooperative ownership, or citizen ownership of equity. (More)

What all socialism has in common … is … bottom-up governance of society based on local assemblies which elect delegates that share their peoples’ living conditions, can be overridden, answer to and are replaceable by them, who can federate into councils and repeat the process for larger areas and amounts of people. (More; see also)

It seems that to many a central concept of socialism is each person having a high degree of control (also called “ownership”) over their world, including both their immediate world and the larger economic/political world. This is not just control to enable one to achieve high consumption, but also control over one’s workplace, and probably even more control than is required for these purposes. In this view, successful socialism is a world of busybodies with strong abilities to get into each others’ business.

To promote socialism then, we might try a futarchy whose welfare measure includes not just measures of consumption, but also of control.

For example, one measure of control would ask random people to try to induce particular random changes in their world. The stronger the correlation between actual changes afterward and the changes that we randomly assigned them, the more we’d say that people in this world had a lot of control over it. But we’d need to find some widely-accepted weights that say which possible changes count for how much, and we’d need ways to get people to actually try to change their world in the ways we assign them. These seem hard to achieve. Also, this would probably find near zero control for larger social structures, no matter how things are arranged. And we’d need to find ways to prevent this world from suddenly becoming more plastic to support test changes, while supporting non-test changes less.
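A simulation sketch of this metric, under the simplifying (and hypothetical) assumption that each assigned change is binary, and is either realized or replaced by a coin flip. The correlation between assigned and realized changes then recovers the underlying control level:

```python
import numpy as np

rng = np.random.default_rng(1)

def measured_control(control, n_people=2000):
    """Correlation between randomly assigned target changes and
    realized changes, for people with a given true control level."""
    assigned = rng.choice([-1.0, 1.0], size=n_people)  # assigned change
    # Each person realizes their assigned change with prob `control`;
    # otherwise the world moves at random.
    succeeds = rng.random(n_people) < control
    realized = np.where(succeeds, assigned,
                        rng.choice([-1.0, 1.0], size=n_people))
    return float(np.corrcoef(assigned, realized)[0, 1])

for c in (0.0, 0.5, 1.0):
    print(c, round(measured_control(c), 2))
```

In this simple binary setup the expected correlation equals the true control level, but as the post notes, the hard part is weighting which changes count, getting people to actually attempt their assignments, and keeping the world from treating test changes specially.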

Also, I worry that simple-minded measures of individual control might induce many decisions to be made via big xor trees. Such trees would seem to let anyone who controls inputs to any leaf of the tree determine the root as well. Though of course in practice not being able to predict the other inputs means you can’t actually usefully control the output. But can we formally define average individual control in a way that doesn’t promote such xor trees?
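A small demonstration of the worry: in an XOR tree, flipping any single input flips the root, so a naive assigned-change test would credit every participant with control, even though no one can usefully steer the output without knowing all the other inputs:

```python
from functools import reduce
from operator import xor

# Route a decision through the XOR of many inputs.
inputs = [1, 0, 1, 1, 0, 0, 1, 0]
root = reduce(xor, inputs)

# Every single input, flipped alone, flips the root: by a naive metric,
# each of the eight participants fully "controls" the decision.
for i in range(len(inputs)):
    flipped = inputs.copy()
    flipped[i] ^= 1
    assert reduce(xor, flipped) != root

print(root)  # yet predicting it requires knowing all other inputs
```

So a control metric defined purely by "can your assigned flip change the outcome?" would reward exactly this kind of degenerate decision structure.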

Probably the simplest solution is to just survey people about their sense of control over their world. You might want to emphasize people who’ve recently visited other worlds, so they can reasonably compare their world to others. And you’d want to limit the abilities of local authorities to force people to give desired survey answers, such as via the threat of retaliation. If a strong central government were part of a socialist society, that may also make it difficult to measure consumption. Such governments have been known to try to distort consumption stats to make themselves look good.

One solution to these problems would be to rely on capitalist foreigners, and on travel to visit them, for both market speculators and welfare measurement.

That is, let random citizens (perhaps whole families) of the socialist society be extracted periodically and made to visit a capitalist foreign land. During that foreign visit, they can be privately interviewed about both their sense of control and their consumption levels, and they can be offered the chance to stay in that foreign land. (Via offers with varying degrees of attractiveness.) Stats on what they said and on who chose to stay could then be used to estimate the welfare of that society, without allowing that socialist government to retaliate via knowing who said what. Foreign speculators could also pay to talk privately to these visitors, to help inform their market speculation choices.

In this scenario, this socialist society would, to help it more quickly learn what works best, commit to delegating to these capitalist foreigners the measurement of its welfare and substantial participation in their speculative governance markets. Of course people at home within this socialist society could also be allowed to speculate in these markets, and to contribute to stats read by foreigners. But this approach avoids extreme corruption problems by making sure that foreigners can speculate, and measure welfare, in ways that are outside of the control of a perhaps powerful socialist government.

Of course if this approach eventually settled on a stable solution for making a good socialist society, they might want to drop this external futarchy run by foreigners to become entirely self-governing. That would make sense if and when full self-governance became more important than faster learning about how to make socialism work.

And that’s as far as I’ve thought for now. Of course if sufficient interest were expressed in this concept, I could put in some more thought.
