All-Pay Liability

We could raise government revenue much more efficiently than we now do, with less damage to the economy for any given amount of revenue raised. For example, we could tax fixed characteristics like height instead of income, we could tax traffic congestion a lot more, and we could do better at taxing pollution, including carbon. Recently I posted on a more efficient system of property taxes, which allows more revenue to be raised at a lower cost. Today, I’ll post on a more efficient system of accident liability, which similarly raises more revenue at a lower cost.

Some don’t want me to talk about these things. They hope to “starve the beast” by drying up government revenue sources. That seems to me a lost cause, the sort of logic that pushed radicals toward generic destruction, hoping that eventually the masses would get fed up and revolt. I instead expect a better world overall if governments adopt more efficient policies, including more efficient tax policies.

Regarding accident liability, we want a system that will encourage good levels of care and activity by all who can influence accident rates. For example, regarding car accidents we want drivers to pick good car models, speeds, sleep, and maintenance frequencies. We also want them to take into account the possibility of hurting others via accidents when they choose how often they drive. In addition, we want a system that induces fewer actual court cases, which are expensive, and that asks courts to make fewer judgements, in which they might err.

The simplest system is no liability. Courts just don’t get involved. This has the lowest possible rate of court cases, namely zero. It creates good incentives for accident victims to set their care and activity levels well, but gives rather poor incentives for others to set such things well.

The next simplest system is strict liability. This induces good care and activity by potential injurers, but not by potential victims. It also induces a high rate of court cases; nearly every accident results in a lawsuit. While the parties might settle out of court, if a case goes to trial the court must determine responsibility, i.e., who caused the accident, and how much damage the victim suffered as a result.

Relative to strict liability, systems of negligence cut the rate of court cases, but at the cost of asking courts to make more judgements. As with strict liability, courts must judge who is responsible and victim damage levels. But in addition, courts must also ask themselves if that injurer took enough care to prevent the accident. For each visible parameter, the courts must judge both the actual level of care taken, and the optimal level of care. If the injurer took enough care overall, that injurer does not owe damages. And if that no-damages situation is the usual case, there are fewer court cases, as there are fewer lawsuits.

In practice, however, courts can only look at a small number of injurer choice parameters visible enough to them, such as driving speed. Far more parameters, including all injurer activity level parameters, remain invisible, and so are not considered. Negligence doesn’t create good incentives to set all those less visible parameters.

There are standard variations on these systems, such as allowing contributory negligence on the part of the victim. But all of these systems fail to induce optimal levels of care and activity in at least one party. We have long known, however, of a simple system that gets pretty much all of these things right, and in addition only asks courts to judge who is responsible for an accident and victim damage levels. (I didn’t invent this system; it is mentioned in many law & econ texts.) In this simple system, courts do not need to consider anyone’s actual or ideal levels of care or activity.

This simple system is to make all responsible parties pay the damage levels of all other parties hurt by the accident. The trick is that they pay all of these amounts to the government, instead of to each other. As each party now internalizes all of the damage suffered by all of the parties, they should choose all their private care and activity levels well. And the government gets more revenue to boot.
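To make this payment rule concrete, here is a minimal sketch in code, assuming courts have already judged who is responsible and each party’s damage level; the party names and dollar amounts are purely illustrative:

    # A minimal sketch of the all-pay rule described above; assumes
    # responsibility and damage levels are already judged, and the
    # names and amounts are illustrative.
    def all_pay_payments(damages):
        # Each responsible party pays the government the total damages
        # suffered by all *other* parties, so each internalizes the
        # full harm of the accident to everyone else.
        total = sum(damages.values())
        return {party: total - own for party, own in damages.items()}

    # Two-car collision: driver A suffers $1,000 in damages, driver B $4,000.
    print(all_pay_payments({"A": 1_000, "B": 4_000}))
    # {'A': 4000, 'B': 1000} -- all $5,000 flows to the government.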

The big problem with this all-pay liability system is that none of these responsible parties, including the victims, want to report this accident to the government. They’d all rather pretend it didn’t happen. So the government needs some other way to find out about accidents. In dense areas where the government already has access to mass surveillance systems, it can just use those systems. In other areas, governments might offer bounties to third parties who report accidents, and put strong penalties on those who fail to report their own accidents. Or the system might revert to other liability rules in contexts where governments might otherwise detect accidents too infrequently.

With all-pay liability, we expect a lawsuit for every accident. But in that suit the courts only need to judge who is responsible and victim damage levels. No other judgements need be made. So if we could find simple streamlined ways to make these judgements, this system might not be that expensive to administer. And then we’d have both better accident prevention and more available government revenue.

(Yes, people might want to buy insurance against the risk of making these payments. Yes, if multiple parties could coordinate to prevent accidents together, this system might induce them to spend too much on prevention. Hopefully we could identify such efforts and treat them differently.)


Hypocrisy As Key To Class

Two examples of how a key to achieving higher social class is to learn the right kinds of hypocrisy:

Working-class students are more likely to enter college with the notion that the purpose of higher education is learning in the classroom and invest their time and energies accordingly. … This type of academically focused script clashes with the “party” and social cultures of many US colleges. It isolates working and lower middle-class students from peer networks that can provide them with valuable information about how to navigate the social landscape of college as well as future job opportunities. The resulting feelings of isolation and alienation adversely affect these students’ grades, levels of happiness, and likelihood of graduation. … [This] also adversely affects their job prospects. (p.13 Pedigree: How Elite Students Get Elite Jobs)

“There is this automatic assumption in any legal environment that Asians will have a particular talent for bitter labor. … There was this weird self-selection where the Asians would migrate toward the most brutal part of the labor.” By contrast, the white lawyers he encountered had a knack for portraying themselves as above all that. “White people have this instinct that is really important: to give off the impression that they’re only going to do the really important work. You’re a quarterback. It’s a kind of arrogance that Asians are trained not to have.

“Someone told me not long after I moved to New York that in order to succeed, you have to understand which rules you’re supposed to break. If you break the wrong rules, you’re finished. And so the easiest thing to do is follow all the rules. But then you consign yourself to a lower status. The real trick is understanding what rules are not meant for you.” This idea of a kind of rule-governed rule-breaking—where the rule book was unwritten but passed along in an innate cultural sense—is perhaps the best explanation I have heard of how the Bamboo Ceiling functions in practice. (more)


Yay Stability Rents

Six years ago I posted on the idea of using combinatorial auctions as a substitute for zoning. Since then, news on how badly zoning has been messing up our economy has only gotten worse. I included the zoning combo auction idea in my book The Age of Em, I’ve continued to think about the idea, and last week I talked about it to several LA-based experts in combinatorial auctions.

I’ve been pondering one key design problem, and the solution I’ve been playing with is similar to a solution that also seems to help with patents. I asked Alex Tabarrok, whose office is next door, if he knew of any general discussion of such things, and he pointed me to a long (110 page) 2016 paper called “Property is another name for monopoly” by Eric Posner and Glen Weyl. (See also this technical paper.) And that turned out to be a relatively general argument for using the specific mechanism that I was considering using in zoning combo auctions, get this, as a new standard kind of property right for most everything! Looking for web discussion, I find a few critical responses, and one excellent 2014 Interfluidity post on the basic idea. In this post I’ll go over the basic idea and some of its issues, including two that Posner and Weyl didn’t consider. Continue reading "Yay Stability Rents" »


Markets That Explain, Via Markets To Pick A Best

I recently heard someone say “A disadvantage of prediction markets is that they don’t explain their estimates.” I responded: “But why couldn’t they?” That feature may cost you more, and it hasn’t been explored much in research or development. But I can see how to do it; in this post, I’ll outline a concept.

Previously, I’ve worked on a type of combinatorial prediction market built on a Bayes-Net structure. And there are standard ways to use such a structure to “explain” the estimates of any one variable in terms of the estimates of other variables. So obviously one could just apply those methods directly to get explanations for particular estimates in Bayes-Net based prediction markets. But I suspect that many would see such explanations as inadequate.

Here I’m instead going to try to solve the explanation problem by solving a more general problem: how to cheaply get a single good thing, if you have access to many people willing to submit and evaluate distinguishable things, and you have access to at least one possibly expensive judge who can rank these things. With access to this general pick-a-best mechanism, you can just ask people to submit explanations of a market estimate, and then deliver a single best explanation that you expect to be rated highly by your judge.

In more detail, you need five things:

  1. a prize Z you can pay to whomever submits the winning item,
  2. a community of people willing to submit candidate items to be evaluated for this prize, and to post bonds in the amount B supporting their submissions,
  3. an expensive (cost J) and trustworthy “gold standard” judge who has an error-prone tendency to pick the “better” item out of two items submitted,
  4. a community of people who think that they can guess on average how the judge will rate items, with some of these people being right about this belief, and
  5. a costly (amount B) and only mildly error-prone way to decide if one submission is overly derivative of another.

With these five things, you can get a pretty good thing if you pay Z+J. The more Z you offer, the better will be your good thing. Here is the procedure. First, anyone in a large community may submit candidates c, if they post a bond B for each submission. Each candidate c is publicly posted as it becomes available.

A prediction market is open on all candidates submitted so far, with assets of the form “Pays $1 if c wins.” We somehow define prices p_c for such assets which satisfy 1 = p_Y + Σ_c p_c, where p_Y is the price of the asset “The winner is not yet submitted.” Submissions are not accepted after some deadline, and at that point I recommend the candidate c with the highest price p_c; that will be a good choice. But to make it a good choice, the procedure has to continue.

A time is chosen randomly from a final time window (such as a final day) after the deadline. We use the market prices p_c at that random time to pick a pair of candidates to show the judge. We draw twice randomly (with replacement) using the price p_c as the random chance of picking each c. The judge then picks a single tentative winning candidate w out of this pair.
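As a minimal sketch of this selection step, assuming p_Y is near zero here (otherwise the prices p_c would first be renormalized, as noted below); the candidates and prices are illustrative:

    # Draw a judge pair, with each draw's chance proportional to the
    # market price p_c; prices are illustrative, not from the post.
    import random

    def pick_judge_pair(prices):
        candidates = list(prices)
        weights = [prices[c] for c in candidates]
        # random.choices samples with replacement and normalizes weights
        return random.choices(candidates, weights=weights, k=2)

    prices = {"c1": 0.5, "c2": 0.3, "c3": 0.2}
    print(pick_judge_pair(prices))  # e.g. ['c1', 'c3']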

Anyone who submitted a candidate before w can challenge it within a limited challenge time window, claiming that the tentative winner w is overly derivative of their earlier submission e. An amount B is then spent to judge if w is derivative of e. If w is not judged derivative, then the challenger forfeits their bond B, and w remains the tentative winner. If w is judged derivative, then the tentative winner forfeits their bond B, and the challenger becomes a new tentative winner. We need potential challengers to expect a less than B/Z chance of a mistaken judgement regarding something being derivative.
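To see why that B/Z condition deters frivolous challenges, consider a quick illustrative check, with all numbers assumed rather than taken from anywhere:

    # A frivolous challenger wins roughly Z if the court mistakenly
    # judges w derivative (chance eps), and otherwise forfeits bond B.
    Z = 10_000   # prize for the winning submission (assumed)
    B = 1_000    # bond posted with a challenge (assumed)
    eps = 0.05   # chance of a mistaken "derivative" judgement (assumed)

    expected_gain = eps * Z - (1 - eps) * B
    print(expected_gain)  # -450 < 0: challenge doesn't pay, as eps < B/Z = 0.10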

Once all challenges are resolved, the tentative winner becomes the official winner, the person who submitted it is given a large prize Z, and prediction market betting assets are paid off. The end.

This process can easily be generalized in many ways. There could be more than one judge, each judge could be given more than two items to rank, the prediction markets could be subsidized, the chances of picking candidates c to show judges might be non-linear in market prices p_c, and when setting such chances prices could be averaged over a time period. If p_Y is not zero when choosing candidates to evaluate, the prices p_c could be renormalized. We might add prediction markets in whether any given challenge would be successful, and allow submissions to be withdrawn before a formal challenge is made.

Now I haven’t proven a theorem to you that this all works well, but I’m pretty sure that it does. By offering a prize for submissions, and allowing bets on which submissions will win, you need only make one expensive judgement between a pair of items, and have access to an expensive way to decide if one submission is overly derivative of another.

I suspect this mechanism may be superior to many of the systems we now use to choose winners. Many existing systems frequently invoke low quality judges, instead of less frequently invoking higher quality judges. I suspect that market estimates of high quality judgements may often be better than direct application of low quality judgements.


A Wonk’s First Question

Imagine you are considering a career as a paid policy wonk. You wonder which policy area to work in, and which institutions to affiliate with. If you want to influence actual policy, rather than just enjoying money and status, you should ask yourself: do those who sponsor or cite efforts in this area do so more to get support for pre-determined conclusions, or more to get info to help them make choices?

Sometimes sponsors and other consumers of a type of policy analysis know what policies they prefer, but seek prestigious cover to help them enact those policies. So they pay and search for policy analyses in the hope of finding the support they need. With enough policy analyses to choose from, they can find ones to support their predetermined conclusions. But they need these prestigious cover analyses to appear, at least to distant observers, to be honest open attempts at discovery. It can’t be obvious that they were designed to support pre-determined conclusions.

At other times, however, sponsors and consumers are actually uncertain, and seek analyses with unpredictable-to-them conclusions to influence their choices. And these are the only cases where your being a policy analyst has a chance of changing real policy outcomes. Such audiences may see your analysis, or be influenced by someone else who has seen it. So for each analysis that you might produce, you should wonder: what are my chances of influencing such an open-minded chooser?

Here are a few clues to consider:

  1. How predictable are the policy conclusions of the most popular policy analysts in this area? High predictability suggests that sponsors reward such consistency, as it aids their efforts to collect support for predetermined conclusions.
  2. How interested are sponsors and other policy consumers in policy analyses done by very prestigious people and institutions, relative to others? The more open they are to products of low prestige analysts, the better the chance they seek information, instead of just prestigious backing for pre-existing conclusions.
  3. How open is this area to funding and otherwise supporting large relevant experiments (or prediction markets)? Or to applying a strong standard theory with standard assumptions, which together often imply specific conclusions? The more that people are willing to endorse the policy implications of such things before their results become known, the more open that area is to unpredictable new information.

It should be possible to collect evidence on how these factors vary across policy areas. Perhaps a simple survey would be sufficient. Might that publicly reveal to all the relative sincerity of different kinds of sponsors and consumers of policy analysis?


Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two of his posts that he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than do organizations today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is in most need. But you might trust it much more to give you financial returns on your financial investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

[Total output] over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, which Christiano supports to some degree, is that future AI systems are smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.


An Outside View of AI Control

I’ve written much on my skepticism of local AI foom (= intelligence explosion). Recently I said that foom offers the main justification I understand for AI risk efforts now, as well as being the main choice of my Twitter followers in a survey. It was the main argument offered by Eliezer Yudkowsky in our debates here at this blog, by Nick Bostrom in his book Superintelligence, and by Max Tegmark in his recent book Life 3.0 (though he denied this in his reply here).

However, some privately complained to me that I haven’t addressed those with non-foom-based AI concerns. So in this post I’ll consider AI control in the context of a prototypical non-em non-foom mostly-peaceful outside-view AI scenario. In a future post, I’ll try to connect this to specific posts by others on AI risk.

An AI scenario is one where software does most all jobs; humans may work for fun, but they add little value. In a non-em scenario, ems are never feasible. Whereas foom scenarios are driven by AI innovations that are very lumpy in time and organization, in non-foom scenarios innovation lumpiness is distributed more like it is in our world. In a mostly-peaceful scenario, peaceful technologies of production matter much more than do technologies of war and theft. And as an outside view guesses that future events are like similar past events, I’ll relate future AI control problems to similar past problems. Continue reading "An Outside View of AI Control" »


Dealism, Futarchy, and Hypocrisy

Many people analyze and discuss the policies that might be chosen by organizations such as governments, charities, clubs, and firms. We economists have a standard set of tools to help with such analysis, and in many contexts a good economist can use such tools to recommend particular policy options. However, many have criticized these economic tools as representing overly naive and simplistic theories of morality. In response I’ve said: policy conversations don’t have to be about morality. Let me explain.

A great many people presume that policy conversations are of course mainly about what actions and outcomes are morally better; which actions do we most admire and approve of ethically? If you accept this framing, and if you see human morality as complex, then it is reasonable to be wary of mathematical frameworks for policy analysis; any analysis of morality simple enough to be put into math could lead to quite misleading conclusions. One can point to many factors that are given little attention by economists, but which are often considered relevant for moral analysis.

However, we don’t have to see policy conversations as being mainly about morality. We can instead look at them as being more about people trying to get what they want, and using shared advisors to help. We economists make great use of the concept of “revealed preference”; we infer what people want from what they do, and we expect people to continue to act to get what they want. Part of what people want is to be moral, and to be seen as moral. But people also want other things, and sometimes they make tradeoffs, choosing to get less morality and more of these other things. Continue reading "Dealism, Futarchy, and Hypocrisy" »


Harnessing Polarization

Human status competition can be wasteful. For example, often many athletes all work hard to win a contest, yet if they had all worked only half as hard, the best one could still have won. Many human societies, however, have found ways to channel status efforts into more useful directions, by awarding high status for types of effort of which there might otherwise be too little. For example, societies have given status to successful peace-makers, explorers, and innovators.

Relative to history and the world, the US today has unusually high levels of political polarization. A great deal of effort is going into people showing loyalty to their side and dissing the opposing side. Which leads me to wonder: could we harness all this political energy for something more useful?

Traditionally in a two party system, each party competes for the endorsement of marginal undecided voters, and so partisans can be enticed to work to produce better outcomes when their party is in power. But random variation in context makes it harder to see partisan quality from outcomes. And in a hyper partisan world, there aren’t many undecided voters left to impress.

Perhaps we could create more clear and direct contests, where the two political sides could compete to do something good. For example, divide Detroit or Puerto Rico into two dozen regions, give each side the same financial budget, political power, and a random half of the regions to manage. Then let us see which side creates better regions.

Political decision markets might also create more clear and direct contests. It is hard to control for local random factors in making statistical comparisons of polities governed by different sides. But market estimates of polity outcomes conditional on who is elected should correct for most local context, leaving a clearer signal of who is better.
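As a minimal sketch of such a comparison (with made-up prices): run one market per side, where trades are called off unless that side wins office, and then compare the two conditional estimates:

    # Futarchy-style decision markets: each market estimates a polity
    # outcome conditional on one side governing; trades revert otherwise.
    # The prices below are made up for illustration.
    est_if_left = 0.62   # market-implied P(good outcome | left governs)
    est_if_right = 0.54  # market-implied P(good outcome | right governs)

    better = "left" if est_if_left > est_if_right else "right"
    print(f"Markets expect better outcomes if the {better} side governs.")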

These are just two ideas off the top of my head; who can find more ways that we might harness political polarization energy?

Added 28Sep: Notice that these contests don’t have to actually be fair. They just have to induce high efforts to win them. For that, merely believing that others may see them as fair could be enough.


City Travel Scaling

Here’s a fascinating factoid I found in Geoffrey West’s new book Scale:

An extremely simple but very powerful mathematical result for the movement of people in cities. .. Consider any location in a city. .. predicts how many people visit this location from any distance away and how often they do it. .. It states that the number of visitors should scale inversely as the square of both the distance traveled and the frequency of visitation. ..

Suppose that on average 1600 people visit the area around Park Street, Boston from four kilometers away once a month. .. only 400 people visit Park street from 8 kilometers away once a month. .. how many people visit Park street from four kilometers away but now with a greater frequency of twice a month. .. also … 400 people. (pp.347-9)

As cities are basically two-dimensional in space and one-dimensional in time, this implies that most visits to a place are by people who live nearby (not so surprising), and also by people who visit very infrequently (quite surprising). I’d love to see an urban econ model embodying this pattern. Alas West cites “Markus Schlapfer and Michael Szell”, but no publication, nor could I find one online.
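To make the quoted rule concrete, here is a quick check that the Park Street numbers fit a law where visitors scale inversely as the square of both distance and frequency; the constant is fit to the first data point, so this is just illustrative arithmetic:

    # Check the quoted numbers against: visitors ~ k / (r^2 * f^2),
    # where r is distance and f is visit frequency.
    def visitors(k, r_km, per_month):
        return k / (r_km**2 * per_month**2)

    k = 1600 * 4**2 * 1**2       # fit: 1600 visitors from 4 km, once a month
    print(visitors(k, 8, 1))     # 400.0 -- from 8 km, once a month
    print(visitors(k, 4, 2))     # 400.0 -- from 4 km, twice a month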

The book Scale is on an important yet neglected topic: basic patterns in large systems such as organisms, ecosystems, cities, and firms. Alas West rambles, in part to avoid talking math directly, so you have to skim past many words to get to the key patterns; I bet I could have described them all in ten pages. But they are indeed important patterns.

I found myself distrusting West’s theories for explaining these patterns. He talks as if most of the patterns he discusses are well explained, mostly by papers he’s written, and he doesn’t engage or mention competing theories. But I’ve heard that many disagree with his theories. In particular, West claims that a 3/4 power law of organism metabolism versus mass is explained by piping constraints (he offers different theories for trees and for animals with pumped blood). Yet while researching Age of Em I learned:

Does our ability to cool cities fall inversely with city scale? Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city.

So if this metabolism pattern is due to piping constraints, it is because evolution never managed to find the more efficient piping designs that we humans now know.
