Yay Stability Rents

Six years ago I posted on the idea of using combinatorial auctions as a substitute for zoning. Since then, news on how badly zoning has been messing up our economy has only gotten worse. I included the zoning combo auction idea in my book The Age of Em, I’ve continued to think about the idea, and last week I talked about it to several LA-based experts in combinatorial auctions.

I’ve been pondering one key design problem, and the solution I’ve been playing with is similar to a solution that also seems to help with patents. I asked Alex Tabarrok, whose office is next door, if he knew of any general discussion of such things, and he pointed me to a long (110 page) 2016 paper called “Property is another name for monopoly” by Eric Posner and Glen Weyl. (See also this technical paper.) And that turned out to be a relatively general argument for using the specific mechanism that I was considering using in zoning combo auctions, get this, as a new standard kind of property right for most everything! Looking for web discussion, I find a few critical responses, and one excellent 2014 Interfluidity post on the basic idea. In this post I’ll go over the basic idea and some of its issues, including two that Posner and Weyl didn’t consider. Continue reading "Yay Stability Rents" »


Markets That Explain, Via Markets To Pick A Best

I recently heard someone say “A disadvantage of prediction markets is that they don’t explain their estimates.” I responded: “But why couldn’t they?” That feature may cost you more, and it hasn’t been explored much in research or development. But I can see how to do it; in this post, I’ll outline a concept.

Previously, I’ve worked on a type of combinatorial prediction market built on a Bayes-Net structure. And there are standard ways to use such a structure to “explain” the estimates of any one variable in terms of the estimates of other variables. So obviously one could just apply those methods directly to get explanations for particular estimates in Bayes-Net based prediction markets. But I suspect that many would see such explanations as inadequate.

Here I’m instead going to try to solve the explanation problem by solving a more general problem: how to cheaply get a single good thing, if you have access to many people willing to submit and evaluate distinguishable things, and you have access to at least one possibly expensive judge who can rank these things. With access to this general pick-a-best mechanism, you can just ask people to submit explanations of a market estimate, and then deliver a single best explanation that you expect to be rated highly by your judge.

In more detail, you need five things:

  1. a prize Z you can pay to whoever submits the winning item,
  2. a community of people willing to submit candidate items to be evaluated for this prize, and to post bonds in the amount B supporting their submissions,
  3. an expensive (cost J) and trustworthy “gold standard” judge who tends, though with some error, to pick the “better” of two submitted items,
  4. a community of people who think that they can guess on average how the judge will rate items, with some of these people being right about this belief, and
  5. a costly (amount B) and only mildly error-prone way to decide if one submission is overly derivative of another.

With these five things, you can get a pretty good thing by paying Z+J; the more Z you offer, the better that thing will be. Here is the procedure. First, anyone in a large community may submit candidates c, posting a bond B for each submission. Each candidate c is publicly posted as it becomes available.

A prediction market is open on all candidates submitted so far, with assets of the form “Pays $1 if c wins.” We somehow define prices p_c for such assets which satisfy 1 = p_Y + Sum_c p_c, where p_Y is the price of the asset “The winner is not yet submitted.” Submissions are not accepted after some deadline, and at that point I recommend the candidate c with the highest price p_c; that will be a good choice. But to make it a good choice, the procedure has to continue.
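To make the pricing constraint and the recommendation step concrete, here is a minimal Python sketch; the candidate names and prices are invented for illustration:

```python
def recommend(prices, p_not_yet, tol=1e-9):
    """Check the pricing constraint 1 = p_Y + sum_c p_c, then
    recommend the candidate c with the highest market price p_c.

    `prices` maps candidate names to their asset prices p_c;
    `p_not_yet` is p_Y, the price of "winner not yet submitted"."""
    assert abs(p_not_yet + sum(prices.values()) - 1.0) < tol
    return max(prices, key=prices.get)

# Hypothetical post-deadline prices for three submissions.
prices = {"c1": 0.5, "c2": 0.3, "c3": 0.15}
print(recommend(prices, p_not_yet=0.05))  # → c1
```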

A time is chosen randomly from a final time window (such as a final day) after the deadline. We use the market prices p_c at that random time to pick a pair of candidates to show the judge. We draw twice randomly (with replacement), using the price p_c as the chance of picking each c. The judge then picks a single tentative winning candidate w out of this pair.

Anyone who submitted a candidate earlier than w can challenge it within a limited challenge window, claiming that the tentative winner w is overly derivative of their earlier submission e. An amount B is then spent to judge whether w is derivative of e. If w is not judged derivative, the challenger forfeits their bond B, and w remains the tentative winner. If w is judged derivative, the tentative winner forfeits their bond B, and the challenger becomes the new tentative winner. We need potential challengers to expect less than a B/Z chance of a mistaken judgement regarding something being derivative.

Once all challenges are resolved, the tentative winner becomes the official winner, the person who submitted it is given a large prize Z, and prediction market betting assets are paid off. The end.
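The price-weighted draw and judging step above can be sketched in a few lines of Python; the candidates, prices, quality scores, and noiseless judge are all invented for illustration, and the challenge phase is only noted in a comment:

```python
import random

def pick_tentative_winner(prices, judge, rng=random):
    """Sketch of the judging step: draw two candidates (with
    replacement), weighting each c by its market price p_c, then
    let the expensive judge pick the tentative winner of the pair.
    (Challenges, bonds B, and the prize Z are handled afterwards.)"""
    names = list(prices)
    weights = [prices[n] for n in names]
    a, b = rng.choices(names, weights=weights, k=2)
    return judge(a, b)

# Toy setup: three candidates and a noiseless judge who prefers
# whichever candidate has the higher (hidden) quality score.
quality = {"c1": 0.9, "c2": 0.7, "c3": 0.2}
judge = lambda a, b: a if quality[a] >= quality[b] else b
prices = {"c1": 0.5, "c2": 0.3, "c3": 0.2}
print(pick_tentative_winner(prices, judge))
```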

This process can easily be generalized in many ways. There could be more than one judge, each judge could be given more than two items to rank, the prediction markets could be subsidized, the chances of picking candidates c to show judges might be non-linear in market prices p_c, and when setting such chances prices could be averaged over a time period. If p_Y is not zero when choosing candidates to evaluate, the prices p_c could be renormalized. We might add prediction markets in whether any given challenge would be successful, and allow submissions to be withdrawn before a formal challenge is made.

Now I haven’t proven a theorem to you that this all works well, but I’m pretty sure that it does. By offering a prize for submissions, and allowing bets on which submissions will win, you need only make one expensive judgement between a pair of items, and have access to an expensive way to decide if one submission is overly derivative of another.

I suspect this mechanism may be superior to many of the systems we now use to choose winners. Many existing systems frequently invoke low-quality judges, instead of less frequently invoking higher-quality judges. I suspect that market estimates of high-quality judgements may often be better than direct applications of low-quality judgements.


A Wonk’s First Question

Imagine you are considering a career as a paid policy wonk. You wonder which policy area to work in, and which institutions to affiliate with. If you want to influence actual policy, rather than just enjoying money and status, you should ask yourself: do those who sponsor or cite efforts in this area do so more to get support for pre-determined conclusions, or more to get info to help them make choices?

Sometimes sponsors and other consumers of a type of policy analysis know what policies they prefer, but seek prestigious cover to help them do it. So they pay and search for policy analyses in the hope of finding the support they need. With enough policy analyses to choose from, they can find ones to support their predetermined conclusions. But they need these prestigious cover analyses to appear, at least to distant observers, to be honest open attempts at discovery. It can’t be obvious that they were designed to support pre-determined conclusions.

At other times, however, sponsors and consumers are actually uncertain, and seek analyses with unpredictable-to-them conclusions to influence their choices. And these are the only cases where your being a policy analyst has a chance of changing real policy outcomes. Such audiences may see your analysis, or be influenced by someone else who has seen it. So for each analysis that you might produce, you should wonder: what are my chances of influencing such an open-minded chooser?

Here are a few clues to consider:

  1. How predictable are the policy conclusions of the most popular policy analysts in this area? High predictability suggests that sponsors reward such consistency, as it aids their efforts to collect support for predetermined conclusions.
  2. How interested are sponsors and other policy consumers in policy analyses done by very prestigious people and institutions, relative to others? The more open they are to products of low prestige analysts, the better the chance they seek information, instead of just prestigious backing for pre-existing conclusions.
  3. How open is this area to funding and otherwise supporting large relevant experiments (or prediction markets)? Or to applying a strong standard theory with standard assumptions, which together often imply specific conclusions? The more that people are willing to endorse the policy implications of such things before their results become known, the more open that area is to unpredictable new information.

It should be possible to collect evidence on how these factors vary across policy areas. Perhaps a simple survey would be sufficient. Might that publicly reveal to all the relative sincerity of different kinds of sponsors and consumers of policy analysis?


Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two posts he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than organizations do today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure, it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is in most need. But you might well trust it to give you returns on your financial investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

[Total output] over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, which Christiano supports to some degree, is that future AIs are smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.


An Outside View of AI Control

I’ve written much on my skepticism of local AI foom (= intelligence explosion). Recently I said that foom offers the main justification I understand for AI risk efforts now, as well as being the main choice of my Twitter followers in a survey. It was the main argument offered by Eliezer Yudkowsky in our debates here at this blog, by Nick Bostrom in his book Superintelligence, and by Max Tegmark in his recent book Life 3.0 (though he denied this in his reply here).

However, some privately complained to me that I haven’t addressed those with non-foom-based AI concerns. So in this post I’ll consider AI control in the context of a prototypical non-em non-foom mostly-peaceful outside-view AI scenario. In a future post, I’ll try to connect this to specific posts by others on AI risk.

An AI scenario is one where software does almost all jobs; humans may work for fun, but they add little value. In a non-em scenario, ems are never feasible. While foom scenarios are driven by AI innovations that are very lumpy in time and organization, in non-foom scenarios innovation lumpiness is distributed more like it is in our world. In a mostly-peaceful scenario, peaceful technologies of production matter much more than do technologies of war and theft. And as an outside view guesses that future events are like similar past events, I’ll relate future AI control problems to similar past problems. Continue reading "An Outside View of AI Control" »


Dealism, Futarchy, and Hypocrisy

Many people analyze and discuss the policies that might be chosen by organizations such as governments, charities, clubs, and firms. We economists have a standard set of tools to help with such analysis, and in many contexts a good economist can use such tools to recommend particular policy options. However, many have criticized these economic tools as representing overly naive and simplistic theories of morality. In response I’ve said: policy conversations don’t have to be about morality. Let me explain.

A great many people presume that policy conversations are of course mainly about what actions and outcomes are morally better; which actions do we most admire and approve of ethically? If you accept this framing, and if you see human morality as complex, then it is reasonable to be wary of mathematical frameworks for policy analysis; any analysis of morality simple enough to be put into math could lead to quite misleading conclusions. One can point to many factors that get little attention from economists but are often considered relevant to moral analysis.

However, we don’t have to see policy conversations as being mainly about morality. We can instead look at them as being more about people trying to get what they want, and using shared advisors to help. We economists make great use of the concept of “revealed preference”; we infer what people want from what they do, and we expect people to continue to act to get what they want. Part of what people want is to be moral, and to be seen as moral. But people also want other things, and sometimes they make tradeoffs, choosing to get less morality and more of these other things. Continue reading "Dealism, Futarchy, and Hypocrisy" »


Harnessing Polarization

Human status competition can be wasteful. For example, often many athletes all work hard to win a contest, yet if they had all worked only half as hard, the best one could still have won. Many human societies, however, have found ways to channel status efforts into more useful directions, by awarding high status for types of effort of which there might otherwise be too little. For example, societies have given status to successful peace-makers, explorers, and innovators.

Relative to history and the world, the US today has unusually high levels of political polarization. A great deal of effort is going into people showing loyalty to their side and dissing the opposing side. Which leads me to wonder: could we harness all this political energy for something more useful?

Traditionally in a two-party system, each party competes for the endorsement of marginal undecided voters, and so partisans can be enticed to work to produce better outcomes when their party is in power. But random variation in context makes it harder to see partisan quality from outcomes. And in a hyper-partisan world, there aren’t many undecided voters left to impress.

Perhaps we could create more clear and direct contests, where the two political sides could compete to do something good. For example, divide Detroit or Puerto Rico into two dozen regions, give each side the same financial budget, political power, and a random half of the regions to manage. Then let us see which side creates better regions.

Political decision markets might also create more clear and direct contests. It is hard to control for local random factors in making statistical comparisons of polities governed by different sides. But market estimates of polity outcomes conditional on who is elected should correct for most local context, leaving a clearer signal of who is better.

These are just two ideas off the top of my head; who can find more ways that we might harness political polarization energy?

Added 28Sep: Notice that these contests don’t have to actually be fair. They just have to induce high efforts to win them. For that, merely believing that others may see them as fair could be enough.


City Travel Scaling

Here’s a fascinating factoid I found in Geoffrey West’s new book Scale:

An extremely simple but very powerful mathematical result for the movement of people in cities. .. Consider any location in a city. .. predicts how many people visit this location from any distance away and how often they do it. .. It states that the number of visitors should scale inversely as the square of both the distance traveled and the frequency of visitation. ..

Suppose that on average 1600 people visit the area around Park Street, Boston from four kilometers away once a month. .. only 400 people visit Park street from 8 kilometers away once a month. .. how many people visit Park street from four kilometers away but now with a greater frequency of twice a month. .. also … 400 people. (pp.347-9)

As cities are basically two-dimensional in space and one-dimensional in time, this implies that most visits to a place are by people who live nearby (not so surprising), and also by people who visit very infrequently (quite surprising). I’d love to see an urban econ model embodying this pattern. Alas West cites “Markus Schlapfer and Michael Szell”, but no publication, nor could I find one online.
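As a quick check of the pattern, here is a small Python sketch applying the inverse-square scaling in both distance and frequency to the Park Street numbers quoted above:

```python
def visitors(n_ref, r_ref, f_ref, r, f):
    """Scale a reference visitor count by West's visitation law:
    the number of visitors falls as the inverse square of both
    the distance traveled r and the visit frequency f."""
    return n_ref * (r_ref / r) ** 2 * (f_ref / f) ** 2

# Reference: 1600 people visit from 4 km away, once per month.
print(visitors(1600, 4, 1, r=8, f=1))  # doubling distance → 400.0
print(visitors(1600, 4, 1, r=4, f=2))  # doubling frequency → 400.0
```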

The book Scale is on an important yet neglected topic: basic patterns in large systems such as organisms, ecosystems, cities, and firms. Alas West rambles, in part to avoid talking math directly, so you have to skim past many words to get to the key patterns; I bet I could have described them all in ten pages. But they are indeed important patterns.

I found myself distrusting West’s theories for explaining these patterns. He talks as if most of the patterns he discusses are well explained, mostly by papers he’s written, and he doesn’t engage or mention competing theories. But I’ve heard that many disagree with his theories. In particular, West claims that the 3/4 power law relating organism metabolism to mass is explained by piping constraints (he offers different theories for trees and for animals with pumped blood). But while researching Age of Em I learned:

Does our ability to cool cities fall inversely with city scale? Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city.

So if this metabolism pattern is due to piping constraints, it is because evolution never managed to find the more efficient piping designs that we humans now know.


Humans Cells In Multicellular Future Minds?

In general, adaptive systems vary along an axis from general to specific. A more general system works better (either directly or after further adaptation) in a wider range of environments, and also with a wider range of other adapting systems. It does this in part via having more useful modularity and abstraction. In contrast, a more specific system adapts to a narrower range of specific environments and other subsystems.

Systems that we humans consciously design tend to be more general, i.e., less context dependent, relative to the “organic” systems that they often replace. For example, compare grid-like city street plans to locally evolved city streets, national retail outlets to locally arising stores and restaurants, traditional to permaculture farms, hotel rooms to private homes, big formal firms to small informal teams, uniforms to individually-chosen clothes, and refactored to un-refactored software. The first entity in each pair tends to more easily scale and to match more environments, while the second in each pair tends to be adapted in more detail to particular local conditions. Continue reading "Humans Cells In Multicellular Future Minds?" »


Prediction Markets Update

Prediction markets continue to offer great potential to improve society at many levels. Their greatest promise lies in helping organizations to better aggregate info to enable better key decisions. However, while such markets have consistently performed well in terms of cost, accuracy, ease of use, and user satisfaction, they have also tended to be politically disruptive – they often say things that embarrass powerful people, who get them killed. It is like putting a smart autist in the C-suite, someone who has lots of valuable info but is oblivious to the firm’s political landscape. Such an executive just wouldn’t last long, no matter how much they knew.

Like most promising innovations, prediction markets can’t realize their potential until they have been honed and evaluated in a set of increasingly substantial and challenging trials. Abstract ideas must be married to the right sort of complementary details that allow them to function in specific contexts. For prediction markets, real organizations with concrete forecasting needs related to their key decisions need to experiment with different ways to field prediction markets, in search of arrangements that minimize political disruption. (If you know of an organization willing to put up with the disruption that such experimentation creates, I know of a patron willing to consider funding such experiments.)

Alas, few such experiments have been happening. So let me tell you what has been happening instead. Continue reading "Prediction Markets Update" »
