Complex Impact Futures
Imagine a world of people doing various specific projects, where over the long run the net effect of all these projects is to produce some desired outcomes. These projects may interact in complex ways. To encourage people to do more and better such projects along the way, we might like a way to eventually allocate credit to these various projects for their contributions to desired outcomes.
And we might like to have good predictions of such credit estimates, available either right after project completion, so we can praise project supporters, or available before projects start, to advise on which projects to start. Such a mechanism could be applied to projects within a firm or other org re achieving that org’s goals, or to charity projects re doing various kinds of general good, or to academic projects re promoting intellectual progress. In this post, I outline a way to do all this.
First, let us assume that we have available to us “historians” who could in groups judge after the fact which of two actual projects had contributed the most to desired outcomes. (And assume a way to pay such historians to make them sufficiently honest and careful in such judgments.) These judgments might be made with noise, well after the fact, and at great expense, but are still possible. (Remember, the longer one waits to judge, the more budget one can spend on judging.)
Consider two projects that have relative strengths A and B in terms of the credit each deserves for desired outcomes. Assume further that the chance that a random group of historians will pick A over B is just A/(A+B). This linear rule is a standard assumption for many kinds of sporting contests (e.g. chess), with contestant strengths usually assumed to be log-normally distributed. (E.g., a chess “Elo rating” is proportional to a log of such a player strength estimate.)
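As a minimal sketch of this linear rule and its Elo connection (the function names and the 400-point scale constant are illustrative assumptions, matching the chess convention rather than anything specified in this post):

```python
import math

def win_prob(a, b):
    """Chance a random historian group picks the project with
    strength a over the one with strength b, under the linear rule."""
    return a / (a + b)

def elo_rating(strength, scale=400 / math.log(10)):
    """An Elo-style rating is proportional to the log of strength;
    this scale makes a 400-point gap a 10x strength ratio."""
    return scale * math.log(strength)
```

Note that `win_prob` depends only on the ratio a/b, i.e. only on the rating difference, which is why a log transform of strength behaves like an Elo rating.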
Given these assumptions, project strength estimates can be obtained via a “tournament parimutuel” (a name I just made up). Let there be a pool of money associated with each project, where each trader who contributes to a pool gets the payoffs from that pool in proportion to their contributions.
If each project were randomly matched to another project, and random historian groups were assigned to judge each pair, then it would work to let the winning pool divide up the money from both pools, just as if there had been a simple parimutuel on that pair. Traders would then tend to set the relative amounts in each pool in proportion to the relative strengths of associated projects.
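A sketch of that simple pairwise settlement (the dict shapes and trader names are illustrative, not part of the proposal):

```python
def settle_pair(winner_pool, loser_pool):
    """winner_pool, loser_pool: dicts mapping trader -> contribution.
    The winning pool's contributors divide the combined money from
    both pools in proportion to their contributions."""
    pot = sum(winner_pool.values()) + sum(loser_pool.values())
    winner_total = sum(winner_pool.values())
    return {t: pot * c / winner_total for t, c in winner_pool.items()}
```

To see why pool ratios track strength ratios: if the pools hold A and B dollars, a dollar on the first project pays (A+B)/A when it wins, which under the linear rule happens with chance A/(A+B), so the expected return per dollar is exactly one on both sides only when pool sizes sit at the strength ratio.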
If judging were very expensive, however, then we might not be able to afford to have historians judge every project. But in that case it could work to randomize across projects. Pick sets of projects to judge, throw away the rest, and boost the amount in each retained pool by moving money from thrown-away pools (which now get a zero boost) into retained pools in proportion to pool size.
All you have to do is make sure that, averaged over the ways to randomly throw away projects, each project has a unit average boost. For example, you could partition the projects, and pick each partition set with a chance proportional to its pool size. With this done right, those who invest in pools should expect the same average payout as if all projects were judged, though such payouts would now have more variance.
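One way to check the unit-average-boost property of this partition scheme (a sketch; the data shapes are assumptions made for illustration):

```python
def expected_retained(partition, pools):
    """partition: list of lists of project names, covering all pools.
    pools: project name -> pool size in dollars.
    Each partition set is picked with chance proportional to its total
    pool size; if picked, its pools are boosted by grand_total / set_total,
    as money from the discarded pools moves in pro rata.
    Returns each pool's expected post-boost size."""
    grand = sum(pools.values())
    exp = {p: 0.0 for p in pools}
    for s in partition:
        set_total = sum(pools[p] for p in s)
        pick_chance = set_total / grand
        boost = grand / set_total
        for p in s:
            exp[p] += pick_chance * pools[p] * boost  # = pools[p]
    return exp
```

For any partition, the pick chance and the boost cancel, so each pool’s expected size equals its original size, which is the unit-average-boost condition.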
Within a set of projects chosen for judging, any way to pair projects to judge should work. It would make sense to pair projects with similar strength estimates, to max the info that judging gives, but beyond that we could let judges pick, at the last minute, pairs they think easier to judge, such as projects that are close to each other in topic spaces, or similar in methods and participants. Or pairs that they would find interesting and informative to judge.
Historians might even pick random projects to judge, and then look nearby to select comparison projects, as long as they ensured a symmetric choice habit, or corrected for asymmetries. (It can also work to allow judges to sometimes say they can’t judge, or to rank more than two projects at the same time.) It would be good if the network of might-be-paired connections between projects were connected, spanning all projects.
Parimutuel pools can make sense when all pool contributions are made at roughly the same time, so that contributors have similar info. But when bets will be made over longer time durations, betting markets make more sense. Thus we’d like to have a “complex impact futures” market over the various projects for most of our long duration, and then convert such bets into parimutuel tournament holdings just before judging.
We can do that by letting anyone split $1 in cash into N betting assets of the form “Pays $x_p into p pool”, one for each of N projects p, where x_p refers to the market price of this asset at the time when betting assets are converted to claims in a tournament parimutuel. At that time, each outstanding asset of the form “Pays $x_p into p pool” is converted into $x_p put into the parimutuel pool for project p.
This method ensures that project pool amounts have the ratios x_p. Note that the prices sum to one, i.e. 1 = Σ_{p=1..N} x_p, that a logarithmic market scoring rule would work fine for trading in these markets, and that via a “rest of field” asset we don’t need to know about all projects p when the market starts.
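A sketch of the conversion step, assuming final market prices x_p that sum to one (the holdings and price dict shapes are illustrative assumptions):

```python
def convert_to_pools(holdings, final_prices):
    """holdings: trader -> {project: count of 'Pays $x_p into p pool' assets}.
    final_prices: project -> x_p at conversion time, summing to 1.
    Each asset becomes a contribution of $x_p to project p's parimutuel
    pool, so pool totals end up in the ratios x_p."""
    pools = {p: {} for p in final_prices}
    for trader, assets in holdings.items():
        for p, n in assets.items():
            pools[p][trader] = pools[p].get(trader, 0.0) + n * final_prices[p]
    return pools
```

A trader who split $1 into one asset per project ends up contributing exactly $1 in total across all pools, since the x_p sum to one; that is what makes the $1-for-N-assets split fair.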
Thus traders in our complex impact futures markets should treat prices of these assets as estimates of the relative strength of projects p in the credit judging process. They’ll want to buy projects whose relative strength seems underestimated, and sell those that seem overestimated. And so these prices right after a project is completed should give speculators’ consensus estimate on that project’s relative credit for desired outcomes. And the prices on future possible projects, conditional on the project starting, give consensus estimates of the future credit of potential projects. As promised.
Some issues remain to consider. For example, how could we allow the judging of pairs, and the choice of which pairs to judge, to be spread out across time, while allowing betting markets on choices that remain open to continue as long as possible into that process? Should judgments of credit just look at a project’s actual impact on desired outcomes, or should they also consider counterfactual impact, to correct for unforeseeable randomness, or others’ misbehavior? Should historians judge impact relative to resources used or available, or just judge impact without considering costs or opportunities? Might it work better to randomly pick a particular outcome of interest, and then only judge pairs on their impact re that outcome?