Tag Archives: Project

Advice Wiki

People often give advice to others; less often do they request advice from others. And much of this advice is remarkably bad. Consider, for example, the advice to “never settle” in pursuing your career dreams.

When A takes advice from B, that is often seen as raising the status of B and lowering that of A. As a result, people often resist listening to advice, they ask for advice as a way to flatter and submit, and they give advice as a way to assert their status and goodness. For example, advisors often tell others to do what they did, as a way to affirm that they have good morals, and achieved good outcomes via good choices.

These hidden motives understandably detract from the average quality of advice as a guide to action. And the larger is this quality reduction, the more potential there is for creating value via alternative advice institutions. I’ve previously suggested using decision markets for advice in many contexts. In this post, I want to explore a simpler/cheaper approach: a wiki full of advice polls. (This is like something I proposed in 2013.)

Imagine a website where you could browse a space of decision contexts, connected to each other by the subset relation. For example under “picking a career plan after high school”, there’s “picking a college attendance plan” and under that there’s “picking a college” and “picking a major”. For each decision context, people can submit proposed decision advice, such as “go to the highest ranked college you can get into” for “pick a college”. Anyone could then vote to say which advice they endorse in which contexts, and see the current distribution of voter opinion across advice options.
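
To make this concrete, here is a minimal sketch, in Python, of the data model such a site might use; all class and method names are hypothetical illustrations, not a spec:

```python
from collections import Counter

class DecisionContext:
    """A node in the space of decision contexts, linked to broader
    contexts by the subset relation."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent            # the broader context this one falls under
        self.advice_votes = Counter()   # proposed advice text -> endorsement count

    def submit_advice(self, advice):
        self.advice_votes[advice] += 0  # register the option with zero votes

    def vote(self, advice):
        self.advice_votes[advice] += 1  # one participant endorses this advice here

    def distribution(self):
        """The current voter distribution over advice options."""
        total = sum(self.advice_votes.values())
        return {a: n / total for a, n in self.advice_votes.items()} if total else {}

# Contexts connected by the subset relation:
career = DecisionContext("picking a career plan after high school")
college_plan = DecisionContext("picking a college attendance plan", parent=career)
college = DecisionContext("picking a college", parent=college_plan)

college.submit_advice("go to the highest ranked college you can get into")
for _ in range(3):
    college.vote("go to the highest ranked college you can get into")
college.vote("minimize debt; pick the college by major, not rank")
print(college.distribution())   # {'go to the highest ...': 0.75, 'minimize ...': 0.25}
```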

Assume participants can be anonymous if they so choose, but can also be labeled with their credentials. Assume that they can change their votes at any time, and that the record of each vote notes which options were available at the time. From such voting records, we might see not just the overall distribution of opinion regarding some kind of decision, but also how that distribution varies with quality indicators, such as how much success a person has achieved in related life areas. One might also see how advice varies with level of abstraction in the decision space; is specific advice different from general advice?

Of course such poll results aren’t plausibly as accurate as those resulting from decision markets, at least given the same level of participation. But they should also be much easier to produce, and so might attract far more participation. The worse are our usual sources of advice, the higher the chance that these polls could offer better advice. Compared to asking your friends and family, these distributions of advice suffer less from particular people pushing particular agendas, and anonymous advice may suffer less from efforts to show off. At least it might be worth a try.

Added 1Aug: Note that decision context can include features of the decision maker, and that decision advice can include decision functions, which map features of the decision context to particular decisions.

Conditional Harberger Tax Games

Baron Georges-Eugène Haussmann … transformed Paris with dazzling avenues, parks and other lasting renovations between 1853 and 1870. … Haussmann… resolved early on to pay generous compensation to [Paris] property owners, and he did. … [He] hoped to repay the larger loans he obtained from the private sector by capturing some of the increased value of properties lining along the roads he built. … [He] did confiscate properties on both sides of his new thoroughfares, and he had their edifices rebuilt. … Council of State … forced him to return these beautifully renovated properties to their original owners, who thus captured all of their increased value. (more)

In my last post I described abstractly how a system of conditional Harberger taxes (CHT) could help deal with zoning and other key city land use decisions. In this post, let me say a bit more about the behaviors I think we’d actually see in such a system. (I’m only considering here such taxes for land and property tied to land.)

First, while many property owners would personally manage their official declared property values, many others would have them set by an agent or an app. Agents and apps may often come packaged with insurance against various things that can go wrong, such as losing one’s property.

Second, yes, under CHT, sometimes people would (be paid well to) lose their property. This would almost always be because someone else credibly demonstrated that they expect to gain more value from it. Even if owners strategically or mistakenly declare values too low, the feature I suggested of being able to buy back a property by paying a 1% premium would ensure that pricing errors don’t cause property misallocations. The highest value uses of land can change, and one of the big positive features of this system is that it makes the usage changes that should then result easier to achieve. In my mind that’s a feature, not a bug. Yes, owners could buy insurance against the risk of losing a property, though that needn’t result in getting their property back.

In the ancient world, it was common for people to keep the same marriage, home, neighbors, job, family, and religion for their entire life. In the modern world, in contrast, we expect many big changes during our lifetimes. While we can mostly count on family and religion remaining constant, we must accept bigger chances of change to marriages, neighbors, and jobs. Even our software environments change in ways we can’t control when new versions are issued. Renters today accept big risks of home changes, and even home “owners” face big risks due to job and financial risks. All of which seems normal and reasonable. Yes, a few people seem quite obsessed with wanting absolute guarantees on preservation of old property usage, but I can’t sympathize much with such fetishes for inefficient stasis.

Fine Grain Futarchy Zoning Via Harberger Taxes

“Futarchy” is my proposed system of governance which approves a policy change when conditional prediction markets give a higher expected outcome, conditional on that change. In a city setting, one might be tempted to use a futarchy where the promoted outcome is the total property value of all land in and near that city. After all, if people don’t like being in this city, and are free to move elsewhere, city land won’t be worth much; the more attractive a city is as a place to be, the more its property will be worth.

Yes, we have problems measuring property values. Property is only traded infrequently, sale prices show a marginal not a total value, much land is never offered for sale, sales prices are often obscured by non-cash terms of trade, and regulations and taxes change sales and use. (E.g., rent control.) In addition, we expect at least some trading noise in the prices of any financial market. As a result, simple futarchy isn’t much help for decisions whose expected consequences for outcomes are smaller than its price noise level. And yes, there are other things one might care about beside property values. But given how badly city governance often actually goes, we could do a lot worse than to just consistently choose policies that maximize a reasonable estimate of city property value. The more precise such property estimates can be, the more effective such a futarchy could be.

Zoning (and other policy that limits land use) is an area of city policy that seems especially well suited to a futarchy based on total property value. After all, the main reason people say that we need zoning is because using some land in some ways decreases how much people are willing to pay to use other land. For example, people might not want to live next to a bar, liquor store, or sex toy store, and so are willing to pay less to buy (or rent) next to such a place. So choosing zoning rules to maximize total property value seems especially promising.

I’ve also written before favorably on Harberger taxes (which I once called “stability rents”). In this system, owners of land (and property tied to that land) must set and may continuously adjust a declared property “value”; they are taxed per unit time as a percentage of momentary value, and must always agree to sell their property at their currently declared value. This system has great advantages in inducing property to be held by those who can gain the most value from it, including via greatly lowering the transaction costs of putting together big property packages. With this system, there’s no more need for eminent domain.
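
As a minimal sketch of these core mechanics (a continuously adjustable declared value, a tax per unit time on that value, and a standing obligation to sell at it), assuming a made-up 7% annual rate that the post does not specify:

```python
class HarbergerProperty:
    """Toy model of one property under a simple Harberger tax."""
    def __init__(self, owner, declared_value, tax_rate=0.07):
        self.owner = owner
        self.declared_value = declared_value  # owner may adjust this at any time
        self.tax_rate = tax_rate              # annual tax, as a fraction of value

    def tax_due(self, years):
        # Owners are taxed per unit time on their own declared value,
        # which deters declaring too high a value.
        return self.declared_value * self.tax_rate * years

    def buy(self, buyer, new_declared_value):
        # Owners must always sell at their declared value, which deters
        # declaring too low a value.
        price = self.declared_value
        self.owner, self.declared_value = buyer, new_declared_value
        return price

home = HarbergerProperty("alice", 500_000)
print(home.tax_due(1))            # 35000.0 per year at alice's declaration
print(home.buy("bob", 600_000))   # bob pays 500000; alice cannot refuse
```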

I’ve just noticed a big synergy between futarchy for zoning and Harberger taxes. The reason is that such taxes allow the creation of prices which support a much finer grain accounting of the net value of specific zoning changes. Let me explain.

First, Harberger taxes create a continuous declared value on each property all the time, not just a few infrequent sales prices. This creates a lot more useful data. Second, these declared values better approximate the value that people place on property; the higher an actual value, the higher an owner will declare his or her taxable value to be, to avoid the risk of someone taking it away. Third, these declared values are all relative to standard terms of trade, not the varying terms of actual sales today. Thus the sum total of all declared property values can be a decent estimate of total city property value. Fourth, it is possible to generalize the Harberger tax system to create zoning-conditional property ownership and prices.

That is, relative to current zoning rules, one can define a particular alternative zoning scenario, wherein the zoning (or other property use limit) policies have changed. Such as changing the zoning of a particular area from residential to commercial on a particular date. Given such a defined scenario, one can create conditional ownership; I own this property if (and when) this zoning change is made, but not otherwise. The usual ownership then becomes conditional on no zoning changes soon.

With conditional ownership, conditional owners can make conditional offers to sell. That is, you can buy my property under this condition if you pay this declared amount of conditional cash. For example, I might offer to make a conditional sale of my property for $100,000, and you might agree to that sale, but this sale only happens if a particular zoning change is approved.

The whole Harberger tax system can be generalized to support such conditional trading and prices. In the simple system, each property has a declared value set by its owner, and anyone can pay that amount at any time to become the new owner. In the generalized system, each property has a declared value for each (combination of) approved alternative zoning scenario. By default, alternative declared values are equal to the ordinary no-zoning-change declared value, but property owners can set them differently if they want, to be either higher or lower. Anyone can make a scenario-conditional purchase of a property from its current (conditional) owner at its scenario-conditional declared value. To buy a property for sure, buy it conditional on all scenarios.
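
Extending the sketch above, here is a hedged guess at how scenario-conditional declarations might look in code; the defaulting rule follows the description here, while the scenario labels are invented:

```python
class ConditionalProperty:
    """Toy Harberger property with per-scenario declared values and owners."""
    NO_CHANGE = "no zoning change"

    def __init__(self, owner, declared_value):
        self.owners = {self.NO_CHANGE: owner}
        self.values = {self.NO_CHANGE: declared_value}

    def declare(self, scenario, value):
        # Owners may set a scenario's declared value above or below the default.
        self.values[scenario] = value

    def value(self, scenario):
        # By default, an alternative scenario's declared value equals
        # the ordinary no-change declared value.
        return self.values.get(scenario, self.values[self.NO_CHANGE])

    def conditional_buy(self, scenario, buyer, new_value):
        # Paid in scenario-conditional cash; physical control transfers
        # only if and when this scenario is actually approved.
        price = self.value(scenario)
        self.owners[scenario] = buyer
        self.values[scenario] = new_value
        return price

lot = ConditionalProperty("alice", 500_000)
lot.declare("area X residential -> commercial", 650_000)
print(lot.conditional_buy("area X residential -> commercial", "bob", 700_000))
```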

(For concreteness, assume that only one zoning change proposal is allowed per day per city region, that a decision is made on that proposal on that day, and that the proposal for each day is chosen via open public auction a month before. The auction fee can subsidize markets in bets on whether this proposal will be approved and markets in tax-revenue asset conditional differences (explained below). A week before the decision day of a proposal, each right in a property is split into two conditional rights, one conditional on this change and one on not-this-change. At that point, owner declared values conditional on this change (or not) become active sale prices. Taxes are paid in conditional cash. Physical control of a property only transfers to conditional owners if and when a zoning scenario is actually approved.)

Having declared values for all properties under all scenarios gives us even more data with which to estimate total city property value, and in particular helps with estimating the difference in total city property value due to a zoning change. To a first approximation, we can just add up all the zoning-change-conditional declared values, and compare that sum to the sum from the no-change declared values. If the former sum is consistently and clearly higher than the latter sum over the proposal’s decision day, that seems a good argument for adopting this zoning proposal. (It seems safer to choose the higher value option with a chance increasing in value difference, and this all works even when other factors influence a decision.) At least if the news that this zoning proposal seems likely to be approved gets spread wide and fast enough for owners to express their conditional declared values. (The bet markets on which properties will be affected help to notify owners.)

Actually, to calculate the net property value difference that a zoning change makes, we need only sum over the properties that actually have a conditional declared value different from their no-change declared values. For small local zoning changes, this might only be a small number of properties within a short distance of the main changes. As a result, this system seems capable of giving useful advice on very small and local zoning changes, in dramatic contrast to a futarchy based on prices estimating total city property values. For example, it might even be able to say if a particular liquor store should be allowed at a particular location, or if the number of required parking spots at a particular shopping mall can be reduced. As promised, this new system offers much finer grain accounting of the net value of specific zoning changes.
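
Continuing the sketch above, the decision statistic is then just a sum of conditional differences, and only properties whose owners actively changed a conditional value contribute:

```python
def net_value_of_change(properties, scenario):
    """Sum of (scenario value - no-change value) over all properties.

    Properties whose owners left the scenario value at its default
    contribute exactly zero, so for a small local zoning change only
    a few nearby properties enter the sum.
    """
    return sum(p.value(scenario) - p.value(ConditionalProperty.NO_CHANGE)
               for p in properties)

# Tend to approve the change if this sum is clearly and consistently
# positive over the proposal's decision day.
```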

Note that in this system as described, losers are not compensated by winners for zoning rule changes, even though we can roughly identify winners and losers. I’ve thought a bit about ways to add an extra process by which winners compensate losers, but haven’t been able to make that work. So the best I can think of is to have the system look at the distribution of wins and losses, and reject proposed changes where there are too many big losers relative to winners. That would force a search for variations which spread out the pain more evenly.

We are close to a workable proposal, but not quite there yet. This is because we face the problem of owners temporarily inflating their declared values conditional on a zoning change that they seek to promote. This might tip the balance to get a change approved, and then after approval such owners could cut their declared values back down to something reasonable, and only pay a small extra tax for that small decision period. Harberger taxes impose a stronger penalty for declaring overly-low values than overly-high values.

A solution to this problem is to use, instead of declared values, prices for the purely financial assets that represent claims on all future tax revenue from the Harberger tax on a particular property. That is, each property will pay a tax over time; we could divert that revenue into a particular account, and an asset holder could own the right to spend a fraction of the funds in that account. Such tax-revenue assets could be bought and sold in financial markets, and could also be made conditional on particular zoning scenarios. As such assets are easy to create and duplicate, the usual speculation pressures should make it hard to manipulate these prices much in any direction.

A plan to temporarily inflate the declared value of a property shouldn’t do much to the market price for a claim to part of all future tax revenue from that property. So instead of summing over conditional differences in declared values to see if a zoning change is good, it is probably better to sum over conditional differences in tax revenue assets. Subsidized continuous market makers can give exact if noisy prices for all such differences, and for most property-scenario pairs this difference will be exactly zero.

So that’s the plan for using futarchy and Harberger taxes to pick zoning (and other land use limit policy) changes. Instead of just one declared value per property, we allow owners to specify declared values conditional on each approved zoning change (or not) scenario, and allow conditional purchases as well. By default, conditional values equal no-change values. We should tend more to adopt a zoning proposal when, during its decision day, the sum of its (tax-revenue-asset) conditional differences clearly and consistently exceeds zero.

Thanks to Alex Tabarrok & Keller Scholl for their feedback.

Added 11pm: One complaint people have about a Harberger tax is that owners would feel stressed to know that their property could be taken at any time. Here’s a simple fix. When someone takes your property at your declared value, you can pay 1% of their new declared value to get it back, if you do so quickly. But then you’d better raise your declared value or someone else could do the same thing the next day or week. You pay 1% for a fair warning that your value is too low. Under this system, people only lose their property when someone else actually values it more highly, even after considering the transaction costs of switching property.
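
A worked example of this rule, with invented numbers; the post leaves the exact settlement details (e.g. whether the original sale simply unwinds) unspecified, so treat the accounting here as one plausible reading:

```python
my_declared = 500_000            # my declared value, set too low
taker_new_declared = 600_000     # taker buys at 500k, then must declare anew

# To reclaim quickly, I pay 1% of the taker's *new* declared value:
fee = 0.01 * taker_new_declared
print(fee)                       # 6000.0: the price of a fair warning that
                                 # my declared value was too low

# Having reclaimed the property, I should redeclare at 600_000 or more,
# since otherwise someone can profitably repeat the grab tomorrow.
```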

Added 2Feb: I edited this post a bit. Note that with severe enough property limits, negative declared property values can make sense. For example, if a property must be maintained so as to serve as a public park, the only people willing to become owners are those who get paid when they take the property, and then get paid per unit time while they remain owners. In this way, city services can be defined and provided via this same decision mechanism.

Added 11July: On reflection, there’s not much need for the special 1% grab-back rule I proposed above. While it might be good rhetoric to allay fears, it isn’t actually needed. In principle it could reduce your loss from setting too low a price, but in practice I don’t think it will be possible to underprice that much; speculators will buy underpriced assets intending to sell them back.

Assuming that there’s a standard delay in transferring property, the moment someone grabs your property at your declared value, they must declare a new value. So you are either willing to grab it back at that price, and then set a new higher value, or you accept that they have a higher value for the property and can keep it. If you grab it back and set a higher value, they can of course take it at that new value; you can in effect go back and forth in an auction to see who values it more. Each time they grab from you, you will regret not having set a higher value; so this won’t go many rounds and will be settled quickly.

Replication Markets Team Seeks Journal Partners for Replication Trial

An open letter, from myself and a few colleagues:

Recent attempts to systematically replicate samples of published experiments in the social and behavioral sciences have revealed disappointingly low rates of replication. Many parties are discussing a wide range of options to address this problem.

Surveys and prediction markets have been shown to predict, at rates substantially better than random, which experiments will replicate. This suggests a simple strategy by which academic journals could increase the rate at which their published articles replicate. For each relevant submitted article, create a prediction market estimating its chance of replication, and use that estimate as one factor in deciding whether to publish that article.

We the Replication Markets Team seek academic journals to join us in a test of this strategy. We have been selected for an upcoming DARPA program to create prediction markets for several thousand scientific replication experiments, many of which could be based on articles submitted to your journal. Each market would predict the chance of an experiment replicating. Of the already-published experiments in the pool, approximately one in ten will be sampled randomly for replication. (Whether submitted papers could be included in the replication pool depends on other teams in the program.) Our past markets have averaged 70% accuracy; the work is listed at the Science Prediction Market Project page, and has been published in Science, PNAS, and Royal Society Open Science.

While details are open to negotiation, our initial concept is that your journal would tell potential authors that you are favorably inclined toward experiment article submissions that are posted at our public archive of submitted articles. By posting their article, authors declare that they have submitted their article to some participating journal, though they need not say which one. You tell us when you get a qualifying submission, we quickly tell you the estimated chance of replication, and later you tell us of your final publication decision.

At this point in time we seek only an expression of substantial interest that we can take to DARPA and other teams. Details that may later be negotiated include what exactly counts as a replication, whether archived papers reveal author names, how fast we respond with our replication estimates, what fraction of your articles we actually attempt to replicate, and whether you privately give us any other quality indicators obtained in your reviews to assist in our statistical analysis.

Please RSVP to: Angela Cochran, PM acochran@replicationmarkets.com 571 225 1450

Sincerely, the Replication Markets Team

Thomas Pfeiffer (Massey University)
Yiling Chen, Yang Liu, and Haifeng Xu (Harvard University)
Anna Dreber Almenberg & Magnus Johannesson (Stockholm School of Economics)
Robin Hanson & Kathryn Laskey (George Mason University)

Added 2p: We plan to forecast ~8,000 replications over 3 years, ~2,000 within the first 15 months.  Of these, ~5-10% will be selected for an actual replication attempt.

Toward An Honest Consensus

The original Star Trek series featured a smart computer that mostly only answered questions; humans made key decisions. Near the start of Nick Chater’s book The Mind Is Flat, which I recently started, he said early AI visions were based on the idea of asking humans questions, and then coding their answers into a computer, which might then answer the same range of questions when asked. But to the surprise of most, typical human beliefs turned out to be much too unstable, unreliable, incoherent, and just plain absent to make this work. So AI research turned to other approaches.

Which makes sense. But I’m still inspired by that ancient vision of an explicit accessible shared repository of what we all know, even if that isn’t based on AI. This is the vision that to varying degrees inspired encyclopedias, libraries, internet search engines, prediction markets, and now, virtual assistants. How can we all coordinate to create and update an accessible shared consensus on important topics?

Yes, today our world contains many social institutions that, while serving other functions, also function to create and update a shared consensus. While we don’t all agree with such consensus, it is available as a decent first estimate for those who do not specialize in a topic, facilitating an intellectual division of labor.

For example: search engines, academia, news media, encyclopedias, courts/agencies, consultants, speculative markets, and polls/elections. In many of these institutions, one can ask questions, find closest existing answers, induce the creation of new answers, induce elaboration or updates of older answers, induce resolution of apparent inconsistencies between existing answers, and challenge existing answers with proposed replacements. Allowed questions often include meta questions such as origins of, translations of, confidence in, and expected future changes in, other questions.

These existing institutions, however, often seem weak and haphazard. They often offer poor and biased incentives, use different methods for rather similar topics, leave a lot of huge holes where no decent consensus is offered, and tolerate many inconsistencies in the answers provided by different parts. Which raises the obvious question: can we understand the advantages and disadvantages of existing methods in different contexts well enough to suggest which ones we should use more or less where, or to design better variations, ones that offer stronger incentives, lower costs, and wider scope and integration?

Of course computers could contribute to such new institutions, but they needn’t be the only or even main parts. And of course the idea here is to come up with design candidates to test first at small scales, scaling up only when results look promising. Design candidates will seem more promising if we can at least imagine using them more widely, and if they are based on theories that plausibly explain failings of existing institutions. And of course I’m not talking about pressuring people to follow a consensus, just to make a consensus available to those who want to use it.

As usual, a design proposal should roughly describe what acts each participant can do when, what they each know about what others have done, and what payoffs they each get for the main possible outcomes of typical actions. All in a way that is physically, computationally, and financially feasible. Of course we’d like a story about why equilibria of such a system are likely to produce accurate answers fast and at low cost, relative to other possible systems. And we may need to also satisfy hidden motives, the unacknowledged reasons for why people actually like existing institutions.

I have lots of ideas for proposals I’d like the world to consider here. But I realized that perhaps I’ve neglected calling attention to the problem itself. So I’ve written this post in the hope of inspiring some of you with a challenge: can you help design (or test) new robust ways to create and update a social consensus?

Choose: Allies or Accuracy

Imagine that person A tells you something flattering or unflattering about person B. All else equal, this should move your opinion of B in the direction of A’s claim. But how far? If you care mainly about accuracy, you’ll want to take into account base rates on claimers A and targets B, as well as more specific signs on the accuracy of A regarding B.
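
For concreteness, here is what that accuracy-first update looks like as a minimal Bayes sketch, with all numbers invented for illustration:

```python
# How far should A's unflattering claim move my opinion of B?
prior_b_bad = 0.10       # base rate: fraction of targets who deserve the charge
p_claim_if_bad = 0.70    # chance someone like A makes this claim when it's true
p_claim_if_good = 0.20   # chance A makes it anyway (rivalry, gossip, error)

# Bayes' rule: P(B bad | claim) = P(claim | bad) * P(bad) / P(claim)
numerator = p_claim_if_bad * prior_b_bad
posterior = numerator / (numerator + p_claim_if_good * (1 - prior_b_bad))
print(round(posterior, 2))  # 0.28: opinion moves toward A's claim, but how far
                            # depends on base rates and on signs of A's accuracy
```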

But what if you care mainly about seeming loyal to your allies? Well if A is more of your ally than is B, as suggested by your listening now to A, then you’ll be more inclined to just believe A, no matter what. Perhaps if other allies give a different opinion, you’ll have to decide which of your allies to back. But if not, trying to be accurate on B mainly risks seeming disloyal to A and your other allies.

It seems that humans tend to just believe gossip like this, mostly ignoring signs of accuracy:

The trustworthiness of person-related information … can vary considerably, as in the case of gossip, rumors, lies, or “fake news.” …. Social–emotional information about the (im)moral behavior of previously unknown persons was verbally presented as trustworthy fact (e.g., “He bullied his apprentice”) or marked as untrustworthy gossip (by adding, e.g., allegedly), using verbal qualifiers that are frequently used in conversations, news, and social media to indicate the questionable trustworthiness of the information and as a precaution against wrong accusations. In Experiment 1, spontaneous likability, deliberate person judgments, and electrophysiological measures of emotional person evaluation were strongly influenced by negative information yet remarkably unaffected by the trustworthiness of the information. Experiment 2 replicated these findings and extended them to positive information. Our findings demonstrate a tendency for strong emotional evaluations and person judgments even when they are knowingly based on unclear evidence. (more; HT Rolf Degen)

I’ve toyed with the idea of independent juries to deal with Twitter mobs. Pay a random jury a modest amount to 1) read a fuller context and background on the participants, 2) talk a bit among themselves, and then 3) choose which side they declare as more reasonable. Sure sometimes the jury would hang, but often they could give a voice of reason that might otherwise be drowned out by loud participants. I’d have been willing to pay for this a few times. And once juries became a standard thing, we could lower costs via making prediction markets on jury verdicts if a case were randomly chosen for jury evaluation.

But alas, I’m skeptical that most would care much about what an independent jury is estimated to say, or even about what it actually says. For that, they’d have to care more about truth than about showing support for allies.

Can Foundational Physics Be Saved?

Thirty-four years ago I left physics with a Master’s degree, to start a nine-year stint doing AI/CS at Lockheed and NASA, followed by 25 years in economics. I loved physics theory, and given how far physics had advanced over the previous two 34-year periods, I expected to be giving up many chances for glory. But though I didn’t entirely leave (I’ve since published two physics journal articles), I’ve felt like I dodged a bullet overall; physics theory has progressed far less in the last 34 years, mainly because data dried up:

One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics may be to find. And their colleagues in theory development are of no help.

In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. Previously, physics foundations theorists were disciplined by a strong norm of respecting the theories that best fit the data. But with less data, theorists have turned to mainly judging proposed theories via various standards of “beauty” which advocates claim to have inferred from past patterns of success with data. Except that these standards (and their inferences) are mostly informal, change over time, differ greatly between individuals and schools of thought, and tend to label as “ugly” our actual best theories so far.

Yes, when data is truly scarce, theory must suggest where to look, and so we must choose somehow among as-yet-untested theories. The worry is that we may be choosing badly:

During experiments, the LHC creates about a billion proton-proton collisions per second. … The events are filtered in real time and discarded unless an algorithm marks them as interesting. From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.

One bad sign is that physicists have consistently, confidently, and falsely told each other and the public that big basic progress was coming soon.

Open Policy Evaluation

Hypocrisy is a tribute vice pays to virtue. La Rochefoucauld, Maximes

In some areas of life, you need connections to do anything. Invitations to parties, jobs, housing, purchases, business deals, etc. are all gained via private personal connections. In other areas of life, in contrast, invitations are made open to everyone. Posted for all to see are openings for jobs, housing, products to buy, business investment, calls for proposals for contracts and grants, etc. The connection-only world is often suspected of nepotism and corruption, and “reforms” often take the form of requiring openings to be posted so that anyone can apply.

In academia, we post openings for jobs, school attendance, conference attendance, journal publications, and grant applications for all to see, even though most people know that you’ll actually need personal connections to have much of a chance for many of these things. People seem to want to appear willing to consider an application from anyone. They allow some invitation-only conferences, talk series, etc., but usually insist that such things are incidental, not central to their profession.

This preference for at least an appearance of openness suggests a general strategy of reform: find things that are now only gained via personal connections, and create an alternate open process whereby anyone can officially apply. In this post, I apply this idea to: policy proposals.

Imagine that you have a proposal for a better policy, to be used by governments, businesses, or other organizations. How can you get people to listen to your proposal, and perhaps endorse it or apply it? You might try to use personal connections to get an audience with someone at a government agency, political interest group, think tank, foundation, or business. But that’s stuck in the private connection world. You might wait for an agency or foundation to put out an open call for proposals, seeking a solution to exactly the problem your proposal solves. But for any one proposal idea, you might wait a very long time.

You might submit an article to an open conference or journal, or submit a book to a publisher. But if they accept your submission, that mostly won’t be an endorsement of whether your proposal is good policy by some metric. Publishers are mostly looking at other criteria, such as whether you have an impressive study using difficult methods, or whether you have a book thesis and writing style that will attract many readers.

So I propose that we consider creating an open process for submitting policy proposals to be evaluated, in the hope of gaining some level of endorsement and perhaps further action. This process won’t judge your submission on wit, popularity, impressiveness, or analytical rigor. Its key question is: is this promising as a policy proposal to actually adopt, for the purpose of making a better world? If it endorses your proposal, then other actors can use that as a quality signal regarding what policy proposals to consider.

Of course how you judge a policy proposal depends on your values. So there might be different open policy evaluators (OPE) based on different sets of values. Each OPE needs to have some consistent standards by which they evaluate proposals. For example, economists might ask whether a proposal improves economic efficiency, libertarians might ask if it increases liberty, and progressives might ask whether it reduces inequality.

Should the evaluation of a proposal consider whether there’s a snowball’s chance in hell of it actually being adopted, or even officially considered? That is, whether it is in the “Overton window”? Should they consider whether you have so far gained sufficient celebrity endorsements to make people pay attention to your proposal? Well, those are choices of evaluation criteria. I’m personally more interested in evaluating proposals regardless of who has supported them, and regardless of their near-term political feasibility. Like how academics say we do today with journal article submissions. But that’s just me.

An OPE seems valid and useful as long as its actual choices of which policies it endorses match its declared evaluation criteria. Then it can serve as a useful filter, between people with innovative policy ideas and policy customers seeking useful ideas to consider and perhaps implement. If you can find OPEs who share your evaluation criteria, you can consider the policies they endorse. And of course if we ever end up having many of them, you could focus first on the most prestigious ones.

Ideally an OPE would have funding from some source to pay for its evaluations. But I could also imagine applicants having to pay a fee to have their proposals considered.

How To Fund Prestige Science

How can we best promote scientific research? (I’ll use “science” broadly in this post.) In the usual formulation of the problem, we have money and status that we could distribute, and they have time and ability that they might apply. They know more than we do, but we aren’t sure who is how good, and they may care more about money and status than about achieving useful research. So we can’t just give things to anyone who claims they would use it to do useful science. What can we do? We actually have many options.

Toward Micro-Likes

Long ago when electricity and phones were new, they were largely unregulated, and privately funded. But then as the tech (and especially the interfaces) stopped changing so fast, and showed big scale and network economies, regulation stepped in. Today social media still seems new. But as it hasn’t been changing as much lately, and it also shows large scale and network economies, many are talking now about heavier regulation. In this post, let me suggest that a lot more change is possible; we aren’t near the sort of stability that electricity and phones reached when they became heavily regulated.

Back in the early days of the web and internet people predicted many big radical changes. Yet few then mentioned social media, the application now most strongly associated with this new frontier. What did we miss? The usual story, which I find plausible, is that we missed just how much people love to get many frequent signals of their social connections: likes, retweets, etc. Social media gives us more frequent “attaboy” and “we see & like you” signals. People care more than we realized about the frequency, relative to the size, of such signals.

But if that’s the key lesson, social media should be able to move a lot further in this direction. For example, today Facebook has two billion monthly users and produces four million likes per minute, for an average of about three likes per day per monthly user. Twitter has 300 million monthly users, who send 500 million tweets per day, for less than two tweets per day per monthly user. (I can’t find stats on Twitter likes or retweets.) Which I’d say is actually a pretty low rate of positive feedback.
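
Those per-user rates follow from simple arithmetic on the quoted figures:

```python
# Back-of-envelope check of the feedback rates quoted above.
fb_users = 2_000_000_000                 # Facebook monthly users
fb_likes_per_minute = 4_000_000
likes_per_user_per_day = fb_likes_per_minute * 60 * 24 / fb_users
print(round(likes_per_user_per_day, 2))  # 2.88 -> "about three likes per day"

tw_users = 300_000_000                   # Twitter monthly users
tw_tweets_per_day = 500_000_000
print(round(tw_tweets_per_day / tw_users, 2))  # 1.67 -> under two tweets per day
```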

Imagine you had a wall-sized screen, full of social media items, and that while you browsed this wall the direction of your gaze was tracked continuously to see which items your gaze was on or near. From that info, one could give the authors or subjects of those items far more granular info on who is paying how much attention to them. Not only on how often how much your stuff is watched, but also on the mood and mental state of those watchers. If some of those items were continuous video feeds from other people, then those others could be producing many more social media items to which others could attend.

Also, so far we’ve usually just naively counted likes, retweets, etc., as if everyone counted the same. But we could instead use non-uniform weights based on popularity or other measures. And given how much people like to participate in synchronized rituals, we could also create and publicize statistics on what groups of people are how synchronized in their social media actions. And offer new tools to help them synchronize more finely.
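
As one hypothetical weighting scheme (purely illustrative; the post endorses no specific formula), each like could be weighted by the liker's own audience size:

```python
import math

def weighted_likes(liker_follower_counts):
    """Score a post by summing log-scaled liker popularity, so a like from
    a widely-followed account counts more than one from a throwaway
    account, but only logarithmically more."""
    return sum(math.log10(1 + n) for n in liker_follower_counts)

print(round(weighted_likes([10, 10, 10]), 2))   # 3.12: three obscure likers
print(round(weighted_likes([1_000_000]), 2))    # 6.0: one very popular liker
```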

My point here isn’t to predict or recommend specific changes for future social media. I’m instead just trying to make the point that a lot of room for improvement remains. Such gains might be delayed or prevented by heavy regulation.
