
What Info Is Verifiable?

For econ topics where info is relevant, including key areas of mechanism design and law & econ, we often rely on a basic distinction: verifiable versus unverifiable info. For example, we might say that whether it rains in your city tomorrow is verifiable, but whether you feel discouraged tomorrow is not.

Verifiable info can much more easily be the basis of a contract or a legal decision. You can insure yourself against rain, but not discouragement, because insurance contracts can refer to the rain, and courts can enforce those contract terms. And as courts can also enforce bets about rain, prediction markets can incentivize accurate forecasts on rain. Without that, you have to resort to the sort of mechanisms I discussed in my last post. 

Often, traffic police can officially pull over a car only if they have a verifiable reason to think some wrong has been done, but not if they just have a hunch. In the blockchain world, things that are directly visible on the blockchain are seen as verifiable, and thus can be included in smart contracts. However, blockchain folks struggle to make “oracles” that might allow other info to be verifiable, including most info that ordinary courts now consider to be verifiable. 
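
As a minimal sketch of the oracle pattern (all names here are hypothetical, not any real chain's API): a contract can act directly on facts recorded on the chain, but for an off-chain fact like rain it can only check that a report was signed by a pre-designated oracle, and that oracle is what makes the fact verifiable to the contract.

```python
# A toy sketch, not any real blockchain's API. The contract can only act
# on info it can verify: here, an off-chain fact attested (signed) by a
# pre-designated oracle.
from dataclasses import dataclass

@dataclass(frozen=True)
class OracleReport:
    question: str    # e.g. "rain in city X on date Y?"
    answer: bool
    signature: str   # stand-in for a real cryptographic signature

def signature_valid(report: OracleReport, oracle_key: str) -> bool:
    # Placeholder for real signature verification.
    expected = f"signed:{oracle_key}:{report.question}:{report.answer}"
    return report.signature == expected

@dataclass
class RainInsurance:
    oracle_key: str  # the one party this contract trusts to report rain
    payout: float

    def settle(self, report: OracleReport) -> float:
        # The contract cannot see the weather; it can only check that the
        # report came from its designated oracle.
        if report.question != "rain in city X on date Y?":
            raise ValueError("report answers the wrong question")
        if not signature_valid(report, self.oracle_key):
            raise ValueError("report not signed by the trusted oracle")
        return self.payout if report.answer else 0.0
```

The hard part, hidden here in `signature_valid` and in the choice of `oracle_key`, is finding a reporter whose answers everyone will accept; that is exactly the "oracle" problem such folks struggle with.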

Wikipedia is a powerful source of organized info, but only of info that is pretty directly verifiable, via cites to other sources. The larger world of media and academia can say many more things, via its looser and more inclusive concepts of “verifiable”. Of course once something is said in those worlds, it can then be said on Wikipedia by citing those other sources.

I’m eager to reform many social institutions more in the direction of paying for results. But these efforts are limited by the kinds of results that can be verified, and thus become the basis of pay-for-results contracts. In mechanism design, it is well known that it is much easier to design mechanisms that get people to reveal and act on verifiable info. So the long-term potential for dramatic institutional gains may depend crucially on how much info can be made verifiable. The coming hypocralypse may result from the potential to turn widely available info into verifiable info. More direct mind-reading tech might have a similar effect.

Given all this reliance on the concept of verifiability, it is worth noting that verifiability seems to be a social construct. Info exists in the universe, and the universe may even be made out of info, but this concept of verifiability seems to be more about when you can get people to agree on a piece of info. When you can reliably ask many different sources and they all confidently give you the same answer, we tend to treat that info as verifiable. (Verifiability is related to whether info is “common knowledge” or “common belief”, but the concepts don’t seem to be quite the same.)
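
One way to make that "many sources agree" reading concrete is a sketch like the following, which treats info as verifiable when enough independently queried sources return the same answer (the function and threshold here are my hypothetical choices, not a standard definition):

```python
from collections import Counter
from typing import Callable, Sequence

def seems_verifiable(question: str,
                     sources: Sequence[Callable[[str], str]],
                     threshold: float = 0.9) -> bool:
    """Treat an answer as (socially) verifiable if at least a
    threshold fraction of queried sources give the same answer."""
    answers = Counter(source(question) for source in sources)
    _, top_count = answers.most_common(1)[0]
    return top_count / len(sources) >= threshold
```

Note that this check measures only agreement, not independence: sources that coordinate on a convenient answer pass it just as easily as sources that each look directly at the truth, which is the difficulty taken up next.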

It is a deep and difficult question what actually makes info verifiable. Sometimes when we ask the same question of many people, they will coordinate to tell us the answer that we or someone else wants to hear, or would punish them for contradicting. But at other times when we ask many people the same question, their best strategy seems to be just to look directly at the “truth” and report that, perhaps because they find it too hard to coordinate, or because implicit threats are weak or ambiguous.

The question of what is verifiable opens an important meta question: how can we verify claims of verifiability? For example, a totalitarian regime might well insist not only that everyone agree that the regime is fair and kind, a force for good, but also that they agree that these facts are clear and verifiable. Most any community with a dogma may be tempted to claim not only that their dogma is true, but also that it is verifiable. This can allow such dogma to be the basis for settling contract disputes or other court rulings, such as on crimes of sedition or treason.

I don’t have a clear theory or hypothesis to offer here, but while this was in my head I wanted to highlight the importance of this topic, and its apparent openness to investigation. While I have no current plans to study this, it seems quite amenable to study now, at least by folks who understand enough of both game theory and a wide range of social phenomena.  

Added 3Dec: Here is a recent paper on how easy mechanisms get when info is verifiable.


Advice Wiki

People often give advice to others; less often, they request advice from others. And much of this advice is remarkably bad. For example, the advice to “never settle” in pursuing your career dreams.

When A takes advice from B, that is often seen as raising the status of B and lowering that of A. As a result, people often resist listening to advice, they ask for advice as a way to flatter and submit, and they give advice as a way to assert their status and goodness. For example, advisors often tell others to do what they did, as a way to affirm that they have good morals, and achieved good outcomes via good choices.

These hidden motives understandably detract from the average quality of advice as a guide to action. And the larger this quality reduction, the more potential there is for creating value via alternative advice institutions. I’ve previously suggested using decision markets for advice in many contexts. In this post, I want to explore a simpler, cheaper approach: a wiki full of advice polls. (This is like something I proposed in 2013.)

Imagine a website where you could browse a space of decision contexts, connected to each other by the subset relation. For example, under “picking a career plan after high school” there’s “picking a college attendance plan”, and under that there’s “picking a college” and “picking a major”. For each decision context, people can submit proposed decision advice, such as “go to the highest ranked college you can get into” for “picking a college”. Anyone could then vote to say which advice they endorse in which contexts, and see the current distribution of votes over advice options.

Assume participants can be anonymous if they so choose, but can also be labelled with their credentials. Assume that they can change their votes at any time, and that the record of each vote notes which options were available at the time. From such voting records, we might see not just the overall distribution of opinion regarding some kind of decision, but also how that distribution varies with quality indicators, such as how much success a person has achieved in related life areas. One might also see how advice varies with level of abstraction in the decision space; is specific advice different from general advice?
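
To make the proposal concrete, here is a minimal data-model sketch of such a site (all names and fields are hypothetical, just one way to structure it): contexts linked by the subset relation, advice options attached to contexts, and timestamped votes that record optional credentials plus the options visible at voting time.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Context:
    name: str                      # e.g. "picking a college"
    parents: list = field(default_factory=list)  # broader contexts

@dataclass
class Advice:
    context: Context
    text: str                      # e.g. "go to the highest ranked college ..."

@dataclass
class Vote:
    advice: Advice
    voter_id: str                  # may be an anonymous pseudonym
    credentials: Optional[str]     # optional label, e.g. "mid-career engineer"
    options_seen: list             # advice options available when vote was cast
    cast_at: datetime = field(default_factory=datetime.utcnow)

def distribution(votes: list, context: Context) -> dict:
    """Current vote counts over advice options in one context,
    keeping only each voter's latest vote (votes can be changed)."""
    latest = {}
    for v in sorted(votes, key=lambda v: v.cast_at):
        if v.advice.context is context:
            latest[v.voter_id] = v
    counts: dict = {}
    for v in latest.values():
        counts[v.advice.text] = counts.get(v.advice.text, 0) + 1
    return counts
```

Slicing `latest` by `credentials` would give the quality-indicator breakdowns just mentioned.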

Of course such poll results aren’t plausibly as accurate as those resulting from decision markets, at least given the same level of participation. But they should also be much easier to produce, and so might attract far more participation. The worse our usual sources of advice, the higher the chance that these polls could offer better advice. Compared to asking your friends and family, these distributions of advice suffer less from particular people pushing particular agendas, and anonymous advice may suffer less from efforts to show off. At least it might be worth a try.

Added 1Aug: Note that decision context can include features of the decision maker, and that decision advice can include decision functions, which map features of the decision context to particular decisions.
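
In code terms (a purely hypothetical illustration), such a decision function is just advice parameterized by features of the decision maker:

```python
# Hypothetical advice-as-a-function: maps features of the decision
# context, including the decision maker, to a particular decision.
def college_advice(ctx: dict) -> str:
    if ctx.get("debt_tolerance", 1.0) < 0.3:
        return "take the best-funded offer"
    return "go to the highest ranked college you can get into"
```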


Toward An Honest Consensus

The original Star Trek series featured a smart computer that mostly just answered questions; humans made the key decisions. Near the start of his book The Mind Is Flat, which I recently started, Nick Chater says early AI visions were based on the idea of asking humans questions, and then coding their answers into a computer, which might then answer the same range of questions when asked. But to the surprise of most, typical human beliefs turned out to be much too unstable, unreliable, incoherent, and just plain absent to make this work. So AI research turned to other approaches.

Which makes sense. But I’m still inspired by that ancient vision of an explicit accessible shared repository of what we all know, even if that isn’t based on AI. This is the vision that to varying degrees inspired encyclopedias, libraries, internet search engines, prediction markets, and now, virtual assistants. How can we all coordinate to create and update an accessible shared consensus on important topics?

Yes, today our world contains many social institutions that, while serving other roles, also function to create and update a shared consensus. While we don’t all agree with such consensus, it is available as a decent first estimate for those who do not specialize in a topic, facilitating an intellectual division of labor.

For example: search engines, academia, news media, encyclopedias, courts/agencies, consultants, speculative markets, and polls/elections. In many of these institutions, one can ask questions, find closest existing answers, induce the creation of new answers, induce elaboration or updates of older answers, induce resolution of apparent inconsistencies between existing answers, and challenge existing answers with proposed replacements. Allowed questions often include meta questions such as origins of, translations of, confidence in, and expected future changes in, other questions.
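
Read as an interface, that list of operations might look like the following sketch (method names are mine; no existing institution exposes exactly this):

```python
from typing import Protocol, Sequence

class ConsensusRepo(Protocol):
    """The operations above, written as one hypothetical interface."""
    def ask(self, question: str) -> str: ...
    def closest(self, question: str) -> Sequence[str]: ...
    def request_new_answer(self, question: str) -> None: ...
    def request_update(self, question: str) -> None: ...
    def flag_inconsistency(self, question1: str, question2: str) -> None: ...
    def challenge(self, question: str, proposed_answer: str) -> None: ...
```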

These existing institutions, however, often seem weak and haphazard. They often offer poor and biased incentives, use different methods for rather similar topics, leave a lot of huge holes where no decent consensus is offered, and tolerate many inconsistencies in the answers provided by different parts. Which raises the obvious question: can we understand the advantages and disadvantages of existing methods in different contexts well enough to suggest which ones we should use more or less where, or to design better variations, ones that offer stronger incentives, lower costs, and wider scope and integration?

Of course computers could contribute to such new institutions, but they needn’t be the only or even main parts. And of course the idea here is to come up with design candidates to test first at small scales, scaling up only when results look promising. Design candidates will seem more promising if we can at least imagine using them more widely, and if they are based on theories that plausibly explain failings of existing institutions. And of course I’m not talking about pressuring people to follow a consensus, just to make a consensus available to those who want to use it.

As usual, a design proposal should roughly describe what acts each participant can do when, what they each know about what others have done, and what payoffs they each get for the main possible outcomes of typical actions. All in a way that is physically, computationally, and financially feasible. Of course we’d like a story about why equilibria of such a system are likely to produce accurate answers fast and at low cost, relative to other possible systems. And we may need to also satisfy hidden motives, the unacknowledged reasons for why people actually like existing institutions.
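
For a flavor of what payoffs that make accuracy the best strategy can look like, here is a standard logarithmic proper scoring rule (a textbook device, not one of this post's specific proposals): a risk-neutral participant maximizes expected score by reporting their true probability.

```python
import math

def log_score(report_p: float, outcome: bool) -> float:
    """Logarithmic scoring rule: score ln(p) if the event happens,
    ln(1-p) if it does not."""
    p = min(max(report_p, 1e-9), 1 - 1e-9)  # avoid log(0)
    return math.log(p if outcome else 1.0 - p)

def expected_score(report_p: float, true_p: float) -> float:
    """Expected score for someone who believes the event has
    probability true_p but reports report_p."""
    return (true_p * log_score(report_p, True)
            + (1 - true_p) * log_score(report_p, False))

# Honest reporting beats shading in either direction:
assert expected_score(0.7, 0.7) > expected_score(0.5, 0.7)
assert expected_score(0.7, 0.7) > expected_score(0.9, 0.7)
```

A full design candidate would still need to say who reports, when, and who sees and pays what; the scoring rule only fills in the payoff column.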

I have lots of ideas for proposals I’d like the world to consider here. But I realized that perhaps I’ve neglected calling attention to the problem itself. So I’ve written this post in the hope of inspiring some of you with a challenge: can you help design (or test) new robust ways to create and update a social consensus?


Info Cuts Charity

Our culture tends to celebrate the smart, creative, and well-informed. So we tend to be blind to common criticisms of such folks. A few days ago I pointed out that creative folk tend to cheat more. Today I’ll point out that the well-informed tend to donate less to charity:

The best approach for a charity raising money to feed hungry children in Mali, the team found, was to simply show potential donors a photograph of a starving child and tell them her name and age. Donors who were shown more contextual information about famine in Africa — the ones who were essentially given more to think about — were less likely to give. …

Daniel Oppenheimer … found that simply giving people information about a charity’s overhead costs makes them less likely to donate to it. This held true, remarkably, even if the information was positive and indicated that the charity was extremely efficient. …

According to [John] List, thinking about all the people you’re not helping when you donate … makes the act of giving a lot less satisfying. (more; HT Reihan Salam)


Fear Causes Trust, Blindness

Three years ago I reported on psych studies suggesting that we trust because we fear:

High levels of support often observed for governmental and religious systems can be explained, in part, as a means of coping with the threat posed by chronically or situationally fluctuating levels of perceived personal control. (more)

New studies lay out this process in more detail:

In the domain of energy, … when individuals [were made to] feel unknowledgeable about an issue, participants increasingly trusted in the government to manage various environmental technologies, and increasingly supported the status quo in how the government makes decisions regarding the application of those technologies. … When people felt unknowledgeable about social issues, they felt more dependent on the government, which led to increased trust.

When they feel unknowledgeable about a threatening social issue, … [people] also appear motivated to avoid learning new information about it. … In the context of an imminent oil shortage—as opposed to a distant one—participants who felt that the issue was “above their heads” reported an increased desire to adopt an “ignorance is bliss” mentality toward that issue, relative to those who saw oil management as a relatively simple issue.

This effect … is at least partly due to participants’ desire to protect their faith in the capable hands of the government. Among those who felt more affected by the recession, experimentally increasing domain complexity eliminated the tendency to seek out information. These individuals avoided not only negative information but also vague information, that is, the types of information that held the potential (according to pretesting) to challenge the idea that the government can manage the economy. Positive information was not avoided in the same way. (more)

I (again) suspect we act similarly toward medicine, law, and other authorities: we trust them more when we feel vulnerable to them, and we then avoid info that might undermine such trust. It is extremely important that we understand how this works, so that we can find ways around it. This is my guess for humanity’s biggest failing.
