Toward An Honest Consensus

The original Star Trek series featured a smart computer that mostly just answered questions; humans made the key decisions. Near the start of Nick Chater’s book The Mind Is Flat, which I recently started, he says that early AI visions were based on the idea of asking humans questions and then coding their answers into a computer, which might then answer the same range of questions when asked. But to the surprise of most, typical human beliefs turned out to be much too unstable, unreliable, incoherent, and just plain absent to make this work. So AI research turned to other approaches.

Which makes sense. But I’m still inspired by that ancient vision of an explicit accessible shared repository of what we all know, even if that isn’t based on AI. This is the vision that to varying degrees inspired encyclopedias, libraries, internet search engines, prediction markets, and now, virtual assistants. How can we all coordinate to create and update an accessible shared consensus on important topics?

Yes, today our world contains many social institutions that, while serving other functions, also work to create and update a shared consensus. While we don’t all agree with such a consensus, it is available as a decent first estimate for those who do not specialize in a topic, facilitating an intellectual division of labor.

For example: search engines, academia, news media, encyclopedias, courts/agencies, consultants, speculative markets, and polls/elections. In many of these institutions, one can ask questions, find closest existing answers, induce the creation of new answers, induce elaboration or updates of older answers, induce resolution of apparent inconsistencies between existing answers, and challenge existing answers with proposed replacements. Allowed questions often include meta questions such as origins of, translations of, confidence in, and expected future changes in, other questions.
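To make these operations concrete, here is a minimal sketch of what such a question-and-answer repository might offer as an interface. Everything here (the names ConsensusRepository and Answer, their methods, the confidence field) is a hypothetical illustration of the operations listed above, not a description of any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    question: str
    text: str
    confidence: float   # how settled the consensus is, in [0, 1]
    history: list = field(default_factory=list)  # prior versions, for meta questions about origins and changes

class ConsensusRepository:
    """Hypothetical interface mirroring the operations named above."""

    def __init__(self):
        self.answers = {}   # question -> Answer

    def find_closest(self, question):
        """Find the closest existing answer; a real system would need
        semantic matching, but this sketch uses exact match only."""
        return self.answers.get(question)

    def propose(self, question, text, confidence=0.5):
        """Induce creation of a new answer where none yet exists."""
        if question not in self.answers:
            self.answers[question] = Answer(question, text, confidence)
        return self.answers[question]

    def update(self, question, new_text):
        """Induce elaboration or update of an older answer, keeping
        history so meta questions about past changes can be answered."""
        a = self.answers[question]
        a.history.append(a.text)
        a.text = new_text
        return a

    def challenge(self, question, replacement, evidence):
        """Challenge an existing answer with a proposed replacement.
        This sketch merely records the challenge; a real institution
        would need some adjudication process to resolve it."""
        a = self.answers[question]
        a.history.append(("challenge", replacement, evidence))
        return a
```

The hard part, of course, is not this bookkeeping but the incentives: what induces anyone to fill such a repository with accurate answers, which is what the rest of this post is about.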

These existing institutions, however, often seem weak and haphazard. They often offer poor and biased incentives, use different methods for rather similar topics, leave a lot of huge holes where no decent consensus is offered, and tolerate many inconsistencies in the answers provided by different parts. Which raises the obvious question: can we understand the advantages and disadvantages of existing methods in different contexts well enough to suggest which ones we should use more or less where, or to design better variations, ones that offer stronger incentives, lower costs, and wider scope and integration?

Of course computers could contribute to such new institutions, but they needn’t be the only or even main parts. And of course the idea here is to come up with design candidates to test first at small scales, scaling up only when results look promising. Design candidates will seem more promising if we can at least imagine using them more widely, and if they are based on theories that plausibly explain failings of existing institutions. And of course I’m not talking about pressuring people to follow a consensus, just to make a consensus available to those who want to use it.

As usual, a design proposal should roughly describe what acts each participant can do when, what they each know about what others have done, and what payoffs they each get for the main possible outcomes of typical actions. All in a way that is physically, computationally, and financially feasible. Of course we’d like a story about why equilibria of such a system are likely to produce accurate answers fast and at low cost, relative to other possible systems. And we may need to also satisfy hidden motives, the unacknowledged reasons why people actually like existing institutions.
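One familiar institution whose acts, knowledge, and payoffs can be specified this completely is a speculative market run by an automated market maker. Below is a minimal sketch using the logarithmic market scoring rule (LMSR); the liquidity parameter b and the numbers in the usage example are illustrative choices, not recommendations:

```python
import math

class LMSRMarket:
    """Sketch of a prediction market via the logarithmic market scoring
    rule (LMSR). Acts: anyone may buy shares in an outcome at any time.
    Knowledge: current prices are public to all. Payoffs: each share of
    the realized outcome pays 1; all other shares pay 0."""

    def __init__(self, outcomes, b=100.0):
        self.b = b                            # liquidity: larger b means a given trade moves prices less
        self.q = {o: 0.0 for o in outcomes}   # net shares sold of each outcome

    def _cost(self):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q.values()))

    def price(self, outcome):
        """Current price, i.e. the market's consensus probability estimate."""
        denom = sum(math.exp(qi / self.b) for qi in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """Buy shares of an outcome; the trader pays the change in the
        cost function, which also moves the public price."""
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

# A trader who thinks "yes" is underpriced pushes the public estimate
# up, paying now for a payoff of 1 per share if "yes" is realized.
market = LMSRMarket(["yes", "no"])
cost = market.buy("yes", 50.0)
print(f"paid {cost:.2f}; new P(yes) = {market.price('yes'):.2f}")
```

Whether the equilibria of this or some rival mechanism produce accurate answers fast and cheaply, over a wide scope of topics, is exactly the comparative question posed above.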

I have lots of ideas for proposals I’d like the world to consider here. But I realized that perhaps I’ve neglected calling attention to the problem itself. So I’ve written this post in the hope of inspiring some of you with a challenge: can you help design (or test) new robust ways to create and update a social consensus?

Info Cuts Charity

Our culture tends to celebrate the smart, creative, and well-informed. So we tend to be blind to common criticisms of such folks. A few days ago I pointed out that creative folk tend to cheat more. Today I’ll point out that the well-informed tend to donate less to charity:

The best approach for a charity raising money to feed hungry children in Mali, the team found, was to simply show potential donors a photograph of a starving child and tell them her name and age. Donors who were shown more contextual information about famine in Africa — the ones who were essentially given more to think about — were less likely to give. …

Daniel Oppenheimer … found that simply giving people information about a charity’s overhead costs makes them less likely to donate to it. This held true, remarkably, even if the information was positive and indicated that the charity was extremely efficient. …

According to [John] List, thinking about all the people you’re not helping when you donate … makes the act of giving a lot less satisfying. (more; HT Reihan Salam)

Fear Causes Trust, Blindness

Three years ago I reported on psych studies suggesting that we trust because we fear:

High levels of support often observed for governmental and religious systems can be explained, in part, as a means of coping with the threat posed by chronically or situationally fluctuating levels of perceived personal control. (more)

New studies lay out this process in more detail:

In the domain of energy, … when individuals [were made to] feel unknowledgeable about an issue, participants increasingly trusted in the government to manage various environmental technologies, and increasingly supported the status quo in how the government makes decisions regarding the application of those technologies. … When people felt unknowledgeable about social issues, they felt more dependent on the government, which led to increased trust.

When they feel unknowledgeable about a threatening social issue, … [people] also appear motivated to avoid learning new information about it. … In the context of an imminent oil shortage—as opposed to a distant one—participants who felt that the issue was “above their heads” reported an increased desire to adopt an “ignorance is bliss” mentality toward that issue, relative to those who saw oil management as a relatively simple issue.

This effect … is at least partly due to participants’ desire to protect their faith in the capable hands of the government. Among those who felt more affected by the recession, experimentally increasing domain complexity eliminated the tendency to seek out information. These individuals avoided not only negative information but also vague information, that is, the types of information that held the potential (according to pretesting) to challenge the idea that the government can manage the economy. Positive information was not avoided in the same way. (more)

I (again) suspect we act similarly toward medicine, law, and other authorities: we trust them more when we feel vulnerable to them, and we then avoid info that might undermine such trust. It is extremely important that we understand how this works, so that we can find ways around it. This is my guess for humanity’s biggest failing.
