6 Comments

I have found my calling.


Agreed, it's a deep and difficult question.

I think that to really look for solutions, the concept would probably need to be broken down more. Verification is a single word for what is likely a complex process, involving various numerical methods combined with multiple social phenomena.

For example, the very concept of verification requires a reference to some authority or authorities; it's impossible to verify something unless you have something to verify it against. But the question of who decides what counts as a valid authority is very different from the question of whether some information matches the information approved by the authority; matching information to the source is likely much easier than agreeing who the valid authorities are.

And even if we assume we could all agree on what the valid authorities were (not realistic, but let's assume) - how do we then handle it when multiple recognized authorities disagree? This is crucial, since contradictions happen all the time even within science, to say nothing of the contrasting views of science, religion, and various other ideologies.

So at a bare minimum, I think determining what's verifiable probably requires answering at least 3 questions:

1. Which sources count as authorities?
2. In cases of conflict, how much weight should be given to each of these sources?
3. How well does the information to be verified match each of these sources?

Of course almost every single person will differ in their answers to these. And it's likely that the vast majority of people aren't even consciously aware of what their answers are, and will have varied and likely inconsistent answers for different information.

The value of game theory or similar advanced approaches is that they might be able to assess verifiability without having to break things down like this. The challenge, though, is that without more clarity on these questions, I'm not sure verifiability rises above being an aggregation of opinion, however that aggregation is performed. And opinion, even among highly experienced and aware people, can still be swayed to reflect agendas rather than facts; in fact, with enough media influence, opinion can probably be swayed significantly on almost any issue (as in the case of an authoritarian regime).

So without a deeper breakdown, I'm not sure verifiability can become more than a fancy opinion/propaganda meter.

Last note: Verifiability is not really a matter of saying something is or is not verified (black-or-white). For all but pure dogmatists it's necessarily a probability function, where we're trying to gain some measure of confidence that the information is correct (e.g. matches the authoritative sources). But I don't think this changes the minimum questions that need to be addressed to attempt to improve our understanding of it.
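
To make that concrete, here's a minimal sketch of how answers to those three questions could be combined into a single confidence score. It's purely illustrative - the linear weighting scheme and the example numbers are my own assumptions, not anything proposed above:

```python
# Illustrative sketch: verifiability as a weighted confidence score.
# The linear weighting and the example numbers are assumptions made up
# for this comment, not a mechanism proposed in the post.

def verifiability_score(assessments):
    """Combine per-authority assessments into one confidence value.

    assessments: list of (weight, match) pairs, where
      weight in [0, 1] answers "how much do we trust this authority?" (Q1/Q2)
      match  in [0, 1] answers "how well does the claim match it?"    (Q3)
    Returns a confidence in [0, 1] rather than a binary verified/unverified.
    """
    total_weight = sum(w for w, _ in assessments)
    if total_weight == 0:
        return 0.0  # no recognized authorities -> no basis for verification
    return sum(w * m for w, m in assessments) / total_weight

# Example: two recognized authorities that disagree about a claim.
print(verifiability_score([(0.7, 0.9), (0.3, 0.2)]))  # ~0.69
```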


The first bullet point is very interesting. It's like the machine intelligence age equivalent of asking a neutral/unknown village idiot for the answer and betting on their response.

The point in its favor is that this seems to be one of the ways people have historically figured out how to resolve bets. Unfortunately, the mechanism has broken down: marketers and pollsters now understand the distribution of beliefs of village idiots so well that they can't be relied on as pure black boxes. People can bet based directly on their knowledge of the average village idiot, rather than on the actual question itself.

By this measure, the pristine village idiot (one we're confident is a black box) has a value that is highly underrated, and can no longer be found in an information-rich society.


"The question of what is verifiable opens an important meta question: how can can we verify claims of verifiability?"

You cannot, because of Gödel's incompleteness theorem?


Some ideas:

* Have a black box machine learning model such that participants don't know how it works, but they believe it works at least a bit better than random guessing based on prior probabilities. Have them bet on what the model will say, since that info is verifiable.
* Play the "Schelling point game", where participants are asked to secretly share some piece of information and every participant who shares the most common response is rewarded (see the sketch after this list). Restrict to people who ought to know the info. Over time, weight the votes of previous round winners more. Maybe include machine learning models as some participants to improve honesty.
* Play the "double crux game", where any disagreement about something unverifiable is reduced to an assertion that is simpler to check, which is in turn reduced to an assertion that is simpler to check, until it eventually bottoms out in an assertion that all agree is verifiable.
* Rely on institutions which have a financial incentive to maintain a truthful reputation, such as auditing firms.
* Rely on polls, e.g. using services like Mechanical Turk. Sure, public opinion doesn't track truth very well, but maybe shifts in public opinion are correlated with truth, and those could be bet on. This could be another thing to aggregate with the machine learning and Schelling point stuff.
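
Here's a toy sketch of one round of that Schelling point game, just to make the mechanism concrete. The weighted-plurality scoring rule and the weight-update factor are my own illustrative choices, not something specified above:

```python
# Toy sketch of one round of the "Schelling point game" described above.
# The weighted-plurality scoring and the `boost` factor are illustrative
# assumptions, not a specified mechanism.
from collections import Counter

def schelling_round(responses, weights, boost=1.5):
    """Score one round of the coordination game.

    responses: dict mapping participant -> their secret answer
    weights:   dict mapping participant -> current vote weight
    Returns (winning_answer, updated_weights). Participants who gave the
    weighted-plurality answer get their weight multiplied by `boost`, so
    winners of earlier rounds count for more in later ones.
    """
    tally = Counter()
    for person, answer in responses.items():
        tally[answer] += weights.get(person, 1.0)
    winning_answer, _ = tally.most_common(1)[0]
    updated = {
        person: weights.get(person, 1.0) * (boost if answer == winning_answer else 1.0)
        for person, answer in responses.items()
    }
    return winning_answer, updated

# Example: three human participants plus a machine learning model as one more voter.
responses = {"alice": "yes", "bob": "yes", "carol": "no", "ml_model": "yes"}
weights = {p: 1.0 for p in responses}
print(schelling_round(responses, weights))
```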


Seeing verification as a social construct sounds right to me. At first it seems to muddy the project, but it may actually provide the solution. For example, this paper by Immorlica, Jackson, and Weyl studies identity verification. While applied there to identity, the idea of verification by comparing overlapping information sets generalizes. The standard common knowledge assumption stands at one extreme.
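
To illustrate what "verification by comparing overlapping information sets" could look like in the simplest possible form, here is a toy sketch. It's my own reading of the idea, not the model from the Immorlica, Jackson, and Weyl paper:

```python
# Toy sketch: confidence in a claim of identity grows with the overlap
# between what the verifier already knows and what the claimant can produce.
# This is an illustrative reading, not the paper's actual model.

def overlap_confidence(verifier_info: set, claimant_info: set) -> float:
    """Fraction of the verifier's information set that the claimant matches."""
    if not verifier_info:
        return 0.0  # a verifier who knows nothing can verify nothing
    return len(verifier_info & claimant_info) / len(verifier_info)

# Example: the claimant reproduces 2 of the 3 facts the verifier holds.
print(overlap_confidence({"dob", "address", "shared_event"},
                         {"dob", "shared_event", "unrelated_fact"}))  # ~0.67
```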
