Why?
There's a reason most essays use full sentences, with verbs and everything.
Play money incentives can be pretty weak.
What's your opinion of the prediction-solicitation platform Metaculus? That platform seems scalable, widely applicable, very cheap to run, with decent incentives for accuracy.
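For context on the "incentives for accuracy" point: platforms like Metaculus score forecasts with proper scoring rules, whose expected value is maximized only by reporting your true belief. Here is a minimal sketch using the textbook log and Brier scores (Metaculus's actual scoring formulas are more elaborate; this is just the underlying idea):

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Log score of a binary forecast; higher is better."""
    return math.log(p if outcome else 1.0 - p)

def brier_score(p: float, outcome: bool) -> float:
    """Brier score (squared error of the forecast); lower is better."""
    return (p - (1.0 if outcome else 0.0)) ** 2

# A forecaster who believes P(event) = 0.7 does best on average by
# reporting 0.7, not by shading toward the safe 0.5 or the bold 0.99:
belief = 0.7
for report in (0.5, 0.7, 0.99):
    expected = belief * log_score(report, True) + (1 - belief) * log_score(report, False)
    print(f"report {report:.2f}: expected log score {expected:.3f}")
```

Running this shows the honest report (0.7) earning the highest expected score, which is what "decent incentives for accuracy" cashes out to.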
Once you generate new consensus institutions, yes, you'd love to get Wikipedia to accept their conclusions.
That's not quite true, though. While Wikipedia officially says "no original research", in practice there is a lot of low-key consensus-building going on in the talk pages, especially those around controversial articles.
But more importantly, Wikipedia presently does much of what you describe, at massive scale, and will continue to do so. New consensus-building institutions cannot credibly ignore it; they have to try to compete with it (good luck) or interface with it in some form.
For example, a prediction market that was accepted as a source on Wikipedia (without the usual routine of needing to be cited in a major news outlet first) would have a gigantic advantage over other prediction markets.
So I think it makes a lot of sense to think about how to get Wikipedia to accept new kinds of consensus, and have it handle the distribution, translation, etc., rather than focus on building a standalone thing.
Desirable, sure, but current institutions have much worse problems than a lack of some desirable features.
Wikipedia's editing rules only allow it to communicate a few kinds of consensus achieved in a few other institutions. They don't allow it to generate consensus on its own.
They have a very limited range of applicability.
What's your opinion of crypto-solutions like Augur?
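For concreteness on how such a market turns individual trades into a consensus probability, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), a standard automated-market-maker design for prediction markets. Augur's actual matching mechanism differs, so treat this purely as an illustration:

```python
import math

class LMSRMarket:
    """Minimal LMSR market maker for a binary question. Traders buy
    shares; the instantaneous price doubles as the market's current
    consensus probability."""

    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity   # higher b = deeper, slower-moving market
        self.q = [0.0, 0.0]  # outstanding shares for NO / YES

    def _cost(self, q) -> float:
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome: int) -> float:
        """Price of one share on an outcome = implied probability."""
        total = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome: int, shares: float) -> float:
        """Buy shares on an outcome; returns the cost charged."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

m = LMSRMarket(liquidity=100.0)
print(m.price(1))            # 0.5 before any trades
m.buy(1, 50.0)               # a trader bets on YES
print(round(m.price(1), 3))  # consensus probability moves above 0.5
```

The liquidity parameter b trades off responsiveness against the subsidy the market maker risks: a larger b means each trade moves the consensus probability less.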
One thing that seems desirable in a system like this is that for each question it addresses, it be able to represent the arguments for and against various possible answers. This is lots of work but has many benefits (as a pedagogical tool, as a vehicle for achieving consensus via structured argumentation, as a representation of the relationships among propositions, etc.)
I have a vague memory that people at Xerox PARC may have been working on such a system (perhaps called ConsenSys?) in the '80s, but I don't know if anything came of it.
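To make the "arguments for and against each possible answer" idea concrete, here is one hypothetical representation; the names and structure are mine, not a reconstruction of the PARC system:

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    claim: str
    supports: list["Argument"] = field(default_factory=list)  # pro sub-arguments
    attacks: list["Argument"] = field(default_factory=list)   # con sub-arguments

@dataclass
class Question:
    text: str
    answers: dict[str, Argument] = field(default_factory=dict)

def render(arg: Argument, depth: int = 0, sign: str = "*") -> None:
    """Print the argument tree, i.e., the pedagogical-tool use case."""
    print("  " * depth + sign + " " + arg.claim)
    for a in arg.supports:
        render(a, depth + 1, "+")
    for a in arg.attacks:
        render(a, depth + 1, "-")

q = Question("Will institution X outperform Wikipedia at consensus-building?")
yes = Argument("Yes: dedicated incentives beat volunteer editing")
yes.supports.append(Argument("Prediction markets have a track record on hard questions"))
yes.attacks.append(Argument("Wikipedia's network effects are hard to overcome"))
q.answers["yes"] = yes
render(yes)
```

Because sub-arguments nest, the same structure doubles as the "representation of the relationships among propositions" mentioned above.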
I misinterpreted you to be referring to old-style printed encyclopedias, with articles written by paid experts in each domain, and infrequent updates. Many of your other requested features fail with that model, but Wikipedia seems to have made strides in the direction you are advocating.
But you say the problem is the limited range of topics that Wikipedia allows? Interesting! Would you mind listing a few sample topics on which you would like to offer humanity's consensus but Wikipedia disallows, to help your readers (or at least me) understand what is missing through some concrete examples?
I listed encyclopedias as one of the existing institutions. Wikipedia is quite limited in the topics on which it will speak.
We don't have to expose implicit models to create useful consensus.
Expert systems were not able to take into account the contextual nature of queries, because the human experts were using many non-explicit cues to navigate that contextual information. I'd say our definition of "expert" often includes the notion that they are doing much of their heuristic selection with System 1. Underestimating the intractable (exponential) size of this parameter space led to the early failures. We've since made great strides in two key areas: raw compute, and fast pruning of the parameter space via intermediate results.

One question is: how can we give experts prompts that expose their implicit models? We have a clue from the prediction literature in the form of judgmental bootstrapping. Are there general principles at play in the way experts learn to prune a really big decision tree down to a manageable one that calls their attention to the key causal levers? How might an org leverage such knowledge to make money in areas where expert time and attention are limited?
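For readers who haven't run into judgmental bootstrapping: you regress an expert's own past judgments on the cues available to them, and the fitted model, which applies the expert's implicit policy consistently, often out-predicts the expert. A minimal sketch on made-up data, where the cues and weights are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 past cases, three cues per case. The expert
# follows an implicit linear policy, corrupted by the inconsistency
# noise of case-by-case human judgment.
cues = rng.normal(size=(200, 3))
implicit_weights = np.array([0.6, 0.3, 0.1])
expert_judgments = cues @ implicit_weights + rng.normal(scale=0.5, size=200)

# "Bootstrap" the expert: fit a linear model to their own judgments.
# The recovered weights expose the implicit model; applying them
# uniformly, minus the noise, is what tends to beat the expert.
weights, *_ = np.linalg.lstsq(cues, expert_judgments, rcond=None)
print(np.round(weights, 2))  # approximately recovers [0.6, 0.3, 0.1]
```

The prompt-design question above then becomes: which cues do you put in front of the expert so that a regression like this captures the levers they actually use?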
Isn't Wikipedia a kind of "explicit accessible shared repository of what we all know"? Doesn't it offer its own method to "coordinate to create and update an accessible shared consensus on important topics"? I assume you find something missing from this existing example, but it isn't clear to me from your post what about Wikipedia doesn't satisfy your vision.