13 Comments

I would like to apply this to Bayesianism in rationality as well. I think many people have read Yudkowsky on Bayesianism and their takeaway was to have coherent probability assignments. Whereas I think you should pursue more numerical input data and numerical theories, which you can use as a basis for calculating numerical probabilities. That's how I think you get the benefits of probability theory in your thinking.

(With that said I agree with Yudkowsky that qualitative conclusions about reasoning can be derived from probability theory, and Yudkowsky would agree with me that you should seek numbers and that there's no point to coherent probability distributions using made-up numbers. I have no disagreement with Yudkowsky himself, but do disagree with many who have read him.)

So, my take on Bayesianism is also "it's mostly about measurement."
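The contrast above, between coherent-but-made-up numbers and probabilities grounded in measurement, can be sketched as a plain Bayes update where every input is a measured frequency. All the specific numbers below are hypothetical, for illustration only:

```python
def bayes_update(prior: float,
                 p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# Suppose we had *measured*: the hypothesis was true in 30 of 100 past
# cases (prior = 0.30), the evidence appeared in 80% of true cases and
# in 10% of false cases. Then:
posterior = bayes_update(prior=0.30,
                         p_evidence_given_h=0.80,
                         p_evidence_given_not_h=0.10)
```

The arithmetic is trivial either way; the point is that each input is something you could go out and count, rather than a number invented to make the distribution cohere.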


I've heard about paying for results in mainstream discussion, in the context of medicine (I think under the name "basing compensation on outcomes"). The idea seemed to involve hospitals reporting their own outcomes, and then being paid based on their own reports. Sad that this is the only form of the idea I've heard about in mainstream sources.


The National Enquirer used to pay for information. Traditionally, paying for information was taboo for journalists; and yet the National Enquirer ended up with a stratospheric circulation and a lot of true information (albeit trivial information about celebrity sex lives, which was nonetheless interesting to the general public). The Enquirer, along with other chequebook journalists, created a market for information, and it got results. My hunch is that a lot of the information they got this way was sound; perhaps journalism would be better if all sources were paid. In such a system, knowing the going rate for sources and different types of information would tell you something interesting.


This is now one of my favorite OB posts.

I feel like this take on science, that "it is mostly about measurement," is unusual and important.


Yes. We need to be as skeptical of these ideas as of any others. There is rarely a case of objectively superior ideas; more often we have ideas that are less bad in certain regards while worse in others, and we should be as concerned with how they are worse as with how they may be better.

Many are gullible, but usually less often with experts than with flatterers and con men who tell them what they want to hear and make them feel good about themselves.


The default skepticism reminds me of the "Trust but Verify" heuristic, or its inverse, "Doubt but Verify". It is too costly to verify every assumption, so it makes sense to start from a position of trust or doubt based on your intuition and then try to verify when it is convenient to do so. The problem with deferred verification is that the timeline often falls outside our mental to-do list: we forget to verify and/or revisit our working assumptions.

Perhaps the data associated with the scientific and industrial revolutions can be thought of as one form of objective verification.


That's a pretty big leap from "communism didn't work" to "people prefer to be paid based on measurable output in modern western economies"...

Clearly, what people prefer depends on a number of things.

For not-great jobs, people only prefer it if it means they're reasonably well paid. Having to work hard just to make minimum wage isn't typically something people are into.


Question for Robin: as you say, a problem here is the ability to measure, and we're getting better at it.

How would your opinion of communism change if we developed technology that could give us better information than the market? (Noting the technological objection to central planning.)


Any such system will be game-able to some extent or other. The question is, I guess, to what extent? And to what extent do such systems cost us in intrinsic motivation?

People are motivated to do a good job for its own sake to some extent. In my experience anyway.


I was just commenting on what people do, not should, want. In the U.S. at least, my guess is that given the choice, fewer workers would opt for increased than decreased measurement of their own productivity & quality, so for better or worse most workers as workers will increasingly resent Science 2.0 as its capabilities expand.

As consumers, though, they may love it.


The old Soviet Union had a saying "We pretend to work and they pretend to pay us". Seems you are saying most people want that deal. But the collapse of the Soviet Union suggests otherwise.


This seems like the kind of thing a typical person mildly prefers be used on others and intensely prefers NOT be used on themself. So a pretty hard sell. Even addressing this problem in a single domain can be an epic challenge (teachers' unions, police unions).

Maybe a way to get the ball rolling is to develop and apply/track a Science 2.0 metric quantifying the extent to which the kind of approach you describe here already occurs, in many contexts, to see whether it correlates with various kinds of social benefit.


But of course prestige doesn't obviously induce a lawyer to win our case or promote justice, nor a doctor to make us well, nor a reporter to tell us the truth. Winning our case and making us well is a private benefit to us. Promoting "justice" is indeterminate and may or may not help the client. And as you yourself have noted, the consumers of news may prioritize aspects of it other than accurate info.
