27 Comments

Excellent point - to which I'd add that it's very common for people to give advice along the lines of "Do something I already do" with the implication that it's optimal - and living optimally must also be high status.


People have suggested it. I'd like it to be more comprehensive and more actionable; I think one should be able to derive specific estimates of how likely a study is to have found something real given its field, effect size, data availability, sample size, etc., which would be really cool to have in hand.
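One way to make that concrete (a minimal sketch, not anything specified above; the function name and the numbers are hypothetical) is the standard positive-predictive-value calculation: combine a field's base rate of true hypotheses, its typical statistical power, and its significance threshold into an estimate of how likely a significant finding is to be real.

```python
def prob_finding_is_real(prior_true, power, alpha):
    """Post-study probability that a statistically significant result is real,
    given the field's base rate of true hypotheses, typical power, and alpha
    (the standard positive-predictive-value calculation, ignoring bias)."""
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return true_positives / (true_positives + false_positives)

# Hypothetical numbers: a field where ~10% of tested hypotheses are true,
# studies run at 35% power, and alpha = 0.05.
print(prob_finding_is_real(prior_true=0.10, power=0.35, alpha=0.05))  # ~0.44
```

Plugging in a field's typical effect sizes and sample sizes (to estimate power) would give the kind of per-field estimate described above.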


Thank you. This is not immediately actionable, but it is nevertheless so much better than raw search results, or than what I could come up with within a couple of hours. Have you considered reformatting this appendix into a standalone essay?


Small and new firms do huge numbers of experiments.  Larger and older firms lack the ability to easily transmit information *within the firm* (transmission of information is hard) but still do huge numbers of experiments locally.


> You can't blind experiments on yourself (usually)

You can self-blind a lot of things; for example, supplements are generally pretty easy to blind.
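In practice, self-blinding a supplement just means having identical-looking active and placebo capsules pre-randomized into numbered containers so you don't know which is which until the end. A minimal sketch of generating such an assignment (the function and counts are illustrative, not anyone's actual protocol):

```python
import random

def blinded_assignment(n_days, seed=None):
    # Assign each day 'active' or 'placebo', balanced roughly 50/50.
    # Seal the mapping and don't look at it until the analysis stage.
    days = ["active"] * (n_days // 2) + ["placebo"] * (n_days - n_days // 2)
    random.Random(seed).shuffle(days)
    return {day + 1: condition for day, condition in enumerate(days)}

assignment = blinded_assignment(30, seed=42)  # e.g. {1: 'placebo', 2: 'active', ...}
```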

> and there is no control group.

If the effects are not long-term or permanent, you are your own control group (a "within-subject" design); you have multiple periods where the intervention is done or not done, you collect data in each, and at the end you do the analysis. (This obviously doesn't work for something whose effects might last for years or whose effect can only be observed by dying, but that excludes a great many things.)
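As a sketch of what that end-of-experiment analysis could look like (assuming daily measurements across alternating on/off periods; the data and the choice of test are illustrative):

```python
from scipy import stats

# Hypothetical daily scores from alternating intervention-on and
# intervention-off periods of a within-subject self-experiment.
on_days  = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3]
off_days = [6.5, 6.7, 6.4, 6.9, 6.6, 6.3]

# Simple two-sample t-test of on vs. off; with matched blocks, a paired test
# or a mixed model would also be reasonable.
t, p = stats.ttest_ind(on_days, off_days)
print(f"t = {t:.2f}, p = {p:.3f}")
```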


Literatures don't really exist in single locations; that's why they're 'literatures' and not 'books'...

I don't know of any single source giving data on all of them with hard citations, which is one reason I started compiling http://www.gwern.net/DNB%20... You might or might not find it useful.


> We are all familiar with the literature on meta-analyses, replication, randomization, and blinding, yes?

I'm familiar with the general ideas, and I feel that I understand why the best practices are the way they are, but when I'm trying to think of specific sources where they are rigorously backed, I'm drawing blanks.

I don't mean to impose, but I would appreciate a few pointers. Yes, I've stopped and used Google Scholar before asking.


Experiments that were "obvious and cheap" in the past have already been done, and their results have been incorporated into usual practice.  There is nothing to gain there.

However, this leaves three promising areas:

1. Experiments that are now obvious and cheap due to new technologies. Depending on the degree of obviousness, you're likely to have lots of competition, but this is a way to get to the cutting edge.
2. Experiments that are not obvious to others. If you know things your competitors don't, you may be able to capitalize on that.
3. Experiments that are not economical for others to perform, but which you can capitalize on due to unusual scale and/or efficiency.


Part of Seth Roberts' idea is to notice if your life temporarily gets better or worse, and then experiment to find the causes. 


 So? He has an existence proof that there are areas that under-use experimentation. It's not then a stretch to suggest that there are similar areas with wasted expenditures and missed opportunities - if you want to talk fractions of the economy, evidence-based medicine ought to be near and dear to your heart.


Those experiments are still a TINY fraction of the economy. 


Isn't that precisely the argument of Jim Manzi in _Uncontrolled_? And he is apparently well-off thanks to a company built on making it easier for businesses to run lots of experiments, on top of the reported successes of A/B testing at websites like Google or Amazon, which suggested that there were in fact huge profits to be gained in those areas.
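For reference, the core of the A/B testing mentioned here is just comparing outcome rates between randomly assigned variants; a minimal sketch (the counts are made up, and Fisher's exact test is only one of several reasonable choices):

```python
from scipy.stats import fisher_exact

# Hypothetical counts: [conversions, non-conversions] for each randomly
# assigned variant of a page.
variant_a = [120, 9880]   # 1.2% conversion
variant_b = [180, 9820]   # 1.8% conversion

odds_ratio, p_value = fisher_exact([variant_a, variant_b])
print(f"p = {p_value:.4f}")  # a small p-value suggests the difference isn't just noise
```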


The cost of sharing information isn't going to be the same for everyone. It wouldn't surprise me if the people who are conscientious enough to do good experiments on themselves are also the ones meticulous enough to describe what they've done accurately, but that write-up is enough work that many just aren't going to do it. Even if they're public-spirited, they still might be uncertain about how many people their cure is good for.


I'm with gwern. You can't blind experiments on yourself (usually) and there is no control group. Plus the bad data problem. Knowing how many of the experiments that get published are crap, I have very little trust in others' ability to do sound experiments.


What is weird is that you frame your deviations from habit as life "experiments" and dress them up with the trappings of science (e.g. recording the results in an overly detailed and time-consuming manner or adhering to the new habit with extreme regularity). That's all that needs explaining.

Normal people are also constantly seeking out new pleasures and trying to optimize various health and work things. But they frame it (to themselves and others) in a less autistic way, such as: "I've been cutting back on red meat lately... I've decided to go to the gym three times a week... I am going to try to stay more organized by writing to-do lists," etc.

My guess would be that you invest so much effort in taking this normal process to extremes because you have lots of free time or are trying to signal your eccentricity.


Applied to business, this argument says firms should do lots of experiments because they can't believe anything else that tells them what works in their industry. Since firms don't seem to do this, the argument implies that there are huge profits to be gained by deviating from the usual practice.
