Better living through predestination

In many religions there is a belief in ‘predestination’. While I am far from a religious scholar, predestination is roughly the idea that God has already foreseen and willed all future outcomes. Believing this threw up a curly problem for personal moral responsibility: if God had already decided who deserved to go to heaven, and who deserved to go to hell, why bother doing anything in particular? Your fate has already been sealed. In fact, it was sealed long before you were born. But it turned out there was still a strong incentive to behave righteously, so long as you didn’t know which group you belonged to: every time you did the right thing, you were producing evidence for yourself that you were one of those destined for heaven rather than hell. Your virtuous acts couldn’t change the outcome at all, but they could still offer a huge relief!

The same is true for various health-affecting behaviours. My go-to example is flossing, which is correlated with a significant extension in life expectancy (e.g. this). How much of the extension is caused by flossing, and how much is due to flossing being associated with other things that improve health, like diligence? I doubt anyone knows. But all else equal, if you are someone who flosses, you should expect to live longer than someone who doesn’t. The correlation is what matters for that prediction, not causation. That sounds like a good reason to start flossing to me. Your flossing may or may not change anything, but it will give you a compelling reason to expect to be blessed with good health. The same goes for drinking in moderation, exercising regularly, and so on. So take this realisation, and use it to stay motivated to do the things you thought you should be doing, because the expected benefits are even bigger than causal studies make them sound. Incidentally, people who are convinced by this argument live on average two years longer, so I wouldn’t recommend dwelling on it too long.

Enjoy the Easter weekend!

  • Ilya Shpitser

    “The correlation is what matters for that prediction, not causation. That sounds like a good reason to start flossing to me. Your flossing may or may not change anything, but it will give you a compelling reason to think you will be blessed with good health. ”

    I don’t think you want to use evidential decision theory, someone might take you to the cleaners.

    • VV

While I think this kind of argument is probably a misuse of it, I can’t see anything wrong with evidential decision theory.

      • Ilya Shpitser

        HAART (the HIV drug) administered in longitudinal observational studies is positively associated with death of HIV patients. Therefore, we shouldn’t administer HAART.

        In fact, if you follow evidential decision theory in medicine, you will go to jail for malpractice.

      • IMASBA

Isn’t medicine filled with “evidential decision theory”? If we could simultaneously model and understand every chemical reaction in the body, we wouldn’t need clinical trials now, would we? Many medicines “just work” and it’s still unknown precisely how they work. For many diseases we don’t even know precisely how they work.

Another field where “evidential decision theory” is king (this time not because there is no suitable alternative) is business: basically, if the new CEO wears lucky underwear and the stocks go up, he will get a pay raise for raising the stock and people will write adoringly about his “success”. It doesn’t matter if the boost in the stock is statistically more likely to be regression to the mean, or the result of some change the CEO had no control over (a resource discovery, or a crisis in some faraway country, for example), or whether the firm’s other personnel collectively had a much greater impact on whatever changed: he who sits at the top gets to take credit. It’s been that way since they called the Great Pyramid “Khufu’s (Cheops’) Pyramid” instead of “20,000 laborers’, artisans’ and architects’ Pyramid”.

      • Ilya Shpitser

        I think if you decide based on RCTs (or even based on observational studies that try to match members of control and test groups based on baseline characteristics) you aren’t doing evidential decision theory anymore.

        I agree that businesses do a lot of stupid things.

      • dmytryl

> I think if you decide based on RCTs (or even based on observational studies that try to match members of control and test groups based on baseline characteristics) you aren’t doing evidential decision theory anymore.

You’ve been learning about evidential decision theory on LessWrong, haven’t you? It’s sort of like learning physics on PESWiki.

      • VV

        You should condition your beliefs on all the evidence available.

        I suppose that conditioned on a positive HIV test, HAART is negatively associated with death.

      • Ilya Shpitser

        It’s not, because of health status confounding. Doctors give HAART to people who are already very sick.

      • VV

        It’s not, because of health status confounding. Doctors give HAART to people who are already very sick.

        So conditional on a positive HIV test and being very sick, HAART is negatively correlated with death.

Sometimes adjusting for too many things gives you bias in causal inference. (That is, sometimes you want a causal quantity that is a functional that is not just a conditional probability, and in these cases conditioning too much gets you into trouble).

        I don’t really understand what you mean by this. Please expand.

      • Ilya Shpitser

        “So conditional on a positive HIV test and being very sick, HAART is negatively correlated with death.”

        Yes, but the point is, you don’t always observe all confounders. You observe a correlation based on whatever you had the money to record. Say you didn’t record health status but only what the doctors decided. So you might conclude based on that observational study that you shouldn’t give HAART to AIDS patients. You would be wrong.
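To make the confounding concrete, here is a minimal simulation sketch. Every number in it (the share of very sick patients, the treatment propensities, the drug halving mortality) is invented purely for illustration:

```python
# Hypothetical illustration of confounding by indication: the drug truly
# halves mortality, but because doctors give it mostly to the very sick,
# the raw association between treatment and death runs the other way.
import random

random.seed(0)
n = 100_000
deaths = {True: 0, False: 0}   # death counts by treatment status
counts = {True: 0, False: 0}   # patient counts by treatment status

for _ in range(n):
    sick = random.random() < 0.5                      # unrecorded health status
    haart = random.random() < (0.9 if sick else 0.1)  # doctors treat the very sick
    p_death = 0.6 if sick else 0.05                   # baseline mortality
    if haart:
        p_death *= 0.5                                # treatment truly halves mortality
    died = random.random() < p_death
    counts[haart] += 1
    deaths[haart] += died

print(f"P(death | treated)   = {deaths[True] / counts[True]:.3f}")    # ~0.27
print(f"P(death | untreated) = {deaths[False] / counts[False]:.3f}")  # ~0.11
```

Conditioning on treatment alone, the treated die at more than twice the rate of the untreated, even though the treatment helps every patient who receives it.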

      • dmytryl

> Yes, but the point is, you don’t always observe all confounders. You observe a correlation based on whatever you had the money to record. Say you didn’t record health status but only what the doctors decided. So you might conclude based on that observational study that you shouldn’t give HAART to AIDS patients. You would be wrong.

        Yes, and if it was lung cancer and smoking, you would be right. Or if the decisions were made in ignorance of health status, in which case the correlation between treatment and death would strongly suggest that the treatment is harmful.

        A suggestion: remove the loaded examples with ill specified implicit knowledge, and speak of Foo that is taken by people who subsequently die prematurely of Bar, and not taken by people who subsequently do not die prematurely.

        Foo may be HAART for the HIV positive, or it may be blood-letting or aspirin for the hemophiliacs, and in the latter case you’d be right to refrain from Foo.

        If you have a biochemical model where Foo is effective for treating Bar in patients tested positive via Baz test, this model provides evidence that the output of decision process, controlled for the inputs to the decision process (such as Baz test), correlates positively with not dying of Bar.

It is not clear to me that evidential decision theory (a non-strawman variation thereof) acts any differently from causal decision theory in most circumstances, given knowledge of physics or the like. And in the absence of such knowledge – well, it is arguably sensible not to presume causality. A formal theory of decision making, AIXI, does not come pre-packaged with the notion of causality; yet there are certain fairly good optimality arguments for it – arguments which are entirely lacking for confused philosophical musings about ‘causality’.

      • Ilya Shpitser

        “A suggestion: remove the loaded examples with ill specified implicit knowledge”

        Look, this example is based on papers people actually wrote, which is based on a problem people in HIV actually have. The fact is, confounders are ubiquitous everywhere, and people have to deal with them.

I am not sure what you are saying about EDT, but I think you might be saying that if you observe enough things you just get the “universal causal DAG” and then everything is fine with EDT. Unfortunately, even that is not true. First, a theory that needs a universal causal DAG is hopelessly doomed here in the real world, where patients need treatments and we don’t have time or money to measure everything. Second, even if you observe everything, you still need to figure out causal directionality, which you cannot do by observations alone due to standard issues with observational equivalence of different causal DAGs.

      • dmytryl

        > Look, this example is based on papers people actually wrote, which is based on a problem people in HIV actually have.

        Hemophiliacs were actually treated with bloodletting and aspirin, too. Unless there’s knowledge that lets you discern those two examples, you can only be correct by sheer luck.

You need to state what the decision theory knows about Foo and Bar, and then we can see what the decision theory does, without introducing all the extra facts that the decision theory does not have, while being able to invoke counterexamples (e.g. what if Foo is bloodletting and Bar is hemophilia).

> Second, even if you observe everything, you still need to figure out causal directionality, which you cannot do by observations alone due to standard issues with observational equivalence of different causal DAGs.

You don’t need to observe everything. A formalized agent such as AIXI will, with few enough observations, get the gist of the causality, and use it. The problem with CDT, EDT, and so on, is that those aren’t actual theories; they’re very vague. (And causality is very ill-defined, e.g. see http://plato.stanford.edu/entries/causation-counterfactual/ )

      • Ilya Shpitser

        I don’t understand your first two paragraphs at all.

        AIXI (or anything else) cannot learn causality from observations alone due to standard impossibility theorems, similarly to how it cannot predict if a Turing machine will halt.

        You seem to have very strong opinions about whether something is EDT or CDT, if you think they are very vague. The definitions I am aware of are very precise (e.g. have math formulas in them).

CDT is about counterfactual causation, which has been understood precisely since Neyman’s time (the 1920s), and is certainly understood very well now, almost a century later. Causal inference is not philosophy anymore; it’s a (vibrant and growing!) branch of statistics. I think there were 73 causality papers in last year’s JSM.

        I do recommend doing the reading I suggested above.

      • dmytryl

        Has math in it, you say…

EDT uses P(O|A), i.e. the probability of outcome O given your decision A (or that’s the way I see it). That’s the math it has in it. The probability of outcome O given your decision A does not straightforwardly relate to an estimate obtained from a bunch of different people’s decisions. Controlling for confounders has got to be par for the course when finding the probability of the outcome given specifically your decision. I’m saying it is vague because you seem to see it differently, and I’ve seen plenty of varying understandings.

As for the impossibility theorems in question, I’d need links. AIXI tries every program and uses those that correctly predict the observations, weighted by their length. It can learn anything computable.

      • Ilya Shpitser

        “EDT uses P(O|A) i.e. probability of outcome O given your decision A (or that’s the way I see it) . That’s the math it has in it. The probability of outcome O given your decision A does not straightforwardly relate to an estimate obtained from a bunch of different people’s decisions. ”

        Doesn’t matter. I can replace different doctors in my example with the same doctor, and then ask that doctor whether he thinks A1 and A2 help or kill patients. If he then uses either E[Y | a1, l1, a2] or E[Y | a1, a2] (which is E(O|A) that EDT advocates) he should go to jail.

As for links, google for Markov equivalence in DAGs, for instance this:

        http://www.multimedia-computing.de/mediawiki//images/5/55/SS08_BN-Lec2-BasicProbTheory_3.pdf

or any standard textbook on causal inference or graphical models. As long as AIXI is only observing, it runs into standard limits. Also (this is obvious but worth stating), nobody uses AIXI to make actual decisions about drugs or anything like that. People use causal inference now and have been for at least a century or two (depending on how you count).

      • VV

As for links, google for Markov equivalence in DAGs, for instance this: http://www.multimedia-computing.de/mediawiki//images/5/55/SS08_BN-Lec2-BasicProbTheory_3.pdf

        It’s well known that the orientation of the edges of a Bayesian network is arbitrary to some extent, but I can’t see your point.

      • VV

        Say you didn’t record health status but only what the doctors decided.

Then you should consider health status as a set of latent variables and explicitly model the doctors’ decision process to estimate them.

        Clearly this is easier said than done, which is the reason why the effectiveness of a therapy is primarily assessed by double-blind studies. The type of studies you refer to just provide very weak evidence.

        That’s not a problem of evidential decision theory.

      • Ilya Shpitser

        “Then you should consider health status as set of latent variables and explicitely model the doctors’ decision process to estimate them.”

The doctors don’t use the underlying health status either when deciding; they use the observable effects of the same (which are recorded in the patient’s file). Your advice to model arbitrarily complex latents is doomed: most such modeling will result in misspecification bias, or be hopelessly expensive.

A better idea is to understand how to get unbiased causal effects when there are arbitrary unobserved confounders, but _you know where they are_. Luckily there are well-understood methods for this. These methods aren’t “evidential decision theory” though, because all such a theory can do is condition. Conditioning sometimes gets you in trouble with confounding.

      • VV

The doctors don’t use the underlying health status either when deciding; they use the observable effects of the same (which are recorded in the patient’s file).

        The doctors’ decisions are evidence for the underlying health status, as patient files are if they are available to you.

Your advice to model arbitrarily complex latents is doomed: most such modeling will result in misspecification bias, or be hopelessly expensive.

        Yes, and as I said, that’s why double-blind interventional studies are used.

        If your study just records doctors’ decisions and outcomes I don’t think you can even feasibly distinguish between an effective therapy and a therapy that is only as good as a placebo, or even one that does more harm than good. If rather than HAART it was homeopathy or bloodletting, what difference would you expect?

Luckily there are well-understood methods for this. These methods aren’t “evidential decision theory” though, because all such a theory can do is condition. Conditioning sometimes gets you in trouble with confounding.

        Clearly there are decision problems where it is more cost-effective to develop problem-specific decision procedures which don’t involve an explicit probability distribution estimation step followed by an explicit expected value maximization step. For instance, provided that your dataset contains enough information, you could use an opaque machine learning method, like a neural network, to learn a direct evidence-to-action mapping. I suppose that in addition to black-box methods there are specialized (and lawsuit-friendly) methods for medicine.

        This doesn’t mean there is anything intrinsically wrong with evidential decision theory. You are just approximating it to make your decision process tractable.

      • Ilya Shpitser

        ” For instance, provided that your dataset contains enough information, you could use an opaque machine learning method, like a neural network, to learn a direct evidence-to-action mapping.”

        You still don’t get it. Here’s the example study:

We randomize a treatment, call it A1. We then wait and measure the patient’s vitals; that’s L1. Then, based on these vitals, the doctor gives (or doesn’t give) some additional treatment A2. Finally, we measure the outcome (is the patient alive or not?), call it Y.

        We want to know if A1 and A2 are killing patients or not, given that L1 and Y are hopelessly confounded. Here’s what we can do:

(a) Look at E[Y | a1, a2, l1] (this is what EDT suggests). This is completely wrong (you will get bias, e.g. go to jail for malpractice). It doesn’t matter if you use a neural net, a support vector machine, or a non-linear regression to figure out this mapping; it will still be garbage.

        (b) Look at sum_{l1} E[Y | a1, a2, l1] p(l1) (this is the standard ‘adjusting for confounders’, and what people did for a long time in these cases). This is _also_ completely wrong, but already this would not be EDT anymore.

(c) What you have to do is look at this:
sum_{l1} E[Y | a1, a2, l1] p(l1 | a1). This is the so-called ‘g-formula’, and this will get you the effect without bias. If you don’t understand why, I suggest a google for ‘g-formula’ or perhaps a good read of Judea’s book, specifically chapters 3 and 4.

The point with EDT is you just act based on maximizing a functional of conditional probabilities. If you are randomizing, or if you are adjusting for confounders, or if you do the more complicated thing in (c), which is correct for longitudinal studies, you are interested in a causal connection between your action and the outcome, so you aren’t doing EDT anymore. I mean, you can call it EDT if you want to, but it’s really CDT by a standard definition. You can check wikipedia, or any standard textbook.
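Here is a toy simulation of this kind of sequential setup, with a structural model and parameters I made up purely for illustration (nothing from a real study). It checks numerically that (c) matches a trial where both treatments are randomized, while (a) and (b) do not:

```python
# Toy check: A1 randomized; vitals L1 confounded with outcome Y through an
# unobserved health status U; the doctor picks A2 by looking only at L1.
import random

random.seed(0)
N = 500_000

def bern(p):
    return 1 if random.random() < p else 0

# --- observational study ---
rows = []
for _ in range(N):
    u  = bern(0.5)                             # unobserved health status
    a1 = bern(0.5)                             # randomized first treatment
    l1 = bern(0.1 + 0.4*u + 0.5*a1)            # vitals (depend on U and A1)
    a2 = bern(0.8 if l1 else 0.2)              # doctor reacts to vitals only
    y  = bern(0.1 + 0.3*u + 0.2*a1 + 0.2*a2)   # outcome (say, survival)
    rows.append((a1, l1, a2, y))

def E_Y(a1, l1, a2):
    ys = [y for (A1, L1, A2, y) in rows if (A1, L1, A2) == (a1, l1, a2)]
    return sum(ys) / len(ys)

def p_l1(l1, a1=None):
    sel = [L1 for (A1, L1, _, _) in rows if a1 is None or A1 == a1]
    return sel.count(l1) / len(sel)

a1, a2 = 1, 1
plain     = E_Y(a1, 1, a2)                                        # (a), at l1=1
adjusted  = sum(E_Y(a1, l1, a2) * p_l1(l1) for l1 in (0, 1))      # (b)
g_formula = sum(E_Y(a1, l1, a2) * p_l1(l1, a1) for l1 in (0, 1))  # (c)

# --- ground truth: a trial where A1 and A2 are both randomized ---
truth = sum(bern(0.1 + 0.3*bern(0.5) + 0.2*a1 + 0.2*a2) for _ in range(N)) / N

print(f"(a) E[Y|a1,a2,l1=1]        = {plain:.3f}")      # ~0.69 (biased)
print(f"(b) sum E[Y|..] p(l1)      = {adjusted:.3f}")   # ~0.60 (biased)
print(f"(c) sum E[Y|..] p(l1|a1)   = {g_formula:.3f}")  # ~0.65
print(f"RCT  E[Y | do(a1=1,a2=1)]  = {truth:.3f}")      # ~0.65
```

Under this (made-up) model, only the g-formula reproduces the fully randomized answer.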

      • VV

I can’t really follow you. You keep saying that using E[Y | A1, A2, L1] is improper and should send you to jail, but you don’t provide an argument for that.

        Note that, due to the linearity of the expectation operator, the other formulas you provided are also computations of the expected value of Y, with respect to different conditional probability distributions. Are you sure you are not confusing the true conditional probability distribution p[Y | A1, A2, L1] with its many possible estimators (which may or may not be biased depending on confounding variables and so on)?

      • Ilya Shpitser

        No, I am pretty sure I am not confusing anything.

sum_{l1} E[Y | a1, a2, l1] p(l1 | a1) will give you the same number as if, instead of listening to L1, the doctor randomized both A1 and A2 (for an infinite population of patients). In this sense it gives “the causal effect”. The functionals in (a) and (b) will not give you the causal effect for this study (e.g. they will give you bias). If you think (a), (b) and (c) are all the same functional, you need to do some reading on basic probability.

      • VV

        So, assume I’m a doctor and I have a patient who has been already administered treatment a1, and now has vitals l1. I have to choose the second treatment.

        You are saying that I have to use a formula that ignores the value of l1. This is clearly absurd. I would love to see a reference to a guideline from the FDA, the NIH or whatever other medical authority supporting your point.

      • Ilya Shpitser

        What you are describing is this:

        http://en.wikipedia.org/wiki/Dynamic_treatment_regimes

What you have to use there is a variation of the g-formula with a policy. I suggest reading any references by “Robins” linked by the above Wikipedia article.

        See also this sentence in the article: “The use of experimental data, where treatments have been randomly assigned, is preferred because it helps eliminate bias caused by unobserved confounding variables that influence both the choice of the treatment and the clinical outcome.”

        If you can’t use experimental data, you have to use the g-formula instead (assuming your study satisfies certain conditions) to eliminate this bias. EDT doesn’t even know what “confounding” is, as it has no language to talk about causal concepts.

        The policy uses L1, but in a very particular way. It is certainly not EDT.

The FDA and the NIH use RCTs to establish effects. The g-formula will give you the same answer as an RCT in the example I gave. Anything that isn’t the g-formula will give you garbage instead.

      • VV

Read the “Mathematical foundation” paragraph of the Wikipedia article you cited. That’s the typical textbook version of the EDT formula.

        The article mentions the difficulty of inducing the optimal policies from the data due to confounding variables, but it makes clear that this is an estimation problem.

        You keep conflating estimation theory and decision theory. While actual algorithms may combine them, computing actions from data, estimation and decision are conceptually different problems.

      • Ilya Shpitser

You are confused. Dynamic treatment regimes necessitate a causal connection between the policy and the outcome. They are defined, ultimately, in terms of counterfactuals; see for instance:

        http://www.stat.lsa.umich.edu/~samurphy/papers/DTRbookchapter.pdf

        http://www.rss.org.uk/uploadedfiles/userfiles/files/Didelez_RSS_gR_new.pdf

        etc.

EDT doesn’t even know what those counterfactual things _are_. I am not sure you really understand the difference between CDT and EDT (there is more going on here than just “oh, there is an expectation and a conditioning bar, therefore it’s EDT”). So far, every clearcut example of the use of CDT you have classified as EDT. I can only conclude that, to you, the set of things under the heading of CDT is the empty set.

      • dmytryl

        > You should condition your beliefs on all the evidence available.

Precisely. Likewise, in the OP’s example: the decision to floss, or not to floss, is produced based on preferences, and produces no new evidence whatsoever.

      • Stephen Diamond

        Ilya,

        A question for clarification. Do you contend that the procedures Wiblin recommended in his untimely April Fool’s joke would be endorsed by competent decision theorists of the evidential school?

  • http://overcomingbias.com RobinHanson

    If it is good to do things so that you have evidence that allows you to expect good things, even when your actions don’t causally influence the outcomes, why isn’t it better to just expect good things even when you don’t have supporting evidence? You get to feel good without all the work. So skip the flossing, and just expect to live longer anyway, right?

    • http://twitter.com/srdiamond srdiamond

      I think you miss the point because the answer is straightforward: you should do things to feel good only when it helps you do things that are good. (You shouldn’t do things that feel good when it causes you to do bad or indifferent things.) The point is to do good things, not to feel good, as you would have it.

      • Charlie

        If we suspect for the sake of argument that Robin has a point, what would his point be?

      • http://twitter.com/srdiamond srdiamond

Were I to suspect that you have a point (which I do), I would surmise that your point is that I’m being uncharitable to Hanson. Shall I suggest that the principle of charity is a bit overdone in our circles (perhaps to accommodate Eliezer Yudkowsky’s writing style)?

        I think Robin was vaguely gesturing toward the illogical character of Wiblin’s post, and readers tend to be “charitable” when they agree. (For example, I agreed with a post misusing the label “evidentiary decision theory” because I agreed with the apparent message without looking closely at what it actually said.)

    • Norman

      ‘Expectations’ is not a choice variable.

  • philh

    To actually do a bayesian update in the direction of “the probability distribution on my lifespan has increased in expected value”, it seems you would need to be flossing *before* you learned that flossing is correlated with long life. Or at least not *because* you learned that.

    • http://www.gwern.net/ gwern

      If you aren’t flossing, and you learn that flossing correlates with lifespan but there is no data on whether this is merely correlation with something like Conscientiousness or actual causation, then it seems to me that as long as there’s a nonzero chance that the flossing is correlated, you’re better off flossing. All the times the true relationship is correlational, your flossing will be wasted, yes, but all the times the true relationship is causal, then you’ll benefit.

e.g. imagine that the base rate of ‘correlations turning out to be causations’ is 1%, flossing increases your life by 10 years, and flossing will cost you 1 day of time; then the expected value is (0.01 * 10) – (1/365.25) years, which is greater than zero.
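Spelled out in code (all three inputs are the made-up hypotheticals from above, not real estimates):

```python
p_causal     = 0.01        # assumed base rate of correlations being causal
years_gained = 10          # assumed life extension if flossing is causal
cost_years   = 1 / 365.25  # assumed lifetime cost: one day spent flossing

ev = p_causal * years_gained - cost_years
print(f"expected value: {ev:+.4f} years")   # +0.0973, i.e. greater than zero
```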

      And since the expected value is positive, then you would indeed increase your lifespan estimate after starting flossing.

      • Ilya Shpitser

        So is this like a minor version of Pascal’s wager for minor but time-wasting rituals? I mean I agree that flossing is beneficial, but this type of argument can also apply to all sorts of things that correlate with conscientiousness and lifespan, and that are a minor waste of time.

        So someone might “mug” a conscientious person who follows this rule by making their free time die a death of a thousand cuts.

      • http://www.gwern.net/ gwern

        > So is this like a minor version of Pascal’s wager for minor but time-wasting rituals?

Yes, it’s exactly like Pascal’s wager! Well, if you bring in gods, infinities, non-empirical base rates and all sorts of other things. -_- I applaud your contribution to the ongoing project of many people on both LW and OB to drain the phrase of any meaning and turn ‘Pascal’s wager’ into a buzzword to pejoratively dismiss any use of expected value you dislike.

> this type of argument can also apply to all sorts of things that correlate with conscientiousness and lifespan, and that are a minor waste of time.

No, it only applies to ones with large enough correlations which, combined with large enough chances of turning out to pay off, make it worthwhile. Just like when we’re pondering any other course of action – regardless of whether we’re discussing invasive surgeries with differing risks, or things which may only be correlations rather than causation. Same thing: we are making decisions under uncertainty and attempting to maximize our gains.

      • Ilya Shpitser

        Ok, but you now went from

        “it seems to me that as long as there’s a nonzero chance that the flossing is correlated, you’re better off flossing.”

which sounds suspiciously similar to “as long as there is a non-zero chance that [crazy fact], you’re better off [crazy action].”

to “large enough correlations” with “large enough chances of turning out to pay off.”

        I have no problems with that sort of consequentialism if your model linking correlation to % chance of causation is reasonable.

      • http://www.gwern.net/ gwern

        > Ok, but you now went from

ಠ_ಠ No, I did not. If you had read my comment, I even included a worked-out example, with numbers, of an expected-value calculation demonstrating my point.

      • Ilya Shpitser

        Yes, if you learn from someone that flossing correlates with lifespan, and that someone has a believable model of how often correlations end up with a partly or fully causal explanation, and if you integrate over all that and end up with a positive expected utility, and there is no easy/cheap way to get better causal information, then yes you should believe them and floss.

        I am curious, you do a lot of self-experimentation. Do you actually do this kind of calculation in your life? Have you ever changed your behavior based on something like this?

      • http://www.gwern.net/ gwern

        > someone has a believable model of how often correlations end up with a partly or fully causal explanation

        To go off on a tangent, I think one could try to derive a base rate from various meta-analyses of how often early correlations or associations are borne out by experimental trials; I suspect this has already been done for various psychology or medical fields, since I have some similar meta-analyses in my usual appendix.

        > Do you actually do this kind of calculation in your life? Have you ever changed your behavior based on something like this?

You mean, have I ever started or abandoned an activity based purely on correlational evidence? It’s hard to say; for the examples that come to mind, like taking vitamin D or, well, flossing, I think there either probably was experimental evidence (quite a lot in the case of vitamin D’s health benefits), or I could create my own evidence (like switching my vitamin D consumption from randomly through the day to upon awakening).

  • http://twitter.com/srdiamond srdiamond

I’ll grant that if you can fool yourself into flossing, you’re probably better off (that is, if you can apply this logical fallacy in isolation – which might be something of a miracle). The problem with your personal calculations is that they will cause you to over-invest in a behavior because you overestimate how beneficial it is. This problem doesn’t much affect flossing because of its triviality once a habit is formed, but if you believed that the more you flossed the better (and if this were in fact true correlatively), you might seriously distort your investments. Some health benefits will probably be like this – cumulative and involving large time investments (e.g., exercise). There, your self-deceiving calculations will stand to wreak havoc with your utility function. (See Ilya Shpitser’s comment.)

    • lump1

      Wow, an April Fools joke so devious that it actually occurs in March! I guess that’s why I didn’t see it coming! Well played!

      • http://twitter.com/srdiamond srdiamond

        Maybe Wiblin will start a trend: pre-April Fool’s jokes. Maybe we’ll see people trying to get the jump on April 1 the way merchants start advertising for Christmas right after Thanksgiving. Whoopee!

  • Dave Lindbergh

“people who are convinced by this argument live on average two years longer, so I wouldn’t recommend dwelling on it too long”

I think it’s an April Fools’ joke that you all aren’t getting.

  • dmytryl

    The outcome of your reasoning about flossing doesn’t provide you with any extra evidence about health on top of what goes into your reasoning process.

When you end up flossing, in a deterministic universe, that does establish the fact that at the big bang the conditions were such that Robert Wiblin would decide to floss. But it doesn’t create any new evidence that Robert Wiblin is a health-conscious person who is going to live a longer life.

    • lump1

Maybe the flossing itself does nothing useful health-wise, and all the positive correlation was from being more of a “follow routines and heed the advice of health experts” kinda guy. But then again, maybe you have some choice about whether you have that trait – surely it’s not 100% genetic – and flossing is one way to get started on an allegedly healthy routine. And once you start flossing, maybe it’s easier to incorporate other routine activities (like wine drinking?) that actually do cause health improvement. So then flossing will cause a health improvement, but not by doing anything for your health directly. A dream journal would have been equally effective.

      • dmytryl

Yes, it is probably the case that flossing is beneficial. You’re talking of a causal consequence, not of a correlation, though… Wiblin is talking of correlation, and flossing is an example of his choosing which hides his error.

        Consider a ritual Foo which people decide to do or not do, correlated with outcome Bar. The correlation may be a result of common factor influencing both the decision and the outcome – e.g. people who want Bar do Foo, and then attain Bar by other means. In this case, knowing the desire for Bar, the Foo does not provide any extra evidence that Bar would be obtained.

For a specific example, consider medicine in the past. People who used some Mercury Based Cream on their face were rich, cared for their health, and lived longer than people who did not use Mercury Based Cream, even though the cream was actively harmful and such harmfulness could have been inferred from the available data if only you controlled for the confounding variables that influence the decision.

  • Robert Koslover

    Flossing is good for your teeth and gums, says my dental hygienist. And she really ought to know, since keeping teeth clean and gums healthy are her business. Why not simply trust her advice, until or unless someone else you believe to have equal or better knowledge disagrees? And clean teeth/healthy gums are at least one part of maintaining good health, right? So that just might help you live longer, right? And even if not, the people you leave behind when you pass on will be more likely to remember you as having a nice smile. :)

  • http://www.facebook.com/marc.geddes.108 Marc Geddes

    OK, so…

    ‘You are presented with two boxes, one transparent (labeled A) and the other opaque (labeled B). You are permitted to take the contents of both boxes, or just the opaque box B. Box A contains a visible $1,000. The contents of box B, however, are determined as follows: At some point before the start of the game, the Predictor makes a prediction as to whether you will take just box B, or both boxes. If the Predictor predicts that both boxes will be taken, then box B will contain nothing. If the Predictor predicts that only box B will be taken, then box B will contain $1,000,000.’
    What do you do?

    • VV

      That’s the easy version. I prefer the one where both boxes are transparent :)

      • lump1

        Why does it matter, unless you’re worried that the opaque box has a cobra or something? If the only thing that could be in it is money, and you know that taking it won’t *cause* any ill effect, I don’t see why you’d just leave it.

      • VV

        If you see that both boxes contain money, will you take just one?

  • Curt Adams

You should try an example besides flossing, because there’s a fair amount of evidence flossing does improve health – big improvements in dental health after introduction of modern dental hygiene techniques, direct experience of dentists, and a plausible mechanism (flossing leads to fewer gum infections and a reduced chance of pathogenic bacteria getting into the body). There’s no slam-dunk RCT, but still it’s pretty good evidence. I can’t think of any at the moment, but I know I’ve seen things associated with lifespan that make me think it’s most likely a correlational effect. Something less likely to be a direct benefit would be a better frame for thinking about your idea.

  • lump1

    I think you have it exactly backwards. The things that you should get yourself to do are the things that *cause* good results. The standard example is this: If you really like smoking and it’s 100% harmless, you should do it, right? And this answer wouldn’t change if there was a gene that causes both a desire to smoke and cancer. People without the gene are not interested in smoking, and never get cancer. Sure, you want to be one of those people, but whether or not you are, your smoking won’t rewrite your genes, so you’d be stupid to not light up if you feel like it.

  • Stuart Armstrong

    >That sounds like a good reason to start flossing to me

    That’s a very bad reason to start flossing – the worst, in fact. Flossing is only evidence for underlying health if it is done for reasons correlated with health. But starting it for other reasons completely destroys its use as a signal.
