Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.
Exponential growth is typically limited in some way, so that it becomes "S"-shaped, i.e., it approaches an asymptote. Such trends follow the differential equation dN/dt = k•N•(1-N/L), so as N approaches the limit L, the factor (1-N/L) approaches zero, shutting off the growth that in the simpler equation dN/dt = k•N (whose solution is exponential) would continue forever. The "S"-shaped curve is pervasive in technological change, and to use simple exponentials is to argue that there are no limits to growth.
A good rule of thumb is: All trends have limits.
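To make the comparison concrete, here is a minimal numeric sketch of the two equations above. The values of k, L, and N0 are illustrative assumptions, not drawn from any real trend; the logistic function is just the closed-form solution of dN/dt = k•N•(1-N/L).

```python
import math

def exponential(n0, k, t):
    """Solution of dN/dt = k*N: unlimited growth."""
    return n0 * math.exp(k * t)

def logistic(n0, k, limit, t):
    """Closed-form solution of dN/dt = k*N*(1 - N/limit)."""
    return limit / (1 + ((limit - n0) / n0) * math.exp(-k * t))

# Illustrative parameters (assumed, not empirical).
n0, k, L = 1.0, 0.5, 100.0
for t in (0, 10, 20, 40):
    print(t, round(exponential(n0, k, t), 1), round(logistic(n0, k, L, t), 1))
```

Early on the two curves are nearly indistinguishable; the logistic one then bends over and flattens toward the limit L, while the pure exponential keeps climbing without bound.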
What does this mean? Anything rational? Risk vs. gamble? Does "gamble" here mean taking a very large risk past a certain threshold? Since this guy is a grad student in journalism, I think he should be held to a high standard of language use.
"In his closing arguments last week, the prosecutor, Charles Testagrossa, said, “We ask the police to risk their lives to protect ours.” I agree. But they shouldn’t have to gamble with them.
Kyle K. Murphy, a former lieutenant in the New York Police Department, is a graduate student in journalism at Columbia."
Any overcomingbias headliners doping for cognitive advantage? There've been some articles on that recently in the media. Thoughts on those articles?
I should add--anonymous, I would mention to your acquaintance that one of the many empirically demonstrated ways in which real people depart from Bayesianity is that "naive subjects do not distinguish between p(A|B) and p(B|A) in most circumstances" (Rational Choice in an Uncertain World). These can be very different quantities. So it might be the case that P(A|B) is high and P(B|A) is low, and a false stereotype forms that "As are generally Bs."
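A quick way to see how different P(A|B) and P(B|A) can be is to push made-up but plausible-looking numbers through Bayes' theorem. The figures below are purely hypothetical, chosen only to illustrate the asymmetry:

```python
# Hypothetical example: P(tall | basketball player) vs. P(basketball player | tall).
# All three input numbers are assumptions for illustration, not real statistics.
p_player = 0.001            # base rate of pro players in the population (assumed)
p_tall = 0.20               # base rate of "tall" people (assumed)
p_tall_given_player = 0.95  # most pro players are tall (assumed)

# Bayes' theorem: P(player | tall) = P(tall | player) * P(player) / P(tall)
p_player_given_tall = p_tall_given_player * p_player / p_tall
print(p_player_given_tall)
```

With these numbers, P(tall|player) is 95% while P(player|tall) is under half a percent. A reasoner who conflates the two conditionals would wildly overestimate how informative "tall" is about "player", which is exactly the mechanism by which a false stereotype of the form "As are generally Bs" can form.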
Anonymous, it makes me sad for ideological reasons too, but your acquaintance is correct: it is rational to use information about average group differences to make probabilistic inferences about an individual.
Of course, as you acquire information about an individual, the relevance of group information in the form of stereotypes quickly vanishes--and of course we need to worry about inaccurate stereotypes, and the possibility of stereotypes influencing behavior to the point of becoming "self-fulfilling prophecies"--but the basic point, that it's not wrong to generalize if you do it right, holds.
Anonymous - was your friend attempting to morally justify, or just epistemically justify stereotyping? I don't think Bayes has much to say about the former issue. See my old post on Racial Profiling (and especially Blar's comment) for further discussion.
Hmm ... I seem to have posted this on an older open thread by accident. I'll repost it here.
I'm wondering if Bayes' theorem can be used to justify racial stereotyping. I'm hoping it can't, for ideological reasons obviously, but someone I know recently made what seemed to be a reasonably strong case for it. I'm finding it difficult to refute because I don't have a solid grounding in probability, while he does (or at least claims to).
If this is a misconception, please dispel it for me so that I can refer my friend to your explanation (that's right: I'm asking the Bayesian gods to do it for me. Sorry, I'm just very busy right now, and I figure you're up to the job, and this is a place where more people will see it).
Hopefully, I'm surprised you see my blog posts as obviously libertarian, especially on medicine. I have almost never taken positions here regarding more versus less government intervention.
I think even asking the question is considered to be 'libertarian'; good liberals/conservatives don't ask whether the policies they favor are effective, they're supposed to know they are. Asking implies doubt and skepticism, which are sins.
Apparently there is a completion bias, in which we're more motivated to complete something close to finished than to spend our energy in other areas that would benefit us more. I'm concerned that completion bias may play a role in how scientists and thinkers are rewarded (are they rewarded more for finishing something than for sharing unfinished work? I don't know of prestigious publication venues for partially finished work.)
Thus, Aleks Jekulon (sp?) in my opinion deserves praise for putting a lot of unfinished work of his on the internet for the rest of us to look at. Thoughts from the regular OvercomingBias contributors about completion bias and how scientists and thinkers are rewarded?
Aleks Jekulon's website as a model:
Robin, nope. I interpret you as doing a little dance where you seem honest about the data, but always careful to frame the discussion so that your posture is "here are ways that with less government regulation/funding we can maintain or improve on the health status quo". Even though the data you fairly present often seems equally framable as "here are some ways in which government regulation/funding DOES get better results than the lack of it".
It seems, in my opinion, to be dialectic-seeking rather than enlightenment-seeking posturing. Audiences seem more attracted to media with a classic oppositional/dualistic undercurrent, such as individualist vs. collectivist, and often award more status and attention to participants in these dialectics than to people who put out media limited to empirical inquiry and problem-solving. I suspect this is a deeply rooted primate aesthetic: perhaps an intuition that it's more important to observe two alphas battle it out, and to choose the right side, than to observe one's natural environment and decide how to survive in it without considering one's relations with dominant and challenger alphas.
If I get a chance I'll try to rewrite some of your recent and classic posts on this topic, to show how I imagine they could read without what I think is the nonexplicit libertarian/individualist/small-govt. posturing.
ps I ended up adding here and there to this post, without a lot of time to edit it overall, so my apologies for any disorganization or incoherence.
Hopefully, I'm surprised you see my blog posts as obviously libertarian, especially on medicine. I have almost never taken positions here regarding more versus less government intervention. Do you, like James Hughes interpret my lack of explicitly endorsing more regulation as an endorsement of less regulation?
I'd like to request that Eliezer do a series of posts intuitively explaining financial-engineering investment algorithms, like the type used by Renaissance Technologies. As popularly inscrutable applied math goes, it's right up there with quantum mechanics and Bayesian statistics.
In response to this post by MTravern:
"Not only male, there is a heavy bias towards libertarianism (those two biases are not independent). Does this bother anyone, or are people happy with the default assumption that libertarianism is the path to objectivity?
The left has its own way of fighting bias (ie, Critical Legal Studies), which is very far removed from the style here. I wonder if there is any chance of fruitful discussion between these viewpoints?"
Yes, the bias towards libertarianism here bothers me. In particular how Robin frames health care/policy bias posts in the context of "hey, I'm kind of making this libertarian argument that less government funding/regulation of this or that would actually improve health or not harm it" on pretty much every post. At least it would be fun to see some self-critical meta-transparency from the regular OB contributors about how their biases warp their posts and their attempts to overcome bias. And critical legal studies (and its equivalents in the other social sciences, which heavily inform it) would be useful here, particularly various analyses of economists, philosophers, and physicists, looking at how bias can skew scientific inquiry and the presentation of results.
are people happy with the default assumption that libertarianism is the path to objectivity?
It's the other way around. The abandonment of certain biases is one path to libertarianism.
I've heard that some very experienced judges have said that, the longer they are on the bench, the more respect they gain for the decisions of juries.
The same phenomenon seems to occur among Tarot card users: the longer they go on using the cards, the more they believe in the wisdom of the cards.
See: John William Waterhouse: Ulysses and the Sirens - 1891.
The feed at http://www.overcomingbias.c... , which is in the feed-autodiscovery links in the HTML head, has silently stopped updating; the most recent entry in it is "Angry Atoms". I only noticed this when I happened to realize I hadn't seen any posts from this site recently; I expect many others will not have noticed.
At a minimum, it should be deleted (rather than remaining stale); better, it should be repaired.
I don't know if it's too late to post on this thread, but . . .
One thing I have tried to get people interested in is the apparently major influence of the choice-supportive bias in the current Democratic Presidential primary battle. Obama and HRC seem to have very similar political positions, and initially many, many Democrats didn't have strong feelings either way; there were lots of people saying "I'll vote for either one." But, gradually, people made decisions (somewhat arbitrarily, and based on maybe one or two factors - anchoring?), and over the months that followed, people's choices cemented. Now a huge percentage (double digits) of supporters of either candidate say they won't vote for the other Dem. Their choice between two similar candidates has become so entrenched that they won't even support the other Dem over a Republican, whose positions are in reality MUCH further from their beliefs than the alternate Dem's are!
Also, bonus, I am female.