I'd wager you have at least a few Talebian types amongst your followers who are allergic to the notion of IQ. If you'd used "intelligence" or "intellect" instead, perhaps it would have scored higher as a category?
With regard to the question of why, in the initial poll, IQ and credentials rank so low, especially relative to conversation style: suppose the two kinds of signal seem to be in conflict. If a person of high IQ is nevertheless failing to make a good case for their point of view, and/or clearly knows little about the matter, it seems rational to suspect that, however likely they are to be right in general, they may be mistaken about the specific matter at hand.
Maybe, most of the time, high IQ or credentials go hand-in-hand with informed and well-argued propositions, but in those cases, one does not have to choose between mixed signals.
Also, of course, rationally one should ask if one's own assessment of the quality of their argument or knowledge is accurate, but questions like those of this poll do not allow for such nuances. Interestingly, even over a matter that you do not personally know much about, you can often get a shrewd (though fallible) idea of whether another person does, by the way they speak of it.
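To put toy numbers on that trade-off, here is a minimal Bayesian sketch; the 70% prior and the 10%/60% likelihoods are invented purely for illustration, not taken from the poll or from any data.

```python
# Toy Bayesian update: a strong prior from IQ/credentials versus the
# evidence of watching someone argue badly. All numbers are assumptions.

prior_right = 0.70        # assumed P(right on this topic), given high IQ/credentials

# Assumed chances of observing a visibly weak, ill-informed case:
p_weak_if_right = 0.10    # people who are right rarely argue this badly
p_weak_if_wrong = 0.60    # people who are wrong do so far more often

posterior_right = (p_weak_if_right * prior_right) / (
    p_weak_if_right * prior_right + p_weak_if_wrong * (1 - prior_right)
)

print(f"P(right | weak argument) = {posterior_right:.2f}")  # about 0.28 here
```

Under these made-up rates, directly seeing the weak argument pulls the 70% prior down below 30%, which is roughly the intuition above in numbers.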
You should be aware that Drs. Dunning and Kruger are sneaking up on you with a big net.
If your poll respondents were selected from among those who follow you (here on your blog or anywhere else), I'd bet good money [#1 indicator that I'm worth listening to ;) ] that selection bias outweighs all other statistical factors that could possibly be used to evaluate the data.
Put another way, there's likely very little resemblance between what the average person thinks is persuasive and what the average reader of your selected topics thinks is persuasive. Of course the term "average person" is ambiguous, but at the moment I can't think of any definition of the term for which that sentence wouldn't still be true.
That said, even among the most rational people (i.e. your survey respondents), what people believe is persuasive to them and what's actually persuasive are often quite different, because persuasion is about emotion, not logic, and confirmation bias will prompt even the most rational people to rationalize in order to make the supposed logic conform to the actual emotional response.
So, I expect that if you change your survey to use vignettes, you'll get a little closer to what might be considered normal. But I still doubt you'll get anything truly useful until you specifically define what population you'd like to learn more about, and then figure out a way to get a statistically representative sample of that population.
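For what it's worth, here is a rough simulation of how badly self-selection can swamp sample size; the 30% base rate and the participation rates are pure assumptions, not a model of this poll's actual audience.

```python
import random

random.seed(0)

N = 1_000_000
# Hypothetical population: 30% would rank "IQ" among their top clues.
population = [random.random() < 0.30 for _ in range(N)]

# Self-selected respondents: assume those who would rank IQ highly are only
# a third as likely to see and answer the poll (invented participation rates).
self_selected = [x for x in population
                 if random.random() < (0.02 if x else 0.06)]

# A small simple random sample from the same population, for comparison.
random_sample = random.sample(population, 500)

print(f"true share:            {sum(population) / N:.3f}")
print(f"self-selected (n={len(self_selected)}): {sum(self_selected) / len(self_selected):.3f}")
print(f"random sample (n=500): {sum(random_sample) / len(random_sample):.3f}")
```

In this toy setup the self-selected estimate stays far from the true share even with tens of thousands of responses, while the small random sample lands close to it.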
Hmm. Yes. We need to separate out "what are the good heuristics for rationalists" from "what do average people find persuasive".
Wow! Useful!
I had a closely related set of thoughts a couple of years back, which I condensed into a piece, "How to tell who is right without argument". It was a list of heuristics much like this one: I recognise many of my items here (Turing test, rational style, hearing rival arguments, etc.), plus quite a few I didn't think of (mostly the psychometric scores and willingness to bet; I deliberately avoided credentials).
I'm pleasantly surprised to find my inclinations seem to be reflected in this survey, so I think Robin may be a bit harsh here; I would genuinely use these as heuristics if I didn't have time to dig through the arguments in detail.
And why is this so different? It actually isn't. You are just being fooled because these are identifiers rather than distinguishers. Who hasn't heard of someone with credentials spouting nonsense? But how many of those people are employed in the field, academic or otherwise, how many have years of experience in it, how many are highly ranked by their colleagues, and how many have earned awards and prizes? Credentials matter only as proxies for these things, and their absence is suspicious. Positions are likewise proxies for experience, and they predominate; status itself is such a proxy. Talk style depends on the speaker's capability to persuade and on our ability to understand, or to be persuaded when we can't. It is mostly a matter of experience in discussing the subject, though it can go awry when we are being conned, when we con ourselves because it is something we want to believe, or when a knowledgeable speaker is addressing disparate audiences. A larger problem is that many don't care who is right, but who is right for them, wanting what they want. All manner of sins are excusable then.
Credentials are often a proxy for years of experience. In a field filled with charlatans, though, sometimes they really are a better indicator than "experience", which is why there are laws against practicing medicine without a license. Twenty years of experience as a quack doesn't tell you much about how to actually treat disease.
I think a good analogy is non-consensus business ideas. Established consensus views represent the accumulated knowledge of society. The occasional non-consensus but correct idea that emerges is normally due to specialized, asymmetrical information; hence the emphasis of the "what secrets do you know that few people believe" startup question. High IQ and/or exceptional skills sometimes help but are not key.
I thought the poll was about what we think is the strongest Bayesian evidence, not about what we'd personally be likely to find persuasive. It seems like you meant to ask about the latter?
Definitely I'm in practice persuaded by things like IQ, credentials, etc. It's just that I don't think I'm right to be so persuaded by such things, so I'm motivated to change.
People have an incentive to agree with high-status individuals (credentials), and to signal loyalty (agree on related topics). Further, they are incentivised to deny that and to claim that their beliefs are pure. So yes, a vignette survey might give very different results, even among your Twitter followers.
Willing to bet on position seems overrated here. That's part of our local culture (so it's not surprising that your Twitter followers like it) but I don't expect it to be that diagnostic more broadly. And even within our local culture, it seems like there are large individual differences in willingness to make bets that aren't that closely related to having accurate views.
It's interesting to see the spread between School/professional credentials, Years of related experience, and “Skin in game” interest in topic. Those seem fairly similar to me as establishing a background level of competence. There are some sorts of disagreements where the person with one of those kinds of authority is usually right, but those are disproportionately the uninteresting disagreements. So I guess their informativeness depends on what pool the disagreement is drawn from. And other indicators like Score on quiz of related facts will provide much of the same information.
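A toy way to see the "depends on the pool" point; the 90%/55% hit rates and the pool mixes below are made up for illustration, not estimates of anything.

```python
# Toy model: each disagreement pits a credentialed person against a
# non-credentialed one. Assume the credentialed side is right 90% of the
# time in routine disputes but only 55% of the time in contested frontier
# disputes (both rates invented for this sketch).

def p_credentialed_right(share_routine: float) -> float:
    """P(credentialed side is right) for a pool mixing the two kinds of dispute."""
    return 0.90 * share_routine + 0.55 * (1.0 - share_routine)

for share in (0.9, 0.5, 0.1):
    print(f"{share:.0%} routine disputes -> credentialed side right "
          f"{p_credentialed_right(share):.0%} of the time")
```

The same credentials signal is strong when the pool is mostly routine disputes and close to a coin flip when it is mostly frontier ones.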
Agrees with you on related topics is in my top 4, and is the lowest ranked of my top 4 on this list. Though this one depends a lot on how it's interpreted/implemented. I interpreted it as being similar to Score on quiz of related facts, where I get the advantage of choosing what's on the "quiz" but also the disadvantage of having to grade it myself without an answer key. If I got to read things that each of the two people had written about topics where I felt that I was well-informed enough to evaluate their views, then I'd put a lot of weight on which person seemed to have views closer to the ones that I think are right. Whereas if a judge talked with me for an hour to get my views on related topics, and then reported that on the whole Alice agrees with me more than Bob does, then I wouldn't put much weight on that.
I feel like I make a lot of headway in evaluating the reliability of an argument by seeing if the author shares the source/process behind their conclusions, makes the structure of the argument clear, gives enough detail in the relevant places, doesn't rely on sketchy rhetorical tricks, and follows other good epistemic practices. Although that kind of argument evaluation is prominent in part because that's where my explicit attention is as I'm engaging with arguments; I'm not sure how informative it is compared to other factors. And it seems hard to distill it into a scorable indicator. Talks in logical/rational vs other style comes closest out of these 16, and made my top 4 when I initially read through the list. And actually Score on Turing test to explain rival arguments is also essentially part of this, since in practice a person demonstrates their understanding of the other person's arguments during the course of the argument rather than through a separate test.
I'm unsure about the direction of some of these indicators. What age is best? Is it a good sign or a bad sign if someone impugns the motives of their rival? Is complexity better than simplicity?
"More likely, poll respondents just aren’t being honest about, or just don’t know, what actually persuades them." The poll didn't ask people what actually persuades them. It asked them which 4 clues they wanted to see in order to have the best chance of guessing who is closer to the truth.
There are a few reasons I wouldn't make the jump from the survey results to "we should optimize for different things than we currently do if we want to persuade people." For one, your sample is likely highly unrepresentative, as Ben mentioned in his comment. I'm also skeptical of how far people's self-assessment of what they find persuasive matches up to the reality of what actually persuades them. I don't think this gap would be down to lying (at least not in an anonymous poll) so much as down to people liking to believe that they're more rational than they in fact are. For instance, one thing missing from your questions is tone of voice. In my experience, people will be far, far more receptive to arguments delivered in a deeper tone of voice at a slower tempo than to ones delivered rapidly in a high-pitched voice. Ditto for a whole host of other factors.
"More likely, poll respondents just aren’t being honest about, or just don’t know, what actually persuades them."
I feel confused that you don't mention that your sample is selected from those who follow you on Twitter and who waded through your long list of poll questions, which seems to me like a much more obvious source of bias in the responses, and one that's likely to explain away the disconnect you're pointing at.
In my experience, rationalists in fact are persuaded more by the things your respondents think they're persuaded by, but of course rationalists are a tiny fraction of the population and so most speakers wouldn't benefit by catering to them.
Yeah, yeah, I know :)