Edward Jones & Co Settlement
California Attorney General Edmund G. Brown Jr. today announced a $7.5 million settlement with financial-services firm Edward Jones Co. for the company's...
Around the web, January 2
New York is among the states that have wisely enacted payee-notification; wonder whether that might be a reason this $300,000 Long Island defalcation was caught quickly [NYLJ] Big mess ensues after Corpus Christi political kingmaker and injury lawyer M...
I wonder what kind of effect relationships have on conflicts of interest. I would intuitively imagine that in a business where relationships are built and valued, conflicts of interest are reduced. But I also would have intuitively imagined that disclosing your conflict of interest would reduce the impact of that conflict (both by increasing the honesty of the advisor and increasing the awareness that estimators have of potential problems). For all I know, advisors might build relationships only to exploit them for greater future gains.
An interesting follow-up to the Cain, Loewenstein, and Moore study would be to increase the number of feedback rounds from three (rounds 4-6) to perhaps thirty. This would simulate ongoing relationships. It would also be interesting to allow estimators to choose which advisor they will "purchase" estimates from. While advisors may retain their conflict of interest, they would also be competing with other advisors to attract estimators as clients. This better resembles how the real world works in many conflict-of-interest situations.
Two forms of the experiment could explore how estimators react when the quality of advisors' recommendations spreads through word of mouth or through a more transparent reporting system. It would also be interesting to explore the system dynamics as the number of feedback rounds varies; a rough simulation of such a setup is sketched below. What might happen if participants know the game will be played for a predefined number of rounds, or if they are told it will continue for an unknown length of time? How would the final round differ from the first few?
Does anyone know about a relationship bias?
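To make the proposed extension concrete, here is a minimal simulation sketch of that longer game: thirty feedback rounds, several advisors with different biases, and an estimator who chooses whom to buy estimates from based on each advisor's track record. Everything in it (the bias sizes, noise, number of advisors, selection rule) is an illustrative assumption, not the actual Cain, Loewenstein, and Moore design or its parameters.

```python
# A minimal sketch (not the published design) of the proposed extension:
# many feedback rounds, several conflicted advisors, and an estimator who
# chooses which advisor to "purchase" an estimate from. The biases, noise,
# and selection rule below are illustrative assumptions only.
import random

random.seed(0)

N_ROUNDS = 30            # the longer horizon suggested above
BIASES = [0, 5, 10, 20]  # hypothetical upward bias each advisor adds
N_ADVISORS = len(BIASES)

# The estimator tracks each advisor's past absolute errors and, after trying
# everyone once, buys from the advisor with the best track record.
errors = {a: [] for a in range(N_ADVISORS)}

for rnd in range(1, N_ROUNDS + 1):
    true_value = random.uniform(50, 150)   # e.g. the value of a jar of coins

    untried = [a for a in range(N_ADVISORS) if not errors[a]]
    if untried:
        chosen = random.choice(untried)
    else:
        chosen = min(errors, key=lambda a: sum(errors[a]) / len(errors[a]))

    advice = true_value + BIASES[chosen] + random.gauss(0, 5)  # biased, noisy advice
    estimate = advice   # a naive estimator takes the advice at face value

    # Feedback after each round: the estimator learns the true value.
    # (Word of mouth or public reporting could share this across estimators.)
    errors[chosen].append(abs(estimate - true_value))

    if rnd in (1, 10, 30):
        print(f"round {rnd:2d}: bought from advisor {chosen} (bias {BIASES[chosen]})")
```

Under these assumptions the estimator settles fairly quickly on the least-biased advisor, which is the intuition behind letting competition, rather than disclosure alone, discipline conflicted advice.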
Strategic Responses to Disclosure Information
One of the major themes of my writings on disclosure laws is an example of the law of unexpected consequences: more disclosure law produces less...
There are certain industries, typically those with very concentrated expertise such as IT, where buyers give vendors' claims a remarkable degree of credibility. I have wondered for years why that's the case. It's very exciting to see someone looking into it from an academic perspective.
So says my most distinguished co-author, Richard Feynman.
Feynman, being an expert himself, had the luxury of not trusting them... Most of the examples of not trusting experts in his book come from moments when he, specifically, knew better. Great book, though.
Surely You're Joking, Mr. Feynman! (1985)
A collection of reminiscences from taped interviews with fellow scientist and friend Ralph Leighton. ISBN 0393316041
"I'll never make that mistake again, reading the experts' opinions. Of course, you only live one life, and you make all your mistakes, and learn what not to do, and that's the end of you."
So says my most distinguished co-author, Richard Feynman.
Sometimes it isn't hindsight bias. Sometimes it really isn't a surprising result. People aren't stupid. If they have their eyes open, then it stands to reason that a lot of scientific research is going to end up confirming their beliefs. Some, of course, won't, but a lot will.
"If these experts say their research is surprising"
I think you're misreading what they wrote, and so is HA. "Expected" probably doesn't refer to what people in general would or would not expect. These scientists did not, as far as I know, precede their experiment with a second piece of research in which they surveyed what the general population would have expected as the outcome. Statements about what was "expected" likely refer to what the scientists hypothesized would happen. Scientists commonly precede an experiment by stating a hypothesis about how it will turn out. That seems a likely candidate for what "expected" refers to.
Here's an example (with a horrendous background - I am picking it because it was literally the first page I clicked on):
http://www.uow.edu.au/eng/p...
I googled "experimental procedure", and I got that page. Notice that the page includes a statement of hypothesis, and notice that the statement of hypothesis starts as follows:
" Hypothesis: We expected to find the fractal dimension of a ball of any material, created by crumpling [...]"
Notice that the word "expected" crops up in the hypothesis. This confirms my idea that when scientists talk about what was expected, they are likely talking about their own hypothesis. And a statement of one's hypothesis should not be read as an assertion about what the general population would be surprised by.
HA, scientists have a motive to exaggerate the surprisingness of their findings, but we also know that people systematically underestimate the surprisingness of research because of hindsight bias. If people had over-discounted expert advice, might you have said, "Well, we all know about the anti-intellectual or anti-expert bias; we see it in society all the time"?
If these experts say their research is surprising, I tend to trust them, despite the conflict of interest...
The paper's available for free here.
"But coming clean didn't have the expected result." It really didn't have the 'expected' result? The experimenters really thought that non-experts would perfectly discount (or overdiscount) expert bias? This seems to be written in the narrative of 'this result is even more impressive because it's counterintuitive'. I think it's an interesting empirical result, but it seems to match intuition well -which is that not only are lay people not experts on a given topic, but they're not expert discounters of expert bias. Why would we be? That's one reason there's generally a "professional responsibility" element to expert certifications, licensings, and reviews.
Experts and professions are expected to monitor themselves and their organizations for bias. These studies indicate that disclosure of bias often may not be enough; sometimes another measure, such as recusal, may be necessary to protect non-expert interests. It's an idea expert licensing bodies already incorporate, although perhaps not to the degree they should to protect lay interests.
if trustworthiness is a trait, and those who are more trustworthy are more likely to disclose conflicts of interest, doesn't that suggest that we should trust those who disclose conflicts of interest more than we trust those who don't? the experiment seems to rely on a ceteris paribus condition that we wouldn't necessarily expect to hold in the real world.
(i'm aware that such an outcome wouldn't be a stable equilibrium in a signalling model, but i'm still inclined towards believing the signal.)
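For what it's worth, the intuition in the comment above can be made concrete with a back-of-the-envelope Bayes calculation. The base rate and disclosure rates below are made-up assumptions, chosen only to show the direction of the update, not estimates from the study.

```python
# Hypothetical rates only: suppose 60% of advisors are trustworthy,
# trustworthy advisors disclose a conflict 80% of the time, and
# untrustworthy advisors disclose it only 30% of the time.
p_trustworthy = 0.60
p_disclose_given_trustworthy = 0.80
p_disclose_given_untrustworthy = 0.30

p_disclose = (p_disclose_given_trustworthy * p_trustworthy
              + p_disclose_given_untrustworthy * (1 - p_trustworthy))

# Bayes' rule: P(trustworthy | disclosed)
posterior = p_disclose_given_trustworthy * p_trustworthy / p_disclose
print(round(posterior, 2))  # 0.8 -- disclosure raises the 0.6 prior to 0.8
```

Whether that update survives once advisors know disclosure is read as a signal of trustworthiness is exactly the equilibrium question raised in the parenthetical above.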
Interesting line in the article: "We're so used to the idea that the world is governed by personal relations, that if the person is in front of us they should be on our side".
Would people's opinions of experts be lower when they don't actually see the expert, but only read a press report or a book? If a renowned economist proclaims "we're heading into a recession" while your personal bank manager informs you "the economic outlook is rosy," who is believed?