Beware Value Talk

Last week I wrote:

[For] lawyers supporting clients, engineers presenting designs, accountants presenting financial accounts, or academics presenting analyses, styles are more "no-nonsense."  They avoid colorful flashy emotional visual aids and music, use precise concise technical and unemotional language, make structured and standardized arguments, explicitly summarize and address opposing views, make methods and premises explicit, and warn early of conclusions and structures. … Authors who want to be seen as minimizing the propaganda element of their communications avoid using flashy styles. … [In contrast,] in such propaganda contexts, impressive charismatic leaders tend to speak in simple repetitive eloquent poetic vague emotional language, often with rambling structures, engaging stories, vivid colorful flashy emotional music and visual aids.

Communities of conversation "serious" about working together to make progress in understanding things tend not only to follow the above style conventions; they also tend to follow a key content rule: avoid arguing basic values.

Communities that mostly agree on how to evaluate claims can make a lot of progress, and such agreement comes naturally enough when discussion is restricted to "facts" connected to frequent observations and actions.  Such communities can also discuss how to achieve a few commonly accepted values. 

For example, athletes can talk about how to win games, lawyers about winning cases, salesmen about increasing sales, business managers about increasing profits, scientists about accelerating scientific progress, engineers about improving design efficiencies, medical experts about increasing health while decreasing costs, and economists about increasing policy "economic-efficiency." 

In all of these cases, explicit agreement on simply-expressed values allows group conversations to progress effectively toward achieving such values.  Members can specialize, develop and use specialized language and techniques, and evaluate others' contributions to the common cause.  In contrast, groups that freely argue about basic values tend to fragment into like-thinking cliques focused more on clique loyalty than on fairly crediting informative contributions.

Yes, agreed-on group values are usually simplified relative to the diverse complex values of group members; one can often identify cases where most group members do not prefer the choices selected by their explicitly agreed values.  And no doubt more discussion of basic values can sometimes improve important decisions relative to member values.  However, arguing basic values often imposes large costs; such discussions threaten social norms that preserve group cohesion and effective conversation.  Even professional ethicists usually avoid discussing political or religious issues.

I created Overcoming Bias hoping to jump-start a community with the common purpose of overcoming biases that keep them from seeing clearly.  For the best chance at collecting the most people serious about this goal, I expect we should have followed these standard conventions, maintaining a serious style and avoiding arguing basic values, especially unstructured and difficult-to-resolve value questions.  Alas, we took another path.

Added 17Feb: I am not saying that one should never mention basic values; I'm saying one should be aware of the large costs of arguing about them.  Show others in your community that you are aware of such dangers, and tell them why you expect each value discussion you initiate to be worth such costs.  This contrasts with an attitude of eagerly seeking out basic value topics to argue, even when they have little connection to important decisions the community must make.

  • http://macroethics.blogspot.com nazgulnarsil

    You’re already talking core values when you talk about cognitive bias. A lot of people don’t believe in positivism, materialism, reductionism, ev-psych, behavioralism, etc. (whatever scale/aspect you want to examine).

  • http://www.itaintthemustard.blogspot.com rhbee

    I believe the point is that agreement on those basic ideas does allow people who are on the same page to move forward. But how is this different from agreeing to disagree, a point of view that I believe stops all progress in a real discussion?

    For example, when you say “groups that freely argue about basic values tend to fragment into like-thinking cliques focused more on clique loyalty than on fairly crediting informative contributions,” you are saying that these groups won’t be united by any superordinate goals. Freely arguing out ideas is where the really good ideas come from. Accepting that others have the same values as yourself does, however, allow you to avoid the waste of time that speaking to the choir yields, which seems to me the exact opposite.

  • http://amckenz.googlepages.com Andy McKenzie

    Your argument reminds me of Kuhn in The Structure of Scientific Revolutions. Because of a paradigm, researchers are able to have empirically-based discussions that move science forward.

    I’m not sure why you are so skeptical about the potential of OB. Since I assume we are both fairly Bayesian, I must conclude that you have access to inside information. Too bad.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    What other issues that are difficult to resolve should serious communities avoid discussing? Should a community dedicated to overcoming biases avoid discussing religion? Politics? Current affairs in politics as opposed to political history?

  • Z. M. Davis

    “I created Overcoming Bias hoping to jump-start a community with the common purpose of overcoming biases that keep them from seeing clearly. For the best chance at collecting the most people serious about this goal […] Alas, we took another path.”

    Oh, dear. Are we really doing that badly?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    rhbee, the “serious” groups I listed all make real progress.

    Andy, yes, the analogy to Kuhnian paradigms is apt.

    Eliezer, it is a tradeoff between topic importance and the risk that discussing it poses to community cohesion. Yes, discussing current politics strains us more, especially when framed as basic values or ideology. Other framings may be less dangerous.

    Z.M. and Andy, I didn’t say we are doing badly.

  • John Maxwell

    Alas, we took another path.

    I think you can blame blog contributors for this. How much of what is written here actually deals with bias?

    My guess is that there simply isn’t that much interesting to say about bias. (Although I’ve got some ideas that I plan to write about when Less Wrong comes out.)

  • http://www.nancybuttons.com Nancy Lebovitz

    Even professional ethicists usually avoid discussing political or religious issues.

    Isn’t that like trying to have mathematics without ever letting engineering happen?

  • http://profile.typekey.com/SoullessAutomaton/ a soulless automaton

    Are basic values really basic? How well do people’s basic values actually correspond to their high-level preferences, and would pointing out any inconsistencies change the individual’s high-level preference, or would it cause them to re-evaluate the so-called “basic values”?

    I am unconvinced that the problem typically lies in the topics of discussion; I suspect rather that it lies in the way the values are held by individuals, typically somewhat uncritically and as axiomatic despite possible inconsistencies with the goals desired. That is, “basic values” are often just a euphemism for fairly arbitrary and emotionally-driven biases.

    Also, Robin, in my experience a “serious” style is just as often a signal of trying to behave seriously to disguise a lack of substance as it is an attempt to communicate effectively. Don’t assume that because you’re more comfortable with a given style that it sends only the messages you think it sends, or that it’s required for the type of discussion you wish to have.

  • Adam

    Nancy Lebovitz:
    “Isn’t that like trying to have mathematics without ever letting engineering happen?”

    Robin doesn’t seem to be saying anything that extravagant. He seems to be saying that valuable development of opinion is more likely to take place if we avoid certain subjects.

    Regarding ethics, development of our agreement on ethical principles is more likely to take place if we avoid topics that are likely to arouse controversy, so if your aim is to develop these principles, then avoiding controversy is a good idea.

    That’s not to say that other people in other contexts can’t then apply these to practical situations.

  • http://willwilkinson.net/flybottle Aaron Brewer

    Maybe the path we went down, or are going down about 80% of the time, is the “problems and paradoxes of building a friendly AI” path, which is what drove me away despite my strong interest in overcoming bias. Maybe someone should start a philosophy of AI blog, and then this blog could become a place for talking about how to seek truth despite the limitations of human psychology.

  • Z. M. Davis

    Aaron, cf. “…And Say No More of It.”

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    soulless, yes “basic values” are often held uncritically and are inconsistent with ordinary goals; that doesn’t make them easier to discuss though. Not sure what you mean by a “lack of substance.”

    Nancy, you lost me.

  • http://www.nancybuttons.com Nancy Lebovitz

    Mathematics can be pursued for the beauty of it, but it also has practical applications.

    My question overdid it a little, but if professional ethicists don’t discuss crucial areas of human behavior, isn’t that abandoning a large part of their venue?

  • http://www.nancybuttons.com Nancy Lebovitz

    Actually, I agree that arguing about basic values can be a serious distraction if you’re trying to get something done; it’s just that the one sentence caught my eye.

  • http://stuartbuck.blogspot.com Stuart Buck

    Maybe someone should start a philosophy of AI blog, and then this blog could become a place for talking about how to seek truth despite the limitations of human psychology.

    He does have a point . . . .

  • http://profile.typekey.com/SoullessAutomaton/ a soulless automaton

    Robin, of course it doesn’t make them easier to discuss, but how is that a reason not to discuss them? What makes such values less of a bias to be overcome than any other? Especially in cases where values are in conflict, an inability to see clearly the implications can easily lead to results suboptimal to both values.

    Such topics should be avoided only insofar as the people involved are incapable of discussing them dispassionately.

    And what I meant was that apparently serious, no-nonsense styles can be just as readily used to disguise non-informational intent as more flashy styles can, by couching shaky arguments in a manner that sounds professional, e.g. the use of obfuscating jargon or mathematical reasoning not anchored to concrete measurement. While you personally may be comfortable with a formal style, and do communicate well that way, you seem to cite the benefits of such a style in a way that suggests a lack of awareness that some people will interpret its use as indicative of deception, not clarity.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    My own view leans more toward “Many disagreements are secretly about values; expose the values to isolate the causes of the disagreement.”

  • http://distributedrepublic.net/archives/2009/02/11/evpsych-vs-economics Patri Friedman

    I can somewhat see the point, but I think ignoring big, important issues tied in with bias is a cop-out. Sure, a forum where everyone agreed they were just trying to reason logically might make quicker progress, but towards a much narrower and less important goal.

    If a whole bunch of really smart people focus on that, you might get something like Arnold Kling’s depiction of modern macroeconomics – castles of mathematical models completely divorced from reality. Or Peter Thiel’s depiction of statistical arbitrage – fancy computer algorithms for predicting how stocks will move next week with no understanding at all of how the global financial system works.

    I think we scientists have an attraction to simple concepts like economic efficiency, and fields like mathematics, because we love optimizing based on clear rules. This is why I like games. But the big problems in life are messy and foundational. To me, avoiding them is sticking your head in the sand.

    And anyway, most groups working on a problem have a goal in mind. Shared assumptions are useful inasmuch as they help achieve shared goals. To throw any discussion of shared assumptions out of the window risks having bad assumptions get calcified.

  • Daniel Burfoot

    This is one of the great intractable problems. You cannot simply say “we are all going to agree on a set of shared values, because only once we make that convention will we be able to move forward” without actually having a candidate set of values that everyone can agree on.

    There is an exactly analogous problem in AI research. AI is a pre-paradigm field: there is no shared commitment, few common terms, little agreement on what the real problems are. This is evident from looking at the standard text (Russell and Norvig) and observing that the chapters don’t really have anything to do with one another. AI is just a grab-bag of ideas that people thought had something to do with intelligence.

    This, incidentally, is why I consider Rodney Brooks to be one of the great philosophers of AI (contra Eliezer). Not because of his technical ideas, but because he proposed a paradigm within which AI could move forward. The paradigm is: build robots, put them in the real world, observe the problems they encounter, and then solve those problems. Now, this paradigm has some deficiencies, but it is at least articulable. It at least provides a reality-driven principle for guiding research. Researchers cannot dream up logical puzzles, then create systems that solve those puzzles, and claim they have solved an important problem in AI – Brooks called this “puzzlitis”.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    All, see my addition to the post.

    Eliezer, even if many disagreements are secretly about values, if arguing basic values is expensive to community conversation health, that suggests we have a limited budget of disagreement causes we can expose. Spend your budget carefully.

    Soulless, if no-nonsense styles are just as easily used for propaganda as other styles, then why do so many authors in so many fields explicitly say that they use such a style to show they are avoiding propaganda, with most of their readers believing that they do in fact better avoid propaganda? Are these readers all just mistaken?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, I agree with the amended post.

  • michael vassar

    Hmm. I do see a recent shift on OB toward talking a lot about values, but this shift seems to me to post-date a decline in OB’s quality rather than precede it. Could it be that when people need to fill a hole in the conversation while thinking of something to say, they bring in values?

    Also, I don’t think that we have done poorly. OB has become a very popular blog, attracted a very capable community, and effectively disseminated some important ideas.

    To me, the most important failures in OB come from not even attempting what seems to me to be the most urgent issue: constructing a convincing argument, for someone who doesn’t already pursue truth, that they would be better off by their own values if they did so.

  • http://gov.state.ak.us burgerflipper

    I suspect this blog has reached far more people than the ideal you imagine, and even with its idiosyncratic obsessions about medicine and magical computers, it has likely made as much progress as such an endeavor could. (And I love some of that stuff.)

    Before you make good on your promise to take the ball and go home, you might consider an EY-style series/intuitive write-up on Bayesian disagreement.

    And dammit, get Cowen on that Bloggingheads.

  • James Andrix

    Robin: In all your examples, the subgroups can violate the values of general society in their focus on the agreed-upon value.

    Perhaps ‘collecting the most people serious about this goal’ should rank lower as a goal of the site than making sure rationalists have a better reputation than salespeople and lawyers.

  • Mikko

    This discussion seems to confuse actual values with their names.

    Many people have a strong emotional reaction to certain names of values. Humans seem to confuse the map with the territory more often when we are in an excited mood.

    I think most disagreements could be avoided by just avoiding the common names for values. This can be done by being more explicit. The names do not help anyway, because everybody understands these emotional value words differently.

    To me it seems that the cliques are based on the differences in the interpretations of these value words.
