26 Comments

This discussion seems to confuse actual values and their names.

Many people have strong emotional reactions to certain names of values. Humans seem to confuse the map with the territory more often when we are in an excited mood.

I think most disagreements could be avoided by simply avoiding the common names for values and being more explicit instead. The names do not help, because everybody understands these emotional value words differently anyway.

To me it seems that the cliques are based on the differences in the interpretations of these value words.


Robin: In all your examples, the subgroups can violate the values of general society in their focus on the agreed-upon value.

Perhaps 'collecting the most people serious about this goal' should rank lower as a goal of the site than making sure rationalists have a better reputation than salespeople and lawyers.


I suspect this blog has reached far more people than the ideal you imagine, and even with its idiosyncratic obsessions with medicine and magical computers, it has likely made as much progress as such an endeavor could. (And I love some of that stuff.)

Before you make good on your promise to take the ball and go home, you might consider an EY-style series/intuitive write-up on Bayesian disagreement.
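
(For anyone who hasn't followed the Aumann results: the core claim is that honest Bayesians who share a prior cannot agree to disagree once their evidence is common knowledge, since shared evidence forces a shared posterior. A minimal Python sketch of that pooling step, with made-up coin-flip counts, is below.)

# Illustrative sketch only; the flip counts are hypothetical. Two agents
# share a Beta(1, 1) prior on a coin's bias and each privately observe
# different flips of the same coin.
def beta_update(alpha, beta, heads, tails):
    # Conjugate update of a Beta prior on Bernoulli (coin-flip) data.
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

prior = (1, 1)                      # common prior
post_a = beta_update(*prior, 7, 3)  # A privately saw 7 heads, 3 tails
post_b = beta_update(*prior, 2, 8)  # B privately saw 2 heads, 8 tails
print(posterior_mean(*post_a))      # ~0.67: A leans "biased toward heads"
print(posterior_mean(*post_b))      # 0.25: B leans "biased toward tails"

# Once all the evidence is common knowledge, both compute the same posterior:
pooled = beta_update(*prior, 7 + 2, 3 + 8)
print(posterior_mean(*pooled))      # ~0.45: nothing left to disagree about

(The subtle part such a write-up would need to convey is Aumann's stronger result: exchanging posteriors alone, without the raw data, is already enough to force convergence.)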

And dammit, get Cowen on that Bloggingheads.


Hmm. I do see a recent shift on OB toward talking a lot about values, but this shift seems to me to post-date a decline in OB's quality rather than precede it. Could it be that when people need to fill a hole in the conversation while thinking of something to say, they bring in values?

Also, I don't think that we have done poorly. OB has become a very popular blog, attracted a very capable community, and effectively disseminated some important ideas.

To me, the most important failures in OB come from not even attempting what seems to me the most urgent issue: constructing a convincing argument, for someone who doesn't already pursue truth, that they would be better off by their own values if they did so.


Robin, I agree with the amended post.


All, see my added to the post.

Eliezer, even if many disagreements are secretly about values, if arguing basic values is costly to the health of community conversation, that suggests we have a limited budget of disagreement causes we can expose. Spend your budget carefully.

Soulless, if no-nonsense styles are just as easily used for propaganda as other styles, then why do so many authors in so many fields explicitly say that they use such a style to show they are avoiding propaganda, with most of their readers believing that they do in fact avoid propaganda more? Are these readers all just mistaken?


This is one of the great intractable problems. You cannot simply say "we are all going to agree on a set of shared values, because only once we make that convention will we be able to move forward" without actually having a candidate set of values that everyone can agree on.

There is an exactly analogous problem in AI research. AI is a pre-paradigm field: there is no shared commitment, few common terms, and little agreement on what the real problems are. This is evident from looking at the standard text (Russell and Norvig) and observing that the chapters don't really have anything to do with one another. AI is just a grab-bag of ideas that people thought had something to do with intelligence.

This, incidentally, is why I consider Rodney Brooks to be one of the great philosophers of AI (contra Eliezer). Not because of his technical ideas, but because he proposed a paradigm within which AI could move forward. The paradigm is: build robots, put them in the real world, observe the problems they encounter, and then solve those problems. Now, this paradigm has some deficiencies, but it is at least articulable. It at least provides a reality-driven principle for guiding research. Under it, researchers cannot dream up logical puzzles, create systems that solve those puzzles, and claim they have solved an important problem in AI; Brooks called this "puzzlitis".
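
(To make that concrete, here is a toy sketch of my own devising, not Brooks's actual code, of the behavior-based control style his paradigm produced: a prioritized stack of simple, sensor-driven behaviors instead of a central reasoning engine.)

from dataclasses import dataclass

@dataclass
class Sensors:
    obstacle_ahead: bool
    at_goal: bool

def avoid_obstacles(s):
    # Safety-critical reactive layer: fires only when the world demands it.
    return "turn_left" if s.obstacle_ahead else None

def seek_goal(s):
    # Runs whenever no higher-priority behavior claims the actuators.
    return "stop" if s.at_goal else "forward"

def control_step(s):
    # The highest-priority behavior that produces an action wins this tick.
    for behavior in (avoid_obstacles, seek_goal):
        action = behavior(s)
        if action is not None:
            return action
    return "idle"

print(control_step(Sensors(obstacle_ahead=True, at_goal=False)))   # turn_left
print(control_step(Sensors(obstacle_ahead=False, at_goal=False)))  # forward

(The point of the design is that every behavior answers directly to sensor readings, so any problem the robot runs into is, by construction, a problem the real world posed.)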


I can somewhat see the point, but I think ignoring big, important issues tied in with bias is a cop-out. Sure, a forum where everyone agreed they were just trying to reason logically might make quicker progress, but towards a much narrower and less important goal.

If a whole bunch of really smart people focus on that, you might get something like Arnold Kling's depiction of modern macroeconomics - castles of mathematical models completely divorced from reality. Or Peter Thiel's depiction of statistical arbitrage - fancy computer algorithms for predicting how stocks will move next week with no understanding at all of how the global financial system works.

I think we scientists have an attraction to simple concepts like economic efficiency, and fields like mathematics, because we love optimizing based on clear rules. This is why I like games. But the big problems in life are messy and foundational. To me, avoiding them is sticking your head in the sand.

And anyway, most groups working on a problem have a goal in mind. Shared assumptions are useful inasmuch as they help achieve shared goals. To throw any discussion of shared assumptions out the window risks letting bad assumptions become calcified.


My own view leans more toward "Many disagreements are secretly about values; expose the values to isolate the causes of the disagreement."


Robin, of course it doesn't make them easier to discuss, but how is that a reason not to discuss them? What makes such values less of a bias to be overcome than any other? Especially in cases where values are in conflict, an inability to see clearly the implications can easily lead to results suboptimal to both values.

Such topics should be avoided only insofar as the people involved are incapable of discussing them dispassionately.

And what I meant was that apparently serious, no-nonsense styles can be just as readily used to disguise non-informational intent as flashier styles can, by couching a shaky argument in a manner that sounds professional, e.g. by using obfuscating jargon or mathematical reasoning not anchored to concrete measurement. While you personally may be comfortable with a formal style, and do communicate well that way, you seem to reference the benefits of such a style in a way that suggests a lack of awareness that some people will interpret its use as indicative of deception, not clarity.


"Maybe someone should start a philosophy of AI blog, and then this blog could become a place for talking about how to seek truth despite the limitations of human psychology."

He does have a point...


Actually, I agree that arguing about basic values can be a serious distraction if you're trying to get something done; it's just that the one sentence caught my eye.


Mathematics can be pursued for the beauty of it, but it also has practical applications.

My question overdid it a little, but if professional ethicists don't discuss crucial areas of human behavior, isn't that abandoning a large part of their venue?


Soulless, yes, "basic values" are often held uncritically and are inconsistent with ordinary goals; that doesn't make them easier to discuss, though. Not sure what you mean by a "lack of substance."

Nancy, you lost me.


Aaron, cf. "...And Say No More of It."


Maybe the path we went down, or are going down about 80% of the time, is the "problems and paradoxes of building a friendly AI" path, which is what drove me away despite my strong interest in overcoming bias. Maybe someone should start a philosophy of AI blog, and then this blog could become a place for talking about how to seek truth despite the limitations of human psychology.
