Category Archives: Disagreement

Roberts’ Bias Therapy

My wife was once a professional therapist, but my first therapist gig was this EconTalk podcast with Russ Roberts, where I help him come to terms with the fact that economists disagree.  In good therapist style, I'm quiet for 28 minutes while Russ agonizes, and then I tell him he has already answered his own question; he just doesn't like the answer. One blogger loves it:

It truly has been a long time since I've seen anything so original and so fascinating. … I can only hope that I can someday be as intellectually curious and honest as Robin.

Congrats to EconTalk for being voted Best Podcast in the 2008 Weblog Awards. 


Disagreement Is Near-Far Bias

Back in November I read this Science review by Nira Liberman and Yaacov Trope on their awkwardly named "Construal level theory", and wrote a post I estimated "to be the most dense with useful info on identifying our biases I've ever written":

[NEAR] All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits. 

[FAR] Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits. 

Since then I've become even more impressed with it, as it explains most biases I know and care about, including muddled thinking about economics and the future.  For example, Ross's famous "fundamental attribution error" is a trivial application. 

The key idea is that when we consider the same thing from near versus far, different features become salient, leading our minds to different conclusions.  This is now my best account of disagreement.  We disagree because we explain our own conclusions via detailed context (e.g., arguments, analysis, and evidence), and others' conclusions via coarse stable traits (e.g., demographics, interests, biases).  While we know abstractly that we also have stable relevant traits, and they have detailed context, we simply assume we have taken that into account, when we have in fact done no such thing. 

For example, imagine I am well-educated and you are not, and I argue for the value of education and you argue against it.  I find it easy to dismiss your view as denigrating something you do not have, but I do not think it plausible I am mainly just celebrating something I do have.  I can see all these detailed reasons for my belief, and I cannot easily see and appreciate your detailed reasons. 

And this is the key error: our minds often assure us that they have taken certain factors into account when they have done no such thing.  I tell myself that of course I realize that I might be biased by my interests; I'm not that stupid.  So I must have already taken that possible bias into account, and so my conclusion must be valid even after correcting for that bias.  But in fact I haven't corrected for it much at all; I've just assumed that I did so.


Disagreeing About Doubt

The movie Doubt, now in theaters, offers an interesting chance for a disagreement case study.  In the movie, Sister Beauvier accuses Father Flynn of a particular act, and viewers wonder: did he actually do it, and was she justified in her response?  My wife and I disagreed quite a lot on Flynn's guilt – she's at about 95% confidence and I'm at about 40%. Apparently other viewers similarly diverge:

Those I spoke to after the movie were quite sure, maybe even certain, that Father Flynn was either guilty or innocent.

So what say the rest of you?  And what is it about this situation that causes so much disagreement anyway?  Don't read comments here unless you don't mind spoilers, which are fair game there.  (If needed, let's ground this in terms of what is reasonable to estimate given everything the screenwriter knows.)

Added: It helps to state a base rate and then a correction for each new factor.  For example, if on average 5% are guilty, and someone with a shameful past is twice as likely to be guilty, the estimate becomes about 10%.
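
Here is a minimal sketch, in Python, of the kind of estimate described above, done in odds form so each correction factor multiplies in as a likelihood ratio.  The 5% base rate and the "twice as likely" factor come from the example; reading that factor as a 2:1 likelihood ratio, and the function name, are my own illustrative assumptions.

```python
# A rough sketch of the base-rate-plus-corrections estimate, in odds form.
# The 5% base rate and the 2:1 "shameful past" factor are from the example;
# treating the factor as a likelihood ratio is an illustrative assumption.

def update_odds(base_rate, likelihood_ratios):
    """Start from a base rate, multiply in one likelihood ratio per factor,
    then convert back to a probability."""
    odds = base_rate / (1 - base_rate)      # 5% guilty -> odds of 1:19
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

print(f"{update_odds(0.05, [2.0]):.1%}")    # ~9.5%, close to the rough 10% above
```

Done this way the answer is about 9.5% rather than exactly 10%, since doubling a small probability and doubling the odds nearly coincide; each further factor would simply multiply the odds again.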


Imaginary Positions

Every now and then, one reads an article about the Singularity in which some reporter confidently asserts, "The Singularitarians, followers of Ray Kurzweil, believe that they will be uploaded into techno-heaven while the unbelievers languish behind or are extinguished by the machines."

I don't think I've ever met a single Singularity fan, Kurzweilian or otherwise, who thinks that only believers in the Singularity will go to upload heaven and everyone else will be left to rot.  Not one.  (There are a very few pseudo-Randian types who believe that only the truly selfish who accumulate lots of money will make it, but they expect e.g. me to be damned with the rest.)

But if you start out thinking that the Singularity is a loony religious meme, then it seems like Singularity believers ought to believe that they alone will be saved.  It seems like a detail that would fit the story.

This fittingness is so strong as to manufacture the conclusion without any particular observations.  And then the conclusion isn't marked as a deduction.  The reporter just thinks that they investigated the Singularity, and found some loony cultists who believe they alone will be saved.

Or so I deduce.  I haven't actually observed the inside of their minds, after all.

Has any rationalist ever advocated behaving as if all people are reasonable and fair?  I've repeatedly heard people say, "Well, it's not always smart to be rational, because other people aren't always reasonable."  What rationalist said they were?  I would deduce:  This is something that non-rationalists believe it would "fit" for us to believe, given our general blind faith in Reason.  And so their minds just add it to the knowledge pool, as though it were an observation.  (In this case I encountered yet another example recently enough to find the reference; see here.)

Continue reading "Imaginary Positions" »


The Mechanics of Disagreement

Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem.  If two rationalist-wannabes have common knowledge of a disagreement between them, what could be going wrong?

The obvious interpretation of these theorems is that if you know that a cognitive machine is a rational processor of evidence, its beliefs become evidence themselves.

If you design an AI and the AI says "This fair coin came up heads with 80% probability", then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads – because the AI only emits that statement under those circumstances.

It’s not a matter of charity; it’s just that this is how you think the other cognitive machine works.

And if you tell an ideal rationalist, "I think this fair coin came up heads with 80% probability", and they reply, "I now think this fair coin came up heads with 25% probability", and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood ratio of 12:1 favoring tails.
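
The odds arithmetic behind this paragraph can be made explicit; here is a small sketch under the stated assumptions (a fair coin, so a 50% prior; your evidence at 4:1 for heads; their evidence at 12:1 for tails; and independence).  The helper function is mine, for illustration only.

```python
# Sketch of the update described above: a 50% prior on heads, your evidence
# at 4:1 favoring heads, the other mind's at 12:1 favoring tails, combined
# on the assumption that the two evidence sources are independent.

def posterior_heads(prior, likelihood_ratios):
    """Combine independent likelihood ratios (heads:tails) in odds form."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

print(posterior_heads(0.5, [4.0]))           # 0.80  -- your report
print(posterior_heads(0.5, [1 / 12]))        # ~0.077 -- their evidence alone
print(posterior_heads(0.5, [4.0, 1 / 12]))   # 0.25  -- the verdict you accept
```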

But this assumes that the other mind also thinks that you’re processing evidence correctly, so that, by the time it says "I now think this fair coin came up heads, p=.25", it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.

Continue reading "The Mechanics of Disagreement" »


Disjunctions, Antipredictions, Etc.

Followup to: Underconstrained Abstractions

Previously:

So if it’s not as simple as just using the one trick of finding abstractions you can easily verify on available data, what are some other tricks to use?

There are several, as you might expect…

Previously I talked about "permitted possibilities".  There’s a trick in debiasing that has mixed benefits, which is to try and visualize several specific possibilities instead of just one.

The reason it has "mixed benefits" is that being specific, at all, can have biasing effects relative to just imagining a typical case.  (And believe me, if I’d seen the outcome of a hundred planets in roughly our situation, I’d be talking about that instead of all this Weak Inside View stuff.)

But if you’re going to bother visualizing the future, it does seem to help to visualize more than one way it could go, instead of concentrating all your strength into one prediction.

So I try not to ask myself "What will happen?" but rather "Is this possibility allowed to happen, or is it prohibited?"  There are propositions that seem forced to me, but those should be relatively rare – the first thing to understand about the future is that it is hard to predict, and you shouldn’t seem to be getting strong information about most aspects of it.

Continue reading "Disjunctions, Antipredictions, Etc." »


True Sources of Disagreement

Followup to: Is That Your True Rejection?

I expected from the beginning that the difficult part of two rationalists reconciling a persistent disagreement would be for them to expose the true sources of their beliefs.

One suspects that this will only work if each party takes responsibility for their own end; it’s very hard to see inside someone else’s head.  Yesterday I exhausted myself mentally while out on my daily walk, asking myself the Question "What do you think you know, and why do you think you know it?" with respect to "How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?"  Trying to either understand why my brain believed what it believed, or else force my brain to experience enough genuine doubt that I could reconsider the question and arrive at a real justification that way.  It’s hard to see how Robin Hanson could have done any of this work for me.

Presumably a symmetrical fact holds about my lack of access to the real reasons why Robin believes what he believes.  To understand the true source of a disagreement, you have to know why both sides believe what they believe – one reason why disagreements are hard to resolve.

Nonetheless, here’s my guess as to what this Disagreement is about:

Continue reading "True Sources of Disagreement" »


Wrapping Up

This Friendly AI discussion has taken more time than I planned or have.  So let me start to wrap up.

On small scales we humans evolved to cooperate via various pair and group bonding mechanisms.  But these mechanisms aren't of much use on today's evolutionarily-unprecedented large scales.  Yet we do in fact cooperate on the largest scales.  We do this because we are risk averse, because our values conflict mainly over the use of resources, which conflicts themselves destroy, and because we have the intelligence and institutions to enforce win-win deals via property rights, etc.

I raise my kids because they share my values.  I teach other kids because I’m paid to.  Folks raise horses because others pay them for horses, expecting horses to cooperate as slaves.  You might expect your pit bulls to cooperate, but we should only let you raise pit bulls if you can pay enough damages if they hurt your neighbors.

In my preferred em (whole brain emulation) scenario, people would only authorize making em copies using borrowed or rented brains/bodies when they expected those copies to have lives worth living.  With property rights enforced, both sides would expect to benefit more when copying was allowed.  Ems would not exterminate humans mainly because that would threaten the institutions ems use to keep peace with each other.

Similarly, we expect AI developers to plan to benefit from AI cooperation, via either direct control, indirect control such as via property rights institutions, or such creatures having cooperative values.  As with pit bulls, developers should have to show an ability, perhaps via insurance, to pay plausible hurt amounts if their creations hurt others.  To the extent they or their insurers fear such hurt, they would test for various hurt scenarios, slowing development as needed in support.  To the extent they feared inequality from some developers succeeding first, they could exchange shares, or share certain kinds of info.  Naturally-occurring info-leaks, and shared sources, both encouraged by shared standards, would limit this inequality.

Continue reading "Wrapping Up" »


Is That Your True Rejection?

It happens every now and then, that the one encounters some of my transhumanist-side beliefs – as opposed to my ideas having to do with human rationality – strange, exotic-sounding ideas like superintelligence and Friendly AI.  And the one rejects them.

If the one is called upon to explain the rejection, not uncommonly the one says,

"Why should I believe anything Yudkowsky says?  He doesn’t have a PhD!"

And occasionally someone else, hearing, says, "Oh, you should get a PhD, so that people will listen to you."  Or this advice may even be offered by the same one who disbelieved, saying, "Come back when you have a PhD."

Now there are good and bad reasons to get a PhD, but this is one of the bad ones.

There’s many reasons why someone actually has an adverse reaction to transhumanist theses.  Most are matters of pattern recognition, rather than verbal thought: the thesis matches against "strange weird idea" or "science fiction" or "end-of-the-world cult" or "overenthusiastic youth".

So immediately, at the speed of perception, the idea is rejected.  If, afterward, someone says "Why not?", this launches a search for justification.  But this search will not necessarily hit on the true reason – by "true reason" I mean not the best reason that could be offered, but rather, whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.

Instead, the search for justification hits on the justifying-sounding fact, "This speaker does not have a PhD."

But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?

And more to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say.  Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.

They would say, "Why should I believe you?  You’re just some guy with a PhD! There are lots of those.  Come back when you’re well-known in your field and tenured at a major university."

Continue reading "Is That Your True Rejection?" »


Underconstrained Abstractions

Followup to: The Weak Inside View

Saith Robin:

"It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions.  To see if such things are useful, we need to vet them, and that is easiest "nearby", where we know a lot.  When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near.  Far is just the wrong place to try new things."

Well… I understand why one would have that reaction.  But I’m not sure we can really get away with that.

When possible, I try to talk in concepts that can be verified with respect to existing history.  When I talk about natural selection not running into a law of diminishing returns on genetic complexity or brain size, I’m talking about something that we can try to verify by looking at the capabilities of other organisms with brains big and small.  When I talk about the boundaries to sharing cognitive content between AI programs, you can look at the field of AI the way it works today and see that, lo and behold, there isn’t a lot of cognitive content shared.

But in my book this is just one trick in a library of methodologies for dealing with the Future, which is, in general, a hard thing to predict.

Let’s say that instead of using my complicated-sounding disjunction (many different reasons why the growth trajectory might contain an upward cliff, which don’t all have to be true), I instead staked my whole story on the critical threshold of human intelligence.  Saying, "Look how sharp the slope is here!" – well, it would sound like a simpler story.  It would be closer to fitting on a T-Shirt.  And by talking about just that one abstraction and no others, I could make it sound like I was dealing in verified historical facts – humanity’s evolutionary history is something that has already happened.

But speaking of an abstraction being "verified" by previous history is a tricky thing.  There is this little problem of underconstraint – of there being more than one possible abstraction that the data "verifies".

Continue reading "Underconstrained Abstractions" »
