Category Archives: Epistemology

The SAEE: who was right?

Bryan Caplan argues that economists mostly agree with one another, compared to the general public, and reports results from the Survey of Americans and Economists on the Economy (SAEE):

The leading correlates of economists’ disagreement are political ideology and, to a lesser extent, party affiliation. Liberal Democratic and conservative Republican economists disagree in expected ways about taxes, regulation, excessive profits and executive pay, and some employment-related issues. Conservative economists are also markedly more optimistic about the country’s economic future. Note, however, that there is little evidence of an ideological divide over the economy’s past or present performance. Economists across the political spectrum can largely agree about the path of inequality, real income, and real wages over the past two decades.

I don’t find agreement about the past very comforting: the point of economic advice is to deliver good consequences in the future. However, disagreements about predictions are an opportunity for retrospective assessment. Indeed, by the time Bryan’s paper was published in 2002, the five-year timeline of the predictions had already come and gone. But there’s nothing stopping us from checking now. [Note: I prepared this post up to this point with the intention of posting it before peeking at the data.] Results below the fold.

Continue reading "The SAEE: who was right?" »


Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work, we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if it were only through some bizarre fluke that S became possible, and a strategy might improve our chances even though we would remain almost certain to fail. But common features, such as awareness of the Great Filter, would not suffice to avoid future filters.


Ignorance About Intuitions

In common usage, intuitions lead us to believe things without being able to articulate evidence or reasons for those beliefs. Wikipedia.

I’m not offering you a phony seventeen-step “proof that murder is normally wrong.”  Instead, I begin with concrete, specific cases where morality is obvious, and reason from there.  Bryan Caplan.

My debate with Bryan Caplan made me reflect again on our differing attitudes toward intuition.  While we still differ, Bryan has greatly influenced my thinking.

For each of our beliefs, we can ask our mind to give our "reasons" for that belief.  Our minds usually then offer reasons, though we usually don't know how much those reasons have to do with the actual causes of our belief.  We can often test those reasons through criticism, increasing confidence when criticism is less effective than expected, and decreasing confidence when criticism is more effective than expected.

For some of our beliefs, our minds don't offer much in the way of reasons.  We say these beliefs are more "intuitive."  In a hostile debating context this response can seem suspicious; you might expect one side in a debate to refuse to offer reasons just when they had already tested those reasons against criticism, and found them wanting.  That is, we might expect a debater to pretend he didn't have any reasons when he knew his reasons were bad. 

But this doesn't obviously support much distrust of our own intuitive beliefs.  Not only is our internal mind not obviously like a hostile debating context, but we must admit that our minds are built so that the vast majority of our thinking is unconscious.  It is unreasonable to expect our minds to be able to tell us much in the way of reasons for most of our beliefs. 

Continue reading "Ignorance About Intuitions" »


Who Loves Truth Most?

Who loves cars most?  Most people like cars, but the folks most vocal in their enthusiasm for cars are car sellers; they pay millions for ads gushing about how much their engineers love designing cars, their factory workers love building them, etc.  The next most vocal are probably car collectors, tinkerers, and racers; they'll bend your ear off about their car hobby.  Also vocal are folks visibly concerned that the poor don't have enough cars. 

But if you want to find the folks who most love cars for their main purpose, getting folks around in their daily lives, you'll have to filter out the sellers, hobbyists, and do-gooders to find ordinary people who just love their cars.  For the most part, car companies love to sell cars to make cash, car hobbyists love to use cars to show off their personal abilities, and do-gooders use cars to show off their compassion.  By comparison, those who just love to drive from point A to B don't shout much.

Truth loving is similar.  Most folks say they prefer truth, but the folks most vocal about loving "truth" are usually selling something.  For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth.  The next most vocal in their enthusiasm for truth are those who, like car hobbyists, use public demonstrations of truth-finding to show off personal abilities.  Academics, gamers, poker players, and amateur intellectuals of all sorts are proud of the fact that their efforts reveal truth, and they make sure you notice their proficiencies. And do-gooders earnestly talk about the importance of everyone understanding the truth of the uninsured, the illiterate, etc.

Continue reading "Who Loves Truth Most?" »


Share likelihood ratios, not posterior beliefs

When I think of Aumann's agreement theorem, my first reflex is to average.  You think A is 80% likely; my initial impression is that it's 60% likely.  After you and I talk, maybe we both should think 70%.  "Average your starting beliefs", or perhaps "do a weighted average, weighted by expertise" is a common heuristic.

But sometimes, not only is the best combination not the average, it's more extreme than either original belief.

Let's say Jane and James are trying to determine whether a particular coin is fair.  They both think there's an 80% chance the coin is fair.  They also know that if the coin is unfair, it is the sort that comes up heads 75% of the time.

Jane flips the coin five times, performs a perfect Bayesian update, and concludes there's a 65% chance the coin is unfair.  James flips the coin five times, performs a perfect Bayesian update, and concludes there's a 39% chance the coin is unfair.  The averaging heuristic would suggest that the correct answer is between 65% and 39%.  But a perfect Bayesian, hearing both Jane's and James's estimates – knowing their priors, and deducing what evidence they must have seen – would infer that the coin was 83% likely to be unfair.  [Math footnoted.]

Perhaps Jane and James are combining this information in the middle of a crowded tavern, with no pen and paper in sight.  Maybe they don't have time or memory enough to tell each other all the coins they observed.  So instead they just tell each other their posterior probabilities – a nice, short summary for a harried rationalist pair.  Perhaps this brevity is why we tend to average posterior beliefs.

However, there is an alternative.  Jane and James can trade likelihood ratios.  Like posterior beliefs, likelihood ratios are a condensed summary; and, unlike posterior beliefs, sharing likelihood ratios actually works.
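The coin example can be checked numerically. The sketch below is my own illustration (the helper names are mine, not from the post): it recovers each person's likelihood ratio from their reported posterior and the shared prior, multiplies the ratios together, and arrives at the 83% figure.

```python
# Combining two Bayesian updates by trading likelihood ratios rather than
# averaging posteriors.  The numbers (20% prior on unfair, posteriors of
# 65% and 39%) come from the post; the helper functions are mine.

def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

prior_odds = odds(0.20)              # both start at 20% unfair

# posterior_odds = prior_odds * likelihood_ratio, so each person's
# likelihood ratio is recoverable from their stated posterior alone.
lr_jane = odds(0.65) / prior_odds    # about 7.43
lr_james = odds(0.39) / prior_odds   # about 2.56

# For independent evidence, likelihood ratios multiply.
combined = prob(prior_odds * lr_jane * lr_james)
print(round(combined, 2))            # 0.83 -- outside the 0.39-0.65 range
```

Averaging the two posteriors (0.52) would have thrown away information; multiplying the likelihood ratios recovers exactly what a single Bayesian who had seen all ten flips would conclude.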

Continue reading "Share likelihood ratios, not posterior beliefs" »


Moral uncertainty – towards a solution?

It seems people are overconfident about their moral beliefs.  But how should one reason and act if one acknowledges that one is uncertain about morality – not just applied ethics but fundamental moral issues?  What if you don't know which moral theory is correct?

It doesn't seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel, because many moral theories state that you should not always maximize expected utility.

Even if we limit consideration to consequentialist theories, it is still hard to see how to combine them in the standard decision-theoretic framework.  For example, suppose you give X% probability to total utilitarianism and (100-X)% to average utilitarianism.  An action might add 5 utils to total happiness and decrease average happiness by 2 utils.  (This could happen, e.g., if you create a new happy person who is less happy than the people who already exist.)  What do you do, for different values of X?
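To make the difficulty concrete, here is a deliberately naive sketch (my own illustration, not a proposal from the post) that treats the two theories' utils as directly comparable and simply takes an expectation. The questionable step is precisely that intertheoretic comparison.

```python
# Naively plugging moral uncertainty into expected-value reasoning for the
# post's example: the action adds 5 utils of total happiness but lowers
# average happiness by 2 utils.  x is the probability (as a fraction)
# assigned to total utilitarianism.

def naive_expected_moral_value(x):
    return x * 5 + (1 - x) * (-2)

for x in (0.1, 0.3, 0.5, 0.9):
    print(f"X = {x:.1f}: {naive_expected_moral_value(x):+.2f}")

# Under this (dubious) aggregation the sign flips at x = 2/7, so the action
# is favored exactly when X exceeds about 29%.  But nothing justifies
# treating a total-utilitarian util and an average-utilitarian util as the
# same unit, which is the heart of the problem.
```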

The problem gets even more complicated if we consider not only consequentialist theories but also deontological theories, contractarian theories, virtue ethics, etc.  We might even throw various meta-ethical theories into the stew: error theory, relativism, etc.

I'm working on a paper on this together with my colleague Toby Ord.  We have some arguments against a few possible "solutions" that we think don't work.  On the positive side we have some tricks that work for a few special cases.  But beyond that, the best we have managed so far is a kind of metaphor, which we don't think is literally and exactly correct, and it is a bit under-determined, but it seems to get things roughly right and it might point in the right direction:

Continue reading "Moral uncertainty – towards a solution?" »


Chaotic Inversion

I was recently having a conversation with some friends on the topic of hour-by-hour productivity and willpower maintenance – something I’ve struggled with my whole life.

I can avoid running away from a hard problem the first time I see it (perseverance on a timescale of seconds), and I can stick to the same problem for years; but to keep working on a timescale of hours is a constant battle for me.  It goes without saying that I’ve already read reams and reams of advice; and the most help I got from it was realizing that a sizable fraction of other creative professionals had the same problem, and couldn’t beat it either, no matter how reasonable all the advice sounds.

"What do you do when you can’t work?" my friends asked me.  (Conversation probably not accurate, this is a very loose gist.)

And I replied that I usually browse random websites, or watch a short video.

"Well," they said, "if you know you can’t work for a while, you should watch a movie or something."

"Unfortunately," I replied, "I have to do something whose time comes in short units, like browsing the Web or watching short videos, because I might become able to work again at any time, and I can’t predict when -"

And then I stopped, because I’d just had a revelation.

Continue reading "Chaotic Inversion" »


Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?

In the early days of this blog, I would pick fierce arguments with Robin about the no-disagreement hypothesis.  Lately, however, reflection on things like public reason has brought me toward agreement with Robin, or at least moderated my disagreement.  To see why, it’s perhaps useful to take a look at the newspapers:

the pope said the book “explained with great clarity” that “an interreligious dialogue in the strict sense of the word is not possible.” In theological terms, added the pope, “a true dialogue is not possible without putting one’s faith in parentheses.”

What are we to make of a statement like this?

Continue reading "Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?" »


Aiming at the Target

Previously in series: Belief in Intelligence

Previously, I spoke of that very strange epistemic position one can occupy, wherein you don’t know exactly where Kasparov will move on the chessboard, and yet your state of knowledge about the game is very different than if you faced a random move-generator with the same subjective probability distribution – in particular, you expect Kasparov to win.  I have beliefs about where Kasparov wants to steer the future, and beliefs about his power to do so.

Well, and how do I describe this knowledge, exactly?

In the case of chess, there’s a simple function that classifies chess positions into wins for black, wins for white, and drawn games.  If I know which side Kasparov is playing, I know the class of chess positions Kasparov is aiming for.  (If I don’t know which side Kasparov is playing, I can’t predict whether black or white will win – which is not the same as confidently predicting a drawn game.)

More generally, I can describe motivations using a preference ordering. When I consider two potential outcomes, X and Y, I can say that I prefer X to Y; prefer Y to X; or find myself indifferent between them. I would write these relations as X > Y; X < Y; and X ~ Y.

Suppose that you have the ordering A < B ~ C < D ~ E. Then you like B more than A, and C more than A.  {B, C}, belonging to the same class, seem equally desirable to you; you are indifferent between which of {B, C} you receive, though you would rather have either than A, and you would rather have something from the class {D, E} than {B, C}.
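A preference ordering of this kind can be represented by a rank function over indifference classes. The sketch below is my own illustration (the `rank` table is hypothetical), encoding the post's ordering A < B ~ C < D ~ E:

```python
# Encode the ordering A < B ~ C < D ~ E as a rank over indifference
# classes: equal rank means indifference, higher rank means preferred.
rank = {"A": 0, "B": 1, "C": 1, "D": 2, "E": 2}

def prefers(x, y):
    """x > y in the post's notation."""
    return rank[x] > rank[y]

def indifferent(x, y):
    """x ~ y in the post's notation."""
    return rank[x] == rank[y]

print(prefers("B", "A"))       # True
print(indifferent("B", "C"))   # True
print(prefers("D", "B"))       # True
```

On this picture, predicting a powerful intelligence amounts to predicting that it will steer outcomes into the highest-ranked class it can reach.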

When I think you’re a powerful intelligence, and I think I know something about your preferences, then I’ll predict that you’ll steer reality into regions that are higher in your preference ordering.

Continue reading "Aiming at the Target" »


Belief in Intelligence

Previously in series: Expected Creative Surprises

Since I am so uncertain of Kasparov’s moves, what is the empirical content of my belief that "Kasparov is a highly intelligent chess player"?  What real-world experience does my belief tell me to anticipate?  Is it a cleverly masked form of total ignorance?

To sharpen the dilemma, suppose Kasparov plays against some mere chess grandmaster Mr. G, who’s not in the running for world champion.  My own ability is far too low to distinguish between these levels of chess skill.  When I try to guess Kasparov’s move, or Mr. G’s next move, all I can do is try to guess "the best chess move" using my own meager knowledge of chess.  Then I would produce exactly the same prediction for Kasparov’s move or Mr. G’s move in any particular chess position.  So what is the empirical content of my belief that "Kasparov is a better chess player than Mr. G"?

Continue reading "Belief in Intelligence" »
