Tag Archives: Philosophy

No theory X in shining armour

A frequent topic on this blog is the likely trade-off between a higher population and a higher quality of life at some point in the future. Some people – often total utilitarians – are willing to accept a lower quality of life for our descendants if that means there can be more of them. Others – often average utilitarians – will accept a smaller population if it is required to improve quality of life for those who are left.

Both of these positions lead to unintuitive conclusions if taken to the extreme. On the one hand, total utilitarians must accept the ‘repugnant conclusion’: that a very large number of individuals experiencing lives barely worth living could be much better than a small number of people experiencing joyous lives. On the other hand, average utilitarians confront the ‘mere addition paradox’: adding another joyous person to the world would be undesirable so long as their life was a little less joyous than the average of those who already existed.
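The two horns can be made concrete with a toy calculation (my own illustrative numbers, not anything from the literature): a small, very happy population versus a vast population of lives barely worth living.

```python
# Toy illustration of the two horns, using made-up numbers.
def total_utility(population, welfare_per_person):
    return population * welfare_per_person

def average_utility(population, welfare_per_person):
    # Everyone is identical here, so the average is just per-person welfare.
    return welfare_per_person

small_happy = (1_000, 100.0)       # 1,000 people at welfare 100 each
huge_marginal = (10_000_000, 0.1)  # 10 million lives barely worth living

# Total utilitarianism prefers the huge marginal population (the repugnant conclusion)...
assert total_utility(*huge_marginal) > total_utility(*small_happy)
# ...while average utilitarianism ranks it far lower, and would likewise reject
# adding any joyous-but-below-average person (the mere addition paradox).
assert average_utility(*huge_marginal) < average_utility(*small_happy)
```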

Derek Parfit, who pioneered these ethical dilemmas and authored the classic Reasons and Persons, strove to,

“develop a theory of beneficence – theory X he calls it – which is able to solve the Non-identity problem [1], which does not lead to the Repugnant Conclusion and which thus manages to block the Mere Addition Paradox, without facing other morally unacceptable conclusions. However, Parfit’s own conclusion was that he had not succeeded in developing such a theory.”

Such a ‘theory X’ would certainly be desirable. I am not keen to bite the bullet of either the ‘repugnant conclusion’ or the ‘mere addition paradox’ if neither is required. Unfortunately, if, like me, you were hoping that such a theory might be forthcoming, you can now give up waiting. I was recently surprised to learn that Yew-Kwang Ng’s What should we do about future generations? Impossibility of Parfit’s Theory X (1989) demonstrated many years ago that theory X cannot exist.

Life after death for Pascal’s Wager?

You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.

My impression is that most folks think this argument is nonsense. I am not so sure. I recently met Amanda Montgomery, who is at NYU studying the challenges that infinite values present for decision theory. In her view, nobody has produced a sound solution to Pascal’s Wager and other infinite ethics problems.

A common response, and one I had previously accepted, is that we also need to consider the possibility of a ‘professor God’ who rewards atheists and punishes believers. As long as you place some probability on this being the case, then being an atheist, as well as being a believer, appears to offer an infinite payoff. Therefore it doesn’t matter what you believe.

This logic relies on two premises. Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0. Secondly, that in ranking expected utility outcomes, we should be indifferent between any two positive probabilities of an infinite utility, even if they are different. That would imply that a certainty of going to ‘Heaven’ was no more desirable than a one-in-a-billion chance. Amanda points out that while these statements may both be true, if you have any doubt that either is true (p < 1), then Pascal’s Wager appears to survive. The part of your ‘credence’ in which a higher probability of infinite utility should be preferred to a lower one will determine your decision and allow the tie to be broken. Anything that made you believe that some kinds of Gods were more likely or easy to appease than others, such as internal consistency or historical evidence, would ensure you were no longer indifferent between them.
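One way to make Amanda’s tie-breaking point concrete is a sketch (my own formalisation, nothing she endorses) in which a prospect is ranked first by its probability of an infinite payoff and only then by its finite expected utility. Under such a lexicographic ranking, any evidence that makes one God slightly likelier than the ‘professor God’ breaks the tie, and finite costs of belief become irrelevant:

```python
# A minimal sketch (my own formalisation, not from the post): rank prospects
# lexicographically by (probability of infinite payoff, finite expected utility).
from functools import total_ordering

@total_ordering
class Prospect:
    def __init__(self, p_infinite, finite_ev):
        self.p_infinite = p_infinite  # probability of the infinite-utility outcome
        self.finite_ev = finite_ev    # expected utility from ordinary finite outcomes

    def _key(self):
        return (self.p_infinite, self.finite_ev)

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

# Suppose historical evidence makes one God slightly likelier than its
# atheist-rewarding mirror image (all numbers are hypothetical):
believe = Prospect(p_infinite=0.010, finite_ev=-5.0)  # belief costs finite effort
atheism = Prospect(p_infinite=0.009, finite_ev=+5.0)  # finite perks, lower chance

# The tie is broken: the extra chance at the infinite prize dominates.
assert believe > atheism
```

The design choice here is that `p_infinite` acts like the "part of your credence in which a higher probability of infinite utility should be preferred"; only when it ties exactly do finite stakes matter.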

Some might respond that it would not be possible to convert sincerely with a ‘Pascalian’ motivation. This might be true in the immediate term, but presumably given time you could put yourself in situations where you would be likely to develop a more religious disposition. Certainly, it would be worth investigating your capacity to change with an infinite utility on the line! And even if you could not sincerely convert, if you believed it was the right choice and had any compassion for others, it would presumably be your duty to set about converting others who could.

On top of the possibility that there is a God, it also seems quite imaginable to me that we are living in a simulation of some kind, perhaps as a research project of a singularity that occurred in a parent universe. There is another possible motivation for running such simulations. I am told that if you accept certain decision theories, it would appear worthwhile for future creatures to run simulations of the past, and reward or punish the participants based on whether they acted in ways that were beneficial or harmful to beings expected to live in the future. On realising this, we would then be uncertain whether we were in such a simulation or not, and so would have an extra motivation to work to improve the future. However, given finite resources in their universe, these simulators would presumably not be able to dole out infinite utilities, and so would be dominated, in terms of expected utility, by any ‘supernatural’ creator that could.

Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an infinite utility of cardinality aleph-two would always trump a certainty of one with cardinality aleph-one. I am not sure what to do about that. The issue has hardly been researched by philosophers and seems like a promising area for high-impact philosophy. I would appreciate anyone who can resolve these weird results so I can return to worrying about ordinary things!

Does life flow towards flow?

Robin recently described how human brain ‘uploads’, even if forced to work hard to make ends meet, might nonetheless be happy and satisfied with their lives. Some humans naturally love their work, and if they are the ones who get copied, the happiness of emulations could be very high. Of course in Robin’s Malthusian upload scenario, evolutionary pressures towards high productivity are very strong, and so the mere fact that some people really enjoy work doesn’t mean that they will be the ones who get copied billions of times. The workaholics will only inherit the Earth if they are the best employees money can buy.

The broader question of whether creatures that are good at surviving, producing and then reproducing tend towards joy or misery is a crucial one. It helps answer whether it is altruistic to maintain populations of wild animals into the future, or an act of mercy to shrink their habitats. Even more importantly, it is the key to whether it is extremely kind or extremely cruel for humans to engage in panspermia and spread Malthusian life across the universe as soon as possible.

There is an abundance of evidence all around us in the welfare of humans and other animals that have to strive to survive in the environments they are adapted to, but no consensus on what that evidence shows. It is hard enough to tell whether another human has a quality of life better than no life at all, let alone determine the same for say, an octopus.

One of the few pieces of evidence I find compelling comes from Mihály Csíkszentmihályi’s research into the experience he calls ‘flow’. His work suggests that humans are most productive, and also most satisfied, when they are totally absorbed in a clear but challenging task which they are capable of completing. The conditions suggested as necessary to achieve ‘flow’ are:

  1. “One must be involved in an activity with a clear set of goals. This adds direction and structure to the task.
  2. One must have a good balance between the perceived challenges of the task at hand and his or her own perceived skills. One must have confidence that he or she is capable to do the task at hand.
  3. The task at hand must have clear and immediate feedback. This helps the person negotiate any changing demands and allows him or her to adjust his or her performance to maintain the flow state.”

Most work doesn’t meet these criteria and so ‘flow’ is not all that common, but it is amongst the best states of mind a human can hope for.

Some people are much more inclined to enter flow than others and if Csíkszentmihályi’s book is to be believed, they are ideal employees – highly talented, motivated and suited to their tasks. If this is the case, people predisposed to experience flow would be the most popular minds to copy as emulations and in the immediate term the flow-inspired workaholics would indeed come to dominate the Earth.

Of course, it could turn out that in the long run, once enough time has passed for evolution to shed humanity’s baggage, the creatures that most effectively do the forms of work that exist in the future will find life unpleasant. But our evolved capacity for flow in tasks that we are well suited for gives us a reason to hope that will not be the case. If it turns out that flow is a common experience for traditional hunter-gatherers then that would make me even more optimistic. And more optimistic again if we can find evidence for a similar experience in other species.

Your existence is informative

Warning: this post is technical.

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on any given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since your model just has a number of planets in it, with none labeled as ‘this planet’, you can’t update directly on ‘there is life on this planet’, by excluding worlds where ‘this planet’ doesn’t have life. And you can’t necessarily treat ‘this’ as an arbitrary planet, since you wouldn’t have seen it if it didn’t have life.

I have an ongoing disagreement with an associate who suggests that you should take ‘this planet has life’ into account by conditioning on ‘there exists a planet with life’. That is,

P(Q|there is life on this planet) = P(Q|there exists a planet with life).

Here I shall explain my disagreement.

Nick Bostrom argues persuasively that much science would be impossible if we treated ‘I observe X’ as ‘someone observes X’. This is basically because in a big world of scientists making measurements, at some point somebody will make almost every possible mistaken measurement. So if all you know when you measure the temperature of a solution to be 15 degrees is that you are not in a world where nobody ever measures its temperature to be 15 degrees, this doesn’t tell you much about the actual temperature.

You can add other apparently irrelevant observations you make at the same time – e.g. that the table is blue chipboard – in order to make your total observations less likely to arise even once in a given world (at its limit, this is the suggestion of FNC, Radford Neal’s ‘full non-indexical conditioning’). However it seems implausible that you should make different inferences from taking a measurement when you can also see a detailed but irrelevant picture at the same time than you would with more limited sensory input. Also, the same problem re-emerges if the universe is supposed to be larger. Given that the universe is thought to be very, very large, this is a problem. Not to mention, it seems implausible that the size of the universe should greatly affect probabilistic judgements made about entities which are close to independent of most of the universe.

So I think Bostrom’s case is good. However I’m not completely comfortable arguing from the acceptability of something that we do (science) back to the truth of the principles that justify it. So I’d like to make another case against taking ‘this planet has life’ as equivalent evidence to ‘there exists a planet with life’.

Evidence is what excludes possibilities. Seeing the sun shining is evidence against rain, because it excludes the possible worlds where the sky is grey, which include most of those where it is raining. Seeing a picture of the sun shining is not much evidence against rain, because it excludes worlds where you don’t see such a picture, which are about as likely to be rainy or sunny as those that remain are.

Receiving the evidence ‘there exists a planet with life’ means excluding all worlds where all planets are lifeless, and not excluding any other worlds. At first glance, this must be different from ‘this planet has life’. Take any possible world where some other planet has life, and this planet has no life. ‘There exists a planet with life’ doesn’t exclude that world, while ‘this planet has life’ does. Therefore they are different evidence.
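A small exact enumeration (with illustrative numbers of my own choosing: two planets, a 50% prior on Q) makes the difference concrete. Conditioning on a fixed ‘this planet’ having life excludes strictly more worlds, and so yields a stronger update towards Q, than conditioning on life existing somewhere:

```python
# Exact enumeration over all (Q, life-pattern) worlds, with made-up numbers.
from itertools import product

N = 2                              # number of planets; planet 0 is 'this planet'
P_Q = 0.5                          # prior probability of Q
p_life = {True: 0.9, False: 0.1}   # P(life on a given planet | Q) and | not-Q

def posterior(condition):
    """P(Q | condition), summing over every possible world."""
    num = den = 0.0
    for q in (True, False):
        for world in product([True, False], repeat=N):  # which planets have life
            pr = P_Q if q else 1 - P_Q
            for alive in world:
                pr *= p_life[q] if alive else 1 - p_life[q]
            if condition(world):
                den += pr
                if q:
                    num += pr
    return num / den

p_this = posterior(lambda w: w[0])      # 'this planet (planet 0) has life'
p_exists = posterior(lambda w: any(w))  # 'there exists a planet with life'

# 'This planet has life' excludes worlds where only the other planet has life,
# so it is stronger evidence for Q than mere existence of life somewhere.
assert p_this > p_exists
```

With these numbers, `p_this` is 0.9 while `p_exists` is about 0.84, so the two conditioning statements are demonstrably different evidence.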

At this point however, note that the planets in the model have no distinguishing characteristics. How do we even decide which planet is ‘this planet’ in another possible world? There needs to be some kind of mapping between planets in each world, saying which planet in world A corresponds to which planet in world B, etc. As far as I can tell, any mapping will do, as long as a given planet in one possible world maps to at most one planet in another possible world. This mapping is basically a definition choice.

So suppose we use a mapping where in every possible world where at least one planet has life, ‘this planet’ corresponds to one of the planets that has life. See the below image.

Which planet is which?

Squares are possible worlds, each with two planets. Pink planets have life, blue do not. Define ‘this planet’ as the circled one in each case. Learning that there is life on this planet is equal to learning that there is life on some planet.

Now learning that there exists a planet with life is the same as learning that this planet has life. Both exclude the far right-hand possible world, and none of the other possible worlds. What’s more, since we can change the probability distribution we end up with just by redefining which planets are ‘the same planet’ across worlds, indexical evidence such as ‘this planet has life’ must be horseshit.

Actually the last paragraph was false. If in every possible world which contains life, you pick one of the planets with life to be ‘this planet’, you can no longer know whether you are on ‘this planet’. From your observations alone, you could be on the other planet, which only has life when both planets do. The one that is not circled in each of the above worlds. Whichever planet you are on, you know that there exists a planet with life. But because there’s some probability of you being on the planet which only rarely has life, you have more information than that. Redefining which planet was which didn’t change that.

Perhaps a different definition of ‘this planet’ would get what my associate wants? The problem with the last was that it no longer necessarily included the planet we are on. So suppose we define ‘this planet’ to be the one you are on, plus a life-containing planet in each of the other possible worlds that contain at least one life-containing planet. A strange, half-indexical definition, but why not? One thing remains to be specified – which is ‘this’ planet when you don’t exist? Let’s say it is chosen randomly.

Now is learning that ‘this planet’ has life any different from learning that some planet has life? Yes. Now again there are cases where some planet has life, but it’s not the one you are on. This is because the definition only picks out planets with life across other possible worlds, not this one. In this one, ‘this planet’ refers to the one you are on. If you don’t exist, this planet may not have life. Even if there are other planets that do. So again, ‘this planet has life’ gives more information than ‘there exists a planet with life’.

You either have to accept that someone else might exist when you do not, or you have to define ‘yourself’ as something that always exists, in which case you no longer know whether you are ‘yourself’. Either way, changing definitions doesn’t change the evidence. Observing that you are alive tells you more than learning that ‘someone is alive’.

Resolving Paradoxes of Intuition

Shelly Kagan gave a nice summary of some problems involved in working out whether death is bad for one. I agree with Robin’s response, and have posted before about some of the particular issues. Now I’d like to make a more general observation.

First I’ll summarize Kagan’s story. The problems are something like this. It seems like death is pretty bad. Thought experiments suggest that it is bad for the person who dies, not just their friends, and that it is bad even if it is painless. Yet if a person doesn’t exist, how can things be bad for them? Seemingly because they are missing out on good things, rather than because they are suffering anything. But it is hard to say when they bear the cost of missing out, and it seems like things that happen happen at certain times. Or maybe they don’t. But then we’d have to say all the people who don’t exist are missing out, and that would mean a huge tragedy is happening as long as those people go unconceived. We don’t think a huge tragedy is happening, so let’s say it isn’t. Also we don’t feel too bad about people not being born earlier, like we do about them dying sooner. How can we distinguish these cases of deprivation from non-existence from the deprivation that happens after death? Not in any satisfactorily non-arbitrary way. So ‘puzzles still remain’.

This follows a pattern common to other philosophical puzzles. Intuitions say X sometimes, and not X other times. But they also claim that one should not care about any of the distinctions that can reasonably be made between the times when they say X is true and the times when they say X is false.

Intuitions say you should save a child dying in front of you. Intuitions say you aren’t obliged to go out of your way to protect a dying child in Africa. Intuitions also say physical proximity, likelihood of being blamed, etc shouldn’t be morally relevant.

Intuitions say you are the same person today as tomorrow. Intuitions say you are not the same person as Napoleon. Intuitions also say that whether you are the same person or not shouldn’t depend on any particular bit of wiring in your head, and that changing a bit of wiring doesn’t make you slightly less you.

Of course not everyone shares all of these intuitions (I don’t). But for those who do, there are problems. These problems can be responded to by trying to think of other distinctions between contexts that do seem intuitively legitimate, reframing an unintuitive conclusion to make it intuitive, or just accepting at least one of the unintuitive conclusions.

The first two solutions – finding more appealing distinctions and framings – seem a lot more popular than the third – biting a bullet. Kagan concludes that ‘puzzles remain’, as if this inconsistency is an apparent mathematical conflict that one can fully expect to eventually see through if we think about it right. And many other people have been working on finding a way to make these intuitions consistent for a while. Yet why expect to find a resolution?

Why not expect this contradiction to be like the one that arises if you claim that you like apples more than pears and also pears more than apples? There is no nuanced way to resolve the issue, except to give up at least one of the preferences. You can make up values, but sometimes they are just inconsistent. The same goes for evolved values.

From Kagan’s account of death, it seems likely that our intuitions are just inconsistent. Given natural selection, this is not particularly surprising. It’s no mystery how people could evolve to care about their own survival and that of their associates, yet not to care about people who don’t exist. Even if people who don’t exist suffer the same costs from not existing. It’s also not surprising that people would come to believe their care for others is largely about the others’ wellbeing, not their own interests, and so believe that if they don’t care about a tragedy, there isn’t one. There might be some other resolution in the death case, but until we see one, it seems odd to expect one. Especially when we have already looked so hard.

Most likely, if you want a consistent position you will have to bite a bullet. If you are interested in reality, biting a bullet here shouldn’t be a last resort after searching every nook and cranny for a consistent and intuitive position. It is much more likely that humans have inconsistent intuitions about the value of life than that we have so far failed to notice some incredibly important and intuitive distinction in circumstances that drives our different intuitions. Why do people continue to search for intuitive resolutions to such problems? It could be that accepting an unintuitive position is easy, unsophisticated, unappealing to funders and friends, and seems like giving up. Is there something else I’m missing?

Why Is Death Bad?

Shelly Kagan considers: why is death bad?:

Maybe … death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. … Yet if death is bad for me, when is it bad for me? Not now. I’m not dead now. What about when I’m dead? But then, I won’t exist. … Isn’t it true that something can be bad for you only if you exist? Call this idea the existence requirement. …

Rejecting the existence requirement has some implications that are hard to swallow. For if nonexistence can be bad for somebody even though that person doesn’t exist, then nonexistence could be bad for somebody who never exists. … Let’s call him Larry. Now, how many of us feel sorry for Larry? Probably nobody. But if we give up on the existence requirement, we no longer have any grounds for withholding our sympathy from Larry. I’ve got it bad. I’m going to die. But Larry’s got it worse: He never gets any life at all.

Moreover, there are a lot of merely possible people. How many? … You end up with more possible people than there are particles in the known universe, and almost none of those people get to be born. If we are not prepared to say that that’s a moral tragedy of unspeakable proportions, we could avoid this conclusion by going back to the existence requirement. …

If I accept the existence requirement, death isn’t bad for me, which is really rather hard to believe. Alternatively, I can keep the claim that death is bad for me by giving up the existence requirement. But then I’ve got to say that it is a tragedy that Larry and the other untold billion billion billions are never born. And that seems just as unacceptable. (more)

Imagine a couple had been looking forward to raising a child with their combined genetic features, but then discovered that one of them was infertile. In this case they might mourn the loss of a hoped-for child who would in fact never exist. Not just the loss to themselves, but the loss to the child itself. And their friends might mourn with them.

But since this is a pretty unusual situation, we humans have not evolved much in the way of emotional habits and capacities to deal specifically with it. Our emotional habits are focused on the kinds of losses which people around us more commonly suffer and complain about. So naturally we aren’t in the habit of taking time out to mourn the loss of a specific Larry. But there are lots of people far from us whose losses we don’t mourn. That hardly means such losses don’t exist.

It seems to me Kagan’s attitude above amounts to insisting that it is impossible to imagine a vastly better state (of the universe) than our own. After all, if a vastly better state than ours is “possible”, then the fact that our actual state is not that possible state is a terrible “tragedy”, which he will just not allow.

But if possible states can vary greatly in the amount of good they would embody, then it is almost certain that our actual state holds far less good than the maximally good state. This only seems to me a “tragedy”, however, if we could have done something specific to achieve that much better state.

If we can’t see what we could do to allow substantially more creatures to exist, then it isn’t a tragedy that they don’t exist. It is a loss relative to an ideal world where they could exist, but it isn’t a tragedy not to know to create implausibly ideal worlds.

Chalmers Reply #2

In April 2010 I commented on David Chalmers’ singularity paper:

The natural and common human obsession with how much [robot] values differ overall from ours distracts us from worrying effectively. … [Instead:]
1. Reduce the salience of the them-us distinction relative to other distinctions. …
2. Have them and us use the same (or at least similar) institutions to keep peace among themselves and ourselves as we use to keep peace between them and us.

I just wrote a 3000 word new comment on this paper, for a journal. Mostly I complain Chalmers didn’t say much beyond what we should have already known. But my conclusion is less meta:

The most robust and promising route to low cost and mutually beneficial mitigation of these [us vs. superintelligence] conflicts is strong legal enforcement of retirement and bequest contracts. Such contracts could let older generations directly save for their later years, and cheaply pay younger generations to preserve old loyalties. Simple consistent and broad-based enforcement of these and related contracts seem our best chance to entrench the enforcement of such contracts deep in legal practice. Our descendants should be reluctant to violate deeply entrenched practices of contract law for fear that violations would lead to further unraveling of contract practice, which threatens larger social orders built on contract enforcement.

As Chalmers notes in footnote 19, this approach is not guaranteed to work in all possible scenarios. Nevertheless, compare it to the ideal Chalmers favors:

AI systems such that we can prove they will always have certain benign values, and such that we can prove that any systems they will create will also have those values, and so on … represents a sort of ideal that we might aim for (p.35).

Compared to the strong and strict controls and regimentation required to even attempt to prove that values disliked by older generations could never arise in any later generations, enforcing contracts where older generations pay younger generations to preserve specific loyalties seems to me a far easier, safer and more workable approach, with many successful historical analogies on which to build.

Sleeping Beauty’s Assistant

The Sleeping Beauty problem:

Sleeping Beauty goes into an isolated room on Sunday and falls asleep. Monday she awakes, and then sleeps again Monday night. A fair coin is tossed, and if it comes up heads then Monday night Beauty is drugged so that she doesn’t wake again until Wednesday. If the coin comes up tails, then Monday night she is drugged so that she forgets everything that happened Monday – she wakes Tuesday and then sleeps again Tuesday night. When Beauty awakes in the room, she only knows it is either heads and Monday, tails and Monday, or tails and Tuesday. Heads and Tuesday is excluded by assumption. The key question: what probability should Beauty assign to heads when she awakes?

The literature is split: most answer 1/3, but some answer 1/2 (and a few give other answers). Here an interesting variation:

Imagine Sleeping Beauty has a (perhaps computer-based) assistant. Like Beauty, the assistant’s memory of Monday is erased Monday night, but unlike Beauty, she is not kept asleep on Tuesday, even if the coin comes up heads. So when Beauty is awake her assistant is also awake, and has exactly the same information about the coin as does Beauty. But the assistant also has the possibility of waking up to see Beauty asleep, in which case the assistant can conclude that it is definitely heads on Tuesday. The key question: should Beauty’s beliefs differ from her assistant’s?

Since the assistant knows that she might awake to see Beauty asleep, and conclude heads for sure, the fact that the assistant does not see this clearly gives her info. This info should shift her beliefs away from heads, with the assistant’s new belief in heads being less than half. (If she initially assigned an equal chance to waking Monday versus Tuesday, her new belief in heads is one third.) And since when Beauty awakes she seems to have exactly the same info as her assistant, Beauty should also believe less than half.
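The assistant’s update can be checked exactly (a sketch assuming, as above, that she initially assigns an equal chance to her awakening being Monday versus Tuesday):

```python
# Exact calculation of the assistant's posterior over the coin, assuming she
# treats her Monday and Tuesday awakenings as equally likely a priori.
from fractions import Fraction

# Each of the assistant's four (coin, day) situations gets weight 1/2 * 1/2.
weight = Fraction(1, 2) * Fraction(1, 2)
situations = [
    ("heads", "Mon", True),   # Beauty awake
    ("heads", "Tue", False),  # Beauty asleep: the tell-tale case she doesn't see
    ("tails", "Mon", True),   # Beauty awake
    ("tails", "Tue", True),   # Beauty awake
]

# Condition on what the assistant actually observes: Beauty is awake.
consistent = [s for s in situations if s[2]]
p_heads = sum(weight for s in consistent if s[0] == "heads") / (weight * len(consistent))

# Not seeing Beauty asleep shifts the assistant's belief in heads below 1/2.
assert p_heads == Fraction(1, 3)
```

Ruling out the heads-and-Tuesday situation leaves three equally weighted awakenings, only one of which is heads, giving the 1/3 stated in the text.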

I can’t be bothered to carefully read the many papers on the Sleeping Beauty problem to see just how original this variation is. Katja tells me it is a variation on an argument of hers, and I believe her. But I’m struck by a similarity to my argument for common priors based on the imagined beliefs of a “pre-agent” who existed before you, uncertain about your future prior:

Each agent is asked to consider the information situation of a “pre-agent” who is not sure which agents will get which priors. Each agent can have a different pre-agent, but each agent’s prior should be consistent with his pre-agent’s “pre-prior,” in the sense that the prior equals the pre-prior conditional on the key piece of information that distinguishes them: which agents actually get which priors. The main result is that an agent can only have a different prior if his pre-agent believed the process that produced his prior was special. (more)

I suggest we generalize these examples to a rationality principle:

The Assistant Principle: Your actual beliefs should match those of some imaginable rational (perhaps computer-based) assistant who lived before you, who will live after you, who would have existed in many other states than you, and who came to learn all you know when you learned it, but was once highly uncertain.

That is, there is something wrong with your beliefs if there is no imaginable assistant who would now have exactly your beliefs and info, but who also would have existed before you, knowing less, and has rational beliefs in all related situations. Your beliefs are supposed to be about the world out there, and only indirectly about you via your information. If your beliefs could only make sense for someone who existed when and where you exist, then they don’t actually make sense.

Added 8a: Several helpful commenters show that my variation is not original – which I consider to be a very good thing. I’m happy to hear that academia has progressed nicely without me! 🙂

What Is “Belief”?

Richard Chappell has a couple of recent posts on the rationality of disagreement. As this fave topic of mine appears rarely in the blogosphere, let me not miss this opportunity to discuss it.

In response to the essential question “why exactly should I believe I am right and you are wrong,” Richard at least sometimes endorses the answer “I’m just lucky.” This puzzled me; on what basis could you conclude it is you and not the other person who has made a key mistake? But talking privately with Richard, I now understand that he focuses on what he calls “fundamental” disagreement, where all parties are confident they share the same info and have made no analysis mistakes.

In contrast, my focus is on cases where parties assume they would agree if they shared the same info and analysis steps.  These are just very different issues, I think.  Unfortunately, they appear to be more related than they are, because of a key ambiguity in what we mean by “belief.”  Many common versions of this concept do not “carve nature at the relevant joints.”  Let me explain.

Every decision we make is influenced by a mess of tangled influences that can defy easy classification. But one important distinction, I think, is between (A) influences that come most directly from inside of us, i.e., from who we are, and (B) influences that come most directly from outside of us. (Yes, of course, indirectly each influence can come from everywhere.) Among outside influences, we can also usefully distinguish (B1) influences that we intend to track the particular outside things we are reasoning about from (B2) influences that come from rather unrelated sources.

For example, our attitude toward rain soon might be influenced by (A) our dark personality, that makes us expect dark things, and from (B1) seeing dark clouds, which is closely connected to the processes that make rain.  Our attitude toward rain might also be influenced by (B2) broad social pressures to make weather forecasts match the emotional mood of our associates, even when this has little relation to whether there will be rain.

Differing attitudes between people on rain soon are mainly problematic regarding (B1) aspects of our mental attitudes which we intend to have track that rain. Yes of course if we are different inside, and are ok with remaining different in such ways, then it is ok for our decisions to be influenced by such differences. But such divergence is not so ok regarding the aspects of our minds that we intend to track things outside our minds.

Imagine that two minds intend for certain aspects of their mental states to track the same outside object, but then they find consistent or predictable differences between their designated mental aspects. In this case these two minds may suspect that their intentions have failed. That is, their disagreement may be evidence suggesting that for at least one of them other influences have contaminated mental aspects that person had intended would just track that outside object.

This is to me the interesting question in the rationality of disagreement: how do we best help our minds to track the world outside us in the face of apparent disagreements? This is just a very different question from what sort of internal mental differences we are comfortable with having and acknowledging.

Unfortunately, most discussions about “beliefs” and “opinions” are ambiguous regarding whether those who hold such things intend for them to just be mental aspects that track outside objects, or whether such things are intended to also reflect and express key internal differences. Do you want your “belief” in rain to just track the chance it will rain, or do you also want it to reflect your optimism toward life, your social independence, etc.?  Until one makes more clear what mental aspects exactly are referred to by the word “belief”, it seems very hard to answer such questions.

This ambiguity also clouds our standard formal theories. Let me explain.  In standard expected-utility decision theory, the two big influences on actions are probabilities and utilities, with probabilities coming from a min-info “prior” plus context-dependent info. Most econ models of decision making assume that all decision makers use expected utility and have the same prior. For example, agents might start with the same prior, get differing info about rain, take actions based on their differing info and values, and then change their beliefs about rain after seeing the actions of others. In such models, info and thus probability is (B1) what comes from outside agents to influence their decisions, while utility (A) comes from inside. Each probability is designed to be influenced only by the thing it is “about,” minimizing influence from (A) internal mental features or (B2) unrelated outside sources.
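The common-prior story above can be sketched as a toy Bayesian model. This is only a minimal illustration; the signal structure and all numbers are my own hypothetical choices, not from the post.

```python
# Toy version of the common-prior setup: two agents share a prior over
# rain, each updates on a private signal, and once each learns the
# other's signal (say, by observing their action) they agree exactly.
# All numbers are hypothetical.

def posterior(prior, like_rain, like_dry):
    """Bayes' rule for a binary hypothesis given one signal's likelihoods."""
    joint_rain = prior * like_rain
    joint_dry = (1 - prior) * like_dry
    return joint_rain / (joint_rain + joint_dry)

PRIOR = 0.3        # shared prior P(rain)
P_DARK_RAIN = 0.8  # P(dark clouds | rain)
P_DARK_DRY = 0.2   # P(dark clouds | no rain)

# Agent 1 sees dark clouds; agent 2 sees clear skies.
p1 = posterior(PRIOR, P_DARK_RAIN, P_DARK_DRY)
p2 = posterior(PRIOR, 1 - P_DARK_RAIN, 1 - P_DARK_DRY)

# After inferring each other's signals, both condition on the pooled
# evidence. Here the two opposite signals have reciprocal likelihood
# ratios, so they cancel and both agents return to the 0.3 prior.
pooled = posterior(p1, 1 - P_DARK_RAIN, 1 - P_DARK_DRY)

print(round(p1, 3), round(p2, 3), round(pooled, 3))  # 0.632 0.097 0.3
```

The key feature is that any remaining disagreement after pooling evidence would have to come from the (A) or (B2) influences the agents intended to exclude, since their (B1) channels now carry identical information.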

In philosophy, however, it is common to talk about the possibility that different people have differing priors. Also, for every set of consistent decisions one could make, there are an infinite number of different pairs of probabilities and utilities that produce those decisions. So one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors.
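That equivalence between probability-utility pairs can be checked numerically. The two-state, two-action example below is my own hypothetical illustration: rescaling probabilities by arbitrary positive weights (renormalized) while dividing utilities by the same weights leaves every action's expected-utility ranking unchanged.

```python
# For any positive weights f(s), the rescaled pair
#   p'(s) = p(s) * f(s) / z   and   u'(a, s) = u(a, s) / f(s)
# gives each action an expected utility exactly 1/z times its original
# value, so observed decisions cannot pin down a unique prior.
# States, actions, and numbers below are hypothetical.

p = {"rain": 0.4, "dry": 0.6}                       # one candidate prior
u = {("walk", "rain"): -2.0, ("walk", "dry"): 5.0,  # one candidate utility
     ("bus", "rain"): 1.0, ("bus", "dry"): 2.0}

f = {"rain": 3.0, "dry": 0.5}                       # arbitrary positive weights
z = sum(p[s] * f[s] for s in p)                     # normalizer
p_alt = {s: p[s] * f[s] / z for s in p}             # rescaled "prior"
u_alt = {(a, s): u[(a, s)] / f[s] for (a, s) in u}  # rescaled utility

def eu(prob, util, action):
    """Expected utility of an action under given probabilities and utilities."""
    return sum(prob[s] * util[(action, s)] for s in prob)

# Expected utilities differ only by the constant factor 1/z, so both
# representations recommend the same action.
for a in ("walk", "bus"):
    assert abs(eu(p, u, a) - eu(p_alt, u_alt, a) * z) < 1e-9
```

Since the weights f(s) can be anything positive, this one set of decisions is consistent with infinitely many different "priors", which is what lets the same situation be modeled with either common or uncommon priors.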

Thus in contrast to the practice of most economists, philosophers’ use of “belief” (and “probability” and “prior”) confuses or mixes (A) internal and (B) external sources of our mental states. Because of this, it seems pointless for me to argue with philosophers about whether rational priors are common, or whether one can reasonably have differing “beliefs” given the same info and no analysis mistakes. We would do better to negotiate clearer language to talk about the parts of our mental states that we intend to track what our decisions are about.

Since I’m an economist, I’m comfortable with the usual econ habit of using “probability” to denote such outside influences intended to track the objects of our reasoning.  (Such usage basically defines priors to be common.) But I’m willing to cede words like “probability”, “belief” or “opinion” to other purposes, if other important connotations need to be considered.

However, somewhere in our lexicon for discussing mental states we need words to refer to something like what econ models usually mean by “probabilities”, i.e., aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states.

(Of course all this can be applied to “beliefs” about our own minds, if we consider influences coming from our minds as if it were something outside.)


Bad News: Kant & Bets

The famous philosopher Kant saw bets as encouraging thoughtfulness and discouraging self-deception:

The usual touchstone of whether what someone asserts is mere persuasion or at least a subjective conviction, i.e., firm belief, is betting. Often someone pronounces his propositions with such confident and inflexible defiance that he seems to have entirely laid aside all concern for error. A bet disconcerts him. Sometimes he reveals that he is persuaded enough for one ducat but not for ten. For he would happily bet one, but at ten he suddenly becomes aware of what he had not previously noticed, namely that it is quite possible that he has erred. (Critique of Pure Reason, A824/B852; more; HT Tyler)

If we were to see life out there in the universe, at or below our level of development, that would be bad news regarding our future.  It would suggest that more of the great filter that stands between dead matter and expanding civilization lies ahead of our place on that path. Similarly, it is bad news to hear that Kant had a high opinion of the accuracy advantages of bets.  Let me explain.

I hope for a future where betting markets are a commonly used mechanism to create official consensus beliefs, but I must explain the fact that they are not already often used this way.  What barriers have stood in their way? One barrier is widespread skepticism about bet accuracy. But hearing of Kant’s well-known position reduces my estimate of this barrier; many respected people have long respected bet accuracy. So I must increase my estimate of the difficulty of other barriers.  Alas, since skepticism about accuracy seems one of the easiest barriers to overcome, via track records and lab experiments, I must increase my estimate of the overall difficulty of my goal.  I’ll keep trying though.
