One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:
Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.
‘But I thought you would be looking for me in Damascus’, said the man.
‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.
That is, Death’s foresight takes into account any reactions to Death’s activities.
Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.
To expect S to work, we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if it were only through some bizarre fluke that S became possible, and a strategy might improve our chances even though we would remain almost certain to fail. But common features, such as awareness of the Great Filter, would not suffice to avoid future filters.
A comment by Anonymous on Three Worlds Collide:
After reading this story I feel myself agreeing with Eliezer more on his views and that seems to be a sign of manipulation and not of rationality.
Philosophy expressed in form of fiction seems to have a very strong effect on people – even if the fiction isn't very good (ref. Ayn Rand).
Robin has similar qualms:
Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the other authors of fictional moral dilemmas?
If I'm going to read a literature that might influence my moral beliefs, I'd rather read professional philosophers and other academics making more explicit arguments.
I replied that I had taken considerable pains to set out the explicit arguments before daring to publish the story. Moreover, I had gone to considerable lengths to present the Superhappy argument in the best possible light. (The opposing viewpoint is the counterpart of the villain; you want it to look as reasonable as possible for purposes of dramatic conflict, the same principle whereby Frodo confronts the Dark Lord Sauron rather than a cockroach.)
Robin didn't find this convincing:
Continue reading "(Moral) Truth in Fiction?" »
It seems people are overconfident about their moral beliefs. But how should one reason and act if one acknowledges being uncertain about morality – not just about applied ethics but about fundamental moral questions? What if you don't know which moral theory is correct?
It doesn't seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel, because many moral theories state that you should not always maximize expected utility.
Even if we limit consideration to consequentialist theories, it is still hard to see how to combine them in the standard decision-theoretic framework. For example, suppose you give X% probability to total utilitarianism and (100-X)% to average utilitarianism. An action might then add 5 utils to total happiness while decreasing average happiness by 2 utils. (This could happen, e.g., if you create a new happy person who is less happy than the people who already existed.) What do you do, for different values of X?
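To make the difficulty concrete, here is a minimal sketch of the naive "plug it into expected utility" approach for the example above. The function name and the numbers are mine, and the sketch assumes that total-utilitarian utils and average-utilitarian utils are measured on one common scale, which is precisely the inter-theoretic comparison the theories themselves do not supply:

```python
def expected_moral_value(x_percent, d_total=5, d_average=-2):
    """Probability-weighted value of an action that adds `d_total` utils
    to total happiness and changes average happiness by `d_average` utils,
    given `x_percent` credence in total utilitarianism and the remainder
    in average utilitarianism."""
    x = x_percent / 100
    return x * d_total + (1 - x) * d_average

# The naive rule says: perform the action iff the expectation is positive.
for x in (0, 20, 40, 60, 80, 100):
    print(x, expected_moral_value(x))
# The sign flips at X = 200/7 ~ 28.6%. But this crossover depends entirely
# on the assumed exchange rate between the two kinds of "utils" - rescale
# either theory's units and you get a different answer.
```

The point of the sketch is that the arithmetic is trivial once a common scale is assumed; the philosophical work lies in justifying any such scale.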
The problem gets even more complicated if we consider not only consequentialist theories but also deontological theories, contractarian theories, virtue ethics, etc. We might even throw various meta-ethical theories into the stew: error theory, relativism, etc.
I'm working on a paper on this together with my colleague Toby Ord. We have some arguments against a few possible "solutions" that we think don't work, and on the positive side some tricks that handle a few special cases. Beyond that, the best we have managed so far is a kind of metaphor. We don't think it is literally and exactly correct, and it is a bit under-determined, but it seems to get things roughly right and it might point in the right direction:
Continue reading "Moral uncertainty – towards a solution?" »
Followup to: Joy in the Merely Good
Raise the topic of cryonics, uploading, or just medically extended lifespan/healthspan, and some bioconservative neo-Luddite is bound to ask, in portentous tones:
"But what will people do all day?"
They don't try to actually answer the question. That is not a bioethicist's role, in the scheme of things. They're just there to collect credit for the Deep Wisdom of asking the question. It's enough to imply that the question is unanswerable, and therefore, we should all drop dead.
That doesn't mean it's a bad question.
It's not an easy question to answer, either. The primary experimental result in hedonic psychology – the study of happiness – is that people don't know what makes them happy.
And there are many exciting results in this new field, which go a long way toward explaining the emptiness of classical Utopias. But it's worth remembering that human hedonic psychology is not enough for us to consider, if we're asking whether a million-year lifespan could be worth living.
Fun Theory, then, is the field of knowledge that would deal in questions like:
- "How much fun is there in the universe?"
- "Will we ever run out of fun?"
- "Are we having fun yet?"
- "Could we be having more fun?"
Continue reading "Prolegomena to a Theory of Fun" »
Richard Chappell rises to my challenge:
We're not doing the world any favours by populating the future with our primitive 20th(-21st) century minds. … So cryonicists must assume that it is better to extend an existing life than to create a new one. … Many people are (quite reasonably!) wedded to the particularities of their life and situation, … insofar as this newly awakened person would be enculturated into a new society, acquiring new values and life projects, they are effectively becoming a new and different person. But … then revival is unjustified: a better new life could be created 'from scratch', so to speak. So cryonics is (at best) only justified for people whose central concerns and life projects could continue to be fruitfully pursued upon revival in a transhuman society.
Imagine a volcano is about to destroy an island, and we go to the local villages telling the natives that boats are waiting at the shore, urging people to leave without delay. It would be odd to object, saying that their lives somewhere else will be different enough to make them different people, and so the world would be better off just raising new people to live in those other places. Perhaps you just want us to remind people that maybe they would really rather die than live such a different life, but even that seems a bit odd.
Continue reading "Trade With The Future" »
In the early days of this blog, I would pick fierce arguments with Robin about the no-disagreement hypothesis. Lately, however, reflection on things like public reason have brought me toward agreement with Robin, or at least moderated my disagreement. To see why, it’s perhaps useful to take a look at the newspapers…
the pope said the book “explained with great clarity” that “an interreligious dialogue in the strict sense of the word is not possible.” In theological terms, added the pope, “a true dialogue is not possible without putting one’s faith in parentheses.”
What are we to make of a statement like this?
Continue reading "Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?" »
Following the announcement last week that Oxford University’s controversial Biomedical Sciences building is now complete and will be open for business in mid-2009, the ethical issues surrounding the use of animals for scientific experimentation have been revisited in the media—see, for example, here, here, and here.
The number of animals used per year in scientific experiments worldwide has been estimated at 200 million—well in excess of the population of Brazil and over three times that of the United Kingdom. If we take the importance of an ethical issue to depend in part on how many subjects it affects, then the ethics of animal experimentation at the very least warrants consideration alongside some of the most important issues in this country today, and arguably exceeds them in importance. So, what is being done to address this issue?
In the media, much effort seems to be devoted to discrediting concerns about animal suffering and reassuring people that animals used in science are well cared for, and relatively little effort is spent engaging with the ethical issues. However, it seems likely that no amount of reassurance about primate play areas and germ-controlled environments in Oxford’s new research lab will allay existing concerns about the acceptability of, for example, inducing heart failure in mice or inducing Parkinson’s disease in monkeys—particularly since scientists are not currently required to report exactly how much suffering their experiments cause to animals. Given the suffering involved, are we really sure that experimenting on animals is ethically justifiable?
In attempting to answer this question, it is disturbing to note some inconsistencies in popular views of science. Consider, for example, that by far the most common argument in favour of animal experimentation is that it is an essential part of scientific progress. As Oxford’s oft-quoted Professor Alastair Buchan reminds us, ‘You can’t make a head injury in a dish, you can’t create a stroke in a test tube, you can’t create a heart attack on a chip: it just doesn’t work’. Using animals, we are told, is essential if science is to progress. Since many people are apparently convinced by this argument, they must therefore believe that scientific progress is something worthwhile—that, at the very least, its value outweighs the suffering of experimental animals.

And yet, at the same time, we are regularly confronted with the conflicting realisation that, far from viewing science as a highly valuable and worthwhile pursuit, the public is often disillusioned and exasperated with science. Recently, for example, people have expressed bafflement that scientists have spent time and money on seemingly trifling projects—such as working out the best way to swat a fly and discovering why knots form—and on telling us things that we already know: that getting rid of credit cards helps us spend less money, and that listening to very loud music can damage hearing. Why, when the public often seems to despair of science, do so many people appear to be convinced that scientific progress is so important that it justifies the suffering of millions of animals?

Continue reading "Animal experimentation: morally acceptable, or just the way things always have been?" »
What we actually want often diverges from what we wish we wanted. One of the places where this conflict is clearest is in the features of others that attract us. We are attracted to many features, including features of bodies, minds, and social networks. We clearly put a large weight on body features, but we like to think we place more weight on other features, such as mental ones. When we see how much we actually care about bodies we are disturbed, and perceive a conflict between what we want and what we want to want. So why is there a conflict anyway – why are we built not to want to want what we want?
Consider that those with a better ability to distinguish a feature would naturally put more weight on that feature when choosing. If there is a pile of fruit and I have a short time to grab some before others take them all, then if I can’t see colors well I’ll put less emphasis on color in my choice. After all, those who can see colors better will be better able to choose the ones with good colors. Similarly, the better I am at distinguishing smart people, the more emphasis I’d naturally place on smarts when choosing people.
It is pretty easy for most people to tell how pretty someone is, but it is harder to tell how smart they are. Having a high ability to tell how smart someone is says good things about you – in general it says you are pretty smart too. And thus the fact that you put a high weight on smarts also says good things about you. Since you have an interest in being thought well of, you also have an interest in being thought of as someone who puts a high weight on smarts.
And serving your interests, evolution may well have arranged your mind to fool others into thinking that you put more weight on smarts than you actually do. And this I suggest is the usual source of the conflict between what we want, and what we want to want. We want what is useful to us, but we want to want what makes us look good to others. We often fool ourselves into thinking that what we want to want is what we do want, and thereby also often fool others into thinking well of us.
Note that in the case considered here, of looks vs. smarts, it is not at all obvious that what we want to want is morally better than what we actually want. (From a conversation with Katja Grace on this, her birthday.)
Continuation of: Grasping Slippery Things
Followup to: Possibility and Could-ness, Three Fallacies of Teleology
When I try to hit a reduction problem, what usually happens is that I "bounce" – that’s what I call it. There’s an almost tangible feel to the failure, once you abstract and generalize and recognize it. Looking back, it seems that I managed to say most of what I had in mind for today’s post, in "Grasping Slippery Things". The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A’] that follows from an initial starting world A operated on by a set of physical laws f." Where realizable contains the full mystery of "possible" – but you’ve made it into a basic symbol, and added some other symbols: the illusion of formality.
There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray – so far astray that I simply can’t make use of their years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.
The proliferation of modal logics in philosophy is a good illustration of one major reason: Modern philosophy doesn’t enforce reductionism, or even strive for it.
Most philosophers, as one would expect from Sturgeon’s Law, are not very good. Which means that they’re not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms. Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et al.’s reduction of causality or Julian Barbour’s reduction of time are rare.
So what these philosophers do instead, is "bounce" off the problem into a new modal logic: A logic with symbols that embody the mysterious, opaque, unopened black box. A logic with primitives like "possible" or "necessary", to mark the places where the philosopher’s brain makes an internal function call to cognitive algorithms as yet unknown.
And then they publish it and say, "Look at how precisely I have defined my language!"
Continue reading "Against Modal Logics" »
This seems a deep insight simple enough to explain in a blog post (and so I’m probably not the first to see it): the self-indication approach to indexical uncertainty solves the time-asymmetry question in physics! To explain this, I must first explain time-asymmetry and indexical uncertainty.
A deep question in physics is time asymmetry – why doesn’t stuff happen as often "backwards" in time? We have no explanation for the tiny CP-violation in particle physics, but all the other time asymmetries are thought to arise from a very-low early-universe entropy. The most popular explanation for this is inflation, especially eternal inflation, which says that any small space-time region satisfying certain conditions is connected to infinitely many large time-asymmetric regions much like what we see around us. Alas, the chance that any small region satisfies these inflation conditions is extremely small. As a recent paper puts it:
Initial conditions which give the big bang a thermodynamic arrow of time must necessarily be low entropy and therefore "rare." There is no way the initial conditions can be typical, or there would be no arrow of time, and this fact must apply to inflation and prevent it from representing "completely generic" initial conditions. … If you can regard the big bang as a fluctuation in a larger system it must be an exceedingly rare one to account for the observed thermodynamic arrow of time.
So the question of time-asymmetry reduces to this: why does the universe have enough independently variable small regions that at least one of them gives eternal inflation? That is: why is the universe so big?
Continue reading "Self-Indication Solves Time-Asymmetry" »