Write Your Hypothetical Apostasy

Let's say you have been promoting some view (on some complex or fraught topic – e.g. politics, religion; or any "cause" or "-ism") for some time.  When somebody criticizes this view, you spring to its defense.  You find that you can easily refute most objections, and this increases your confidence.  The view might originally have represented your best understanding of the topic.  Subsequently you have gained more evidence, experience, and insight; yet the original view is never seriously reconsidered.  You tell yourself that you remain objective and open-minded, but in fact your brain has stopped looking and listening for alternatives.

Here is a debiasing technique one might try: writing a hypothetical apostasy.  Remind yourself before you start that unless you later choose to do so, you will never have to show this text to anyone.

Imagine, if you will, that the world's destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete.  The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved.  The catch is that the jury consists of earlier stages of yourself (such as yourself as you were one year ago).  Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form "trust me, I am older and know better" will be ineffective.  Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed/partial/shallow/juvenile/crude/irresponsible/incomplete and generally inadequate your old cherished view is.

(If anybody tries this, feel free to comment below on whether you found the exercise fruitful or not – but no need to state which specific view you were considering or how it changed.)

  • Russell Wallace

    An interesting idea, and I agree it could be valuable if one could pull it off. But is that possible, or would it necessarily be framed within the same assumptions as one’s original beliefs and thereby miss the mark, and in failing, merely confirm the original beliefs?

    That’s not a rhetorical question; it comes from having written a non-hypothetical apostasy a while ago:
    http://www.sl4.org/archive/0608/15606.html
    My ability to write that derived from stepping outside the box and understanding not only that I’d been mistaken, not only what the fallacies were, but also why I had believed them. I don’t think I could possibly have written it before.

    But that doesn’t necessarily mean other people can’t, and I’d be interested in anyone’s experiences trying this.

    However I disagree with Nick in that I think specifics are important. By the same token, if it’s a controversial topic, we should undertake to refrain from arguing the topic itself in this thread.

  • Kazuo Thow

    Thank you for sharing the technique – it may prove very useful.

    What I find interesting about it is that, in an upload society, it would likely be feasible to actually present such a hypothetical apostasy to a jury of your past selves. One need only make occasional backup copies of oneself and un-freeze them when a sufficiently important “crisis of faith” situation arises. Of course, to bridge the gap between past selves and the present self, it would be preferable not to bring back the jury from too long ago. Mind grafts a la Diaspora seem unlikely to work between significantly different personalities. But I’m doing little more than speculating here; such a method may not even be necessary in a highly advanced society of machine minds.

  • Russell Wallace

    In the case of a hypothetical apostasy, would one’s present self not make an equally good juror?

  • Kazuo Thow

    Not necessarily. At least the way Nick Bostrom proposed it above, it’s about imagining a jury of past selves, and the process of doing so carries a nontrivial chance of introducing new biases of an entirely different sort (anchoring on the new hypothetically adopted view, over-estimating the ease of persuading one’s past selves, etc.). Of course, I’m not saying that talking to a real jury of past selves is entirely without issues, but it may prove useful in a subset of all situations in which this sort of de-biasing is needed.

  • billswift

    Marc Stiegler wrote something similar, but more concise, two decades ago. The epigraph to one chapter in "David's Sling" reads:
    "How emotionally entangled are you with your point of view?
    Test yourself – defend an opposing view, believing your life depends upon it."

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Russell:
    What kind of specifics do you think are important in this context? (It’s easier to motivate oneself to write a very short text. I assume that a jury of recent past selves would easily be able to understand one’s arguments; so long explanations would not be needed. If one does succeed in opening up one’s mind to some wiser alternative, then the details can be worked out later.)

    I guess a jury of recent past selves is preferable to one of current self – in order to create a little more psychological distance between the speaker and the imaginary audience.

    But it’s not as if this technique has been thoroughly tested and optimized. It would be interesting to get some data, even if merely anecdotal.

    Btw, one theoretical rationale for trying this is that there is some empirical evidence that when people are assigned the task of writing an essay defending some (randomly assigned) position X, they tend to become more favorably inclined to X afterwards. Since we are constantly defending our own views, one might need to counteract this effect by writing something against these views.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Bill: Thanks; yes that’s similar and shorter. The twist here is that you should try to persuade (recent versions of) *yourself*. I think this leads you to search for true reasons for changing your mind rather than reasons that are rhetorically effective.

  • Russell Wallace

    What kind of specifics do you think are important in this context?

    I would be interested in finding out whether one can, and hopefully get some hints on how to: understand arguments outside one’s own side’s frame of reference; understand arguments in terms the other side’s supporters would agree with; understand the most basic assumptions on which one’s own arguments rest, and all of one’s motives for believing one’s own arguments.

    I think reference to actual text is helpful for these – it’s not a problem if the text is very short.

  • Manon de Gaillande

    I used to hold a view very strongly, but suspected it was mistaken; however, my past self expected my current self to doubt this belief for irrational reasons, and had therefore resolved to keep the belief. I therefore only allowed myself to drop the belief when I had arguments that would convince my past self, rather than only myself. Obviously I’m not sure I did it right, since my past self can’t answer me, but it does seem to have helped. Doing the same with currently cherished beliefs, rather than formerly cherished beliefs as I did, sounds like a good idea.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Russell: Ok, if that’s what you mean then I don’t think we disagree.

    However, in this context I would be wary of thinking in terms of “sides” and “supporters”. The idea here is not to “try to be fair to the other side”. For the purposes of this particular exercise, you might as well allow yourself the ego-supporting assumption that the other side are morons and if they are right about anything it is by sheer coincidence.

  • frelkins

    @Nick

    search for true reasons for changing your mind rather than reasons that are rhetorically effective

    Is this possible? How could I judge what would be a “true reason,” considering the immense and nearly overwhelming power of human rationalization? We humans are socially adapted to be Persuaders, not Informers, and it seems we are most Persuasive with ourselves.

    Or as Robin said recently: “the first step to wisdom is to realize how little we know about why we do what we do, or why we think what we think.” I could easily imagine writing out a nice 10-bullet list of facts in one direction or another, but why I accept them, I’m not sure I’m in a position to know – if I am honest with myself.

  • http://transhumangoodness.blogspot.com Roko

    I routinely use this technique in my head, i.e. from reading OB for long enough I have developed an internal “critic” who always tries to find the strongest argument against any position I support and feel emotionally good about supporting.

    I think that one has to be careful with this, though. I have often become underconfident by using this technique, and underconfidence hurts us proportionately more than overconfidence. The most pragmatically effective point on the overconfident/underconfident continuum is shifted towards the overconfident end because of the way our confidence in our beliefs motivates us, because confidence makes us better at signaling competence to others, and because most mistakes in this world are recoverable.

    Most people are overconfident, so some self-doubt is good. But, to borrow some wisdom from Bill McKibben, “Some is good doesn’t mean that more is better”.

  • http://macroethics.blogspot.com nazgulnarsil

    wait everyone doesn’t do this? the first thing you should do when you read a text that you agree almost completely with is hunt down the strongest criticisms available.

  • steven

    If you have a cherished view, why not just stop cherishing it?

  • http://silasx.blogspot.com Silas Barta

    I’m actually going through a real-life version of this. On a topic or two I won’t name (as per Nick_Bostrom’s request), I’ve been arguing with people on internet forums who held my former view and who respond with exactly what I would have said back when I believed it. Of course, in this case, I believe what I’m arguing, and the world isn’t going to end if I fail, although sometimes I act like it :-P

    It made me realize I was previously trying to subtly define my position to be correct.

  • http://www.vetta.org Shane Legg

    I’ve done something similar on a number of occasions in my research. There will be some idea that kind of annoys me, for example a ‘No Free Lunch’ theorem for learning algorithms. Part of the reason I don’t like the result will be because I disagree with the arguments for it. I then try to get at the essence of the result, and then try to come up with the best argument I can for the result, or perhaps a slightly different or more restricted version of the result that I think might be defensible. At this point it becomes like a kind of game and I, seemingly, forget much of my previous bias.

    The outcome of all this isn’t usually that I change my mind about the original result, but rather that I find something new altogether: something that retains some of the flavour of the original result but in a new and surprising way. It’s as if by trying to spend a few days playing an honest but hard game for the other team, my thinking is sufficiently perturbed to make me see previously unnoticed things. This is probably my most productive source of interesting new research ideas.

  • http://profile.typepad.com/6p010537043dce970c/ Wei Dai

    To nazgulnarsil and Shane: Nick is talking about writing a hypothetical apostasy for one of your long-held views on some complex or fraught topic, not a new idea that you just came across and happened to agree with, or a vague annoyance you feel about a math theorem.

    I tried Nick’s suggestion, and didn’t get very far. It’s psychologically painful to question a long-held belief, and hard to obtain enough motivation to write convincingly for the opposing view. Imagining that the world’s destruction is at stake doesn’t help, since I know that in fact it’s not at stake. (Maybe we could take advantage of the cognitive mechanisms involved in dreaming to instill a temporary belief that the world’s destruction really is at stake…)

    I’d be interested to know if anyone manages to write a few paragraphs or even sentences of truly persuasive text.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    I don’t think questioning a belief is the important part; building on opposing evidence is. When you’ve got a strong position on some topic, evidence that is dismissed by your expertise in this position may get discarded, never leaving a scratch. Instead, it should be systematically integrated, forming positive knowledge that may one day become formidable enough to engage your long-held belief.

    This exercise is about integrating evidence you are aware of, evidence that was isolated from the rest of your knowledge. The objective is not to fight the existing belief, but to convert the isolated pieces of evidence into a form that can be more rationally perceived.

  • michael vassar

    Honestly, I don’t know where I would begin. I’m always casually seen as advocating some ism or another, often singularitarianism, but when actually engaged I pretty much always really do advocate a very nuanced and not easily categorized position (e.g. intelligence explosion as a very likely consequence of X, Y, and Z, which are all casually taken for granted by people who don’t accept the intelligence explosion but which are all non-negligibly uncertain). I mean, I was against the Taliban pretty much without qualification back in the late 1990s, but only in the sense of saying that it would be good if they were defeated, not in the sense of saying that the US should definitely remove them.

  • fishbane

    I actually use this method all the time. On everything larger than a grocery trip, I play my own devil’s advocate. On big issues, I do write it out in my journal. (When I’m dead and if anyone cares, they’ll probably think I was one of those rare split personalities.)

    This is how I talked myself down from an early belief in anarcho-capitalism. What can I say, the Cypherpunks were an easy influence in my college days, when I was impressionable.

  • Carl Shulman

    Michael,

    I don’t see why you can’t argue against nuanced positions.

  • Jeffrey Soreff

    Russell Wallace – after reading your SL4 post, and the subsequent thread, I’m still fuzzy about what the substance of your apostasy was. It looked, from the subsequent discussion, like it was about the technical feasibility of hard takeoff – but, from your original post, I’m unsure of this. The point that you made, about 1% odds having become psychologically acceptable to you, seems orthogonal to the question of feasibility of hard takeoff. If hard takeoff was feasible, but if the necessary seed was large enough and complex enough to require multiple decades of effort, and if human civilization was under a time limit of the same order of magnitude, it seems like the same odds would apply. Am I conflating different issues, or misunderstanding what you were saying?

  • http://macroethics.blogspot.com nazgulnarsil

    does it get more cherished than the basic axioms that I use to interpret reality? the main target of your apostasy should be your epistemic system.

  • Russell Wallace

    Jeffrey – you understand correctly. I was conflating things that were logically orthogonal, because they were psychologically related. The reason I refer to it here is that I think there are two kinds of belief revision:

    1. Specific technical issue. I thought a bug was in a particular module of my code, but a test shows otherwise so I look elsewhere. That’s a simple matter of replacing X with not-X.

    2. Major, long-held belief that connects to a lot of other things. This is not just going to be a matter of replacing X with not-X. It’s going to require going right back and asking: why did I believe X in the first place? Why did I not find the other side’s arguments convincing when other people do? What are the ramifications of all this?

    It’s the second kind of belief revision we’re talking about here. So the question I’m interested in is how can we make this process work, whether in writing a hypothetical apostasy or a non-hypothetical one – and whether it’s at all possible in the hypothetical case.

    That’s why I’m interested in the details of any other cases people can offer, and why I’m disappointed we don’t get to see them in Silas’s case.

  • Senthil

    I’ve not tried writing anything down, but I have thought in a similar way and changed my mind, sometimes taking the opposite viewpoint, or deciding that the initial viewpoint is not as important to me as it seemed when I first spoke about it. But then you look inconsistent. I am not sure how to avoid this. I have no qualms about changing my mind, but people who listened to what I had to say feel let down or cheated.

  • E

    It strikes me that it would be better to practice apologies than apostasies. Publicly rejecting one’s life philosophy (while emotionally difficult and something of an intellectual “feat”) can only happen every few years at most if it’s to be socially meaningful. By the time you’re into your third apostasy people are likely to suspect you’re not impressive for having a flexible mind… but instead are perhaps less than normally skilled at determining which beliefs are worth commitment in the first place.

    On the other hand, good apologies can be done more frequently, with direct positive results for your connection to whoever is being given the apology, and with little fear of major backlash. It’s a skill worth practicing, and the mental habits necessary for realizing when an apology is due are probably related to the suggested exercise.

    Perhaps apologies are not so earth shattering as a full public denunciation of your former world view, but I suspect they are substantially more likely to conduce to rather more prosaic things like happy marriages and the growing respect of one’s colleagues.

  • http://gabylewer.free.fr/freedom_apostasy Manon de Gaillande

    I tried it, and decided to publish the result. The idea I tried to challenge was “I really want to steer my own future”; I made a case for the state of the world depending on your actions, but your decisions getting made by something other than you that shares your values (it may still feel from the inside like they’re your own decisions, for Fun reasons, but you’re allowed to know the truth).

    It showed me I’m confused about freedom, which is, er, a property the algorithm that returns which primitive reachable action will be taken may or may not have. My values will change because of it, but I have no idea how yet.

    The text I wrote is at the linked URL (utf8, 5kB). It may be gibberish, because I didn’t phrase it to be clear to anyone but myself, and because I take for granted points I’ve decided on but other people disagree about.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Would someone with political ambitions be wise to avoid this approach, for fear critics would someday find and quote his reversed-conclusion essay?

  • http://rhollerith.com/blog Richard Hollerith

    Exercises like the one described in the OP can be hard on your health, specifically your immune system. I have a theory that certain social situations naturally tend to suppress human immune function, and that this tendency exists because, in the EEA, the suppression caused difficult-to-fake signs of ill health or lack of prosperity, and the presence of those signs reduced the likelihood of the individual being killed by other members of the group.

    Specifically, I believe that in the EEA, the more directly an individual was in competition with you (for status or sexual partners), the more likely that individual was to end up killing you (though of course the rivalry is probably not the explanation his language-producing brain regions would use to justify the killing). Showing signs of poor health will tend to cause a rival to perceive you as less of a threat, so he will be less likely to kill you.

    In the EEA, any time you found yourself questioning basic moral tenets was probably a time in which you were at high risk of being killed (because being perceived as moral by the others in your group was so essential to survival).

    (It should go without saying that the rate of homicide in the EEA was much higher than it is now.)

    Disclaimer: the above is my own theory; I have never seen any evolutionary psychologist advance it. (I have, though, had a conversation with a doctor, in whose practice the state of the patient’s immune system is a central ongoing concern, who told me that changes in an individual’s perception of his social role or position are an important cause of changes in the state of that individual’s immune system.)

    My reason for writing the above is to support the following humble suggestion to people planning to undergo strenuous sustained “crisis of faith” exercises: for the sake of protecting your health, you might want to postpone the exercise till a time when you are not under any other kind of influence that would also tend to suppress your immune system, such as relocating to a new city, getting divorced, or feeling sick due to exposure to a disease-causing organism or even a toxic chemical (the latter being an important cause of immune suppression in this day and age, IMHO).

    Disclaimer: I am not a health-care professional.

  • Jeffrey Soreff

    Russell, many thanks.

  • Russell Wallace

    Manon – I found your writing perfectly clear, and it’s exactly what I was looking for, thanks!

    And if I understand correctly, it’s a counterexample to my conjecture that this sort of thing can’t be done hypothetically.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    Robin: You could destroy the text. But it might be that people with political ambitions are not well advised in general to strive too hard for rationality, honesty, and self-understanding.

    Michael Vassar: I was not mainly thinking of the case where you believe (specific proposition) P and you must now argue for not-P. Rather, I had in mind that there is some “issue” and you now have some “take” on it; and your task is to argue that this take is inadequate. I don’t know if this is a useful exercise for you (or anybody else). Maybe you could imagine that you are twice as old as you are now and much wiser, and you write a page pointing out the naiveté of some part of your current outlook.

    One fun habit to cultivate is to formulate your own views (in your own mind) in the terms that a clever but utterly dismissive and slightly unfair opponent would use in a debate. This technique is not meant to change your beliefs in any particular direction, nor to make you more uncertain, but simply to create some psychological distance between you and your beliefs. Such detachment might make belief revision easier.

  • (Hope)Fully Rational

    I aspire to be fully rational, which means I don’t hold any views that aren’t supported by facts verified by experiment. Care to share some cherished beliefs that I could challenge? I honestly have none; I have no beliefs except ones that science has provided. My mind is otherwise as squeaky clean as it was when it emerged from the big Zero.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    “I don’t hold any views that aren’t supported by facts verified by experiment”

    I’d like to believe you on this, but I can’t until I see a journal citation.

    Next!

  • mjgeddes

    My first ‘take’ on coming to transhumanist lists was ‘Transhumanists are hyper-rational’. My (current) changed ‘take’ is ‘These people are actually no more rational than average’.

    More seriously, my views on both Bayes and economics have definitely changed. Initially, the ‘big issue claims’ I blindly believed were:

    ‘Bayes is the secret to the universe’
    ‘Libertarianism is the best political system’

    Composing Q&A (Question and Answer) lists can be helpful, because you have to write down lists of objections and then the counter-arguments. This is how I talked myself out of Libertarianism, for example.

    As to Bayes, about 2 years ago I composed a gem of a philosophical argument which left me gobsmacked, and I immediately convinced myself that Bayesian Induction is merely a special case of Analogy Formation.

    Here is a less than 100-word essay that would persuade my past self of the debating proposition that: ‘Bayes is not the ultimate foundation of logic, but analogy formation is’:

    ‘Take the famous design adage: Form follows Function and Function follows Interface. Now try something original: Apply it to abstract concepts! Then we have: Geometric Solids are the Forms of Physics, Forces are the Functions of Physics and Fields are the Interfaces of Physics. Another example, this time applied to pure math concepts: Permutations are the Forms of Math, Relations are the Functions of Math and Sets are the Interfaces of Math. In Logic: Deductions are the Forms of Logic, Inductions (Bayes) are the Functions of Logic, and Analogies are the Interfaces of Logic’

    A brilliant proof of the primacy of analogy formation or (knowing the famous hopeless Geddes luck), yet another dead-end? ;)

  • http://macroethics.blogspot.com nazgulnarsil

    glibness aside, Hopefully Rational:
    you can question your own criterion for deciding on the veracity of experiment. you can also question the criterion by which you decide to go hunting for this or that bit of experimental evidence, as I doubt your days are filled with reading research papers without filtering.

    everything collapses back to the purpose of this blog: examining the filter you apply to reality.

  • http://profile.typekey.com/1227585392s27146/ Philip Goetz

    Debate teams and moot courts are exercises like this. I don’t think they increase the level of rationality. They teach people to use tricks of persuasion without caring about truth.

  • Ben Jones

    Good piece, reminded me of this.

  • http://www.vetta.org Shane Legg

    Wei:

    No, I’m not talking about a “vague annoyance”, as you put it. In one instance it’s a collection of results that large numbers of researchers cite and use to justify their work, and that I (and some others) think are deeply misleading.

  • http://fourcultures.wordpress.com Fourcultures

    Grid-Group cultural theory might be a way into this de-biasing process, a way of ‘trying not to fool yourself’. It proposes four rival rationalities, not merely one, which are all ‘viable’ on their own terms. Most of our thoughts and many of our social contexts are framed by just one of these rationalities, but the theory helps show how there are more. In other words, the theory offers the contours of not one ‘hypothetical apostasy’ but three (your own view plus the three that define it by differing). Faced with a problem or an issue, it can be reframed according to the four cultures. This can throw up new ways of looking at the issue, and new solutions.
    It shows, for example, that ‘saving the world from destruction’ is a very ‘Egalitarian’ problem/solution, one that probably wouldn’t motivate everyone.
    For those that aren’t taken with grid-group analysis, a similar thing can be done using Fiske’s Relational Models Theory, or even Lacan’s ‘four discourses’. These may be instances of a more fundamental social structuration which tacitly informs very many models of social science, as well as everyday social organisation.
    But as ever, there’s a recursivity to these exercises: “I think Grid-group cultural theory is in error, and here are the four ways it is wrong…”
    The Fourcultures website has plenty of examples of the approach in action.

  • Cameron Taylor

    Just find someone whose views you normally hold in contempt who happens to be arguing for ‘your side’ of an issue. Makes all sorts of apostasy spring to mind without any effort whatsoever. Works for me.

  • mjgeddes

    ‘The Ontological Conspiracy’,
    by Marc Geddes

    Here’s a big exercise to try to ‘snap’ readers out of all their biases, and induce multiple ‘crises of faith’; in fact, the more the reader thinks they know, the more the reader is likely to be shocked after pondering my little tables.

    This is the skeleton outline of the ontology which ‘carves reality at its joints’, by classifying fields (domains) of knowledge in the most natural way possible. The exercise for readers is to try to form as many analogies as possible between knowledge domains, in order to see things in ‘new ways’ and thus question prevailing beliefs.

    Reality

                  Physics           Teleology          Mathematics
    Universal     Laws of Physics   Moral Archetypes   Pure Math Forms
    System        Applied Physics   Psychology         Logic
    Object        Inventions        Culture            Software

    Universal Level

                  Laws of Physics   Moral Archetypes   Pure Math Forms
    Interface     Field Theory      Aesthetics         Set Theory
    Function      Mechanics         Consequentialism   Algebra
    Structure     Geometry          Virtue Ethics      Combinatorics

    System Level

                  Applied Physics   Psychology         Logic
    Interface     Networking        Communication      Analogy Formation
    Function      Thermodynamics    Decision Theory    Bayesian Induction
    Structure     Chemistry         Sociology          Deduction

    Object Level

                  Inventions        Culture            Software
    Interface     Virtual Reality   Art                Ontology
    Function      Engineering       Morals             Object Oriented Programming
    Structure     Nano-Tech         Language           Operating Systems

    The Questioning Exercise

    Use the tables by forming analogies between knowledge domains in corresponding table positions. For example, in the ‘Universal Level’ table, the cell at the (Moral Archetypes, Function) position is ‘Consequentialism’. In the ‘System Level’ table, the cell at the corresponding position (Psychology, Function) is ‘Decision Theory’. So ‘Consequentialism’ maps to ‘Decision Theory’. When the concepts in all the given knowledge domains are carefully examined and as many analogies as possible are made between the ‘triples’ of domains at corresponding cell positions, at a critical threshold of knowledge new ways of seeing may emerge.

    One critical piece of information is required to ‘unlock’ the tables: the knowledge domains exist in a hierarchy, as follows:

    Most Abstract >>> Least Abstract

    Universal   >>> System    >>> Object
    Mathematics >>> Teleology >>> Physics
    Interface   >>> Function  >>> Structure

    The more abstract domains supersede (include) the less abstract ones, as, for example, the outer part of an onion wraps the inner layers. For instance, the tables indicate that ‘Analogy Formation’ is more general than ‘Bayesian Induction’, since ‘Analogy Formation’ is at a higher sub-level on the ‘System’ level. And so on.
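
    For the mechanically minded, here is a minimal Python sketch of the cell-correspondence lookup the exercise describes. The table contents are copied from above; the dictionary layout and the function name are illustrative assumptions only, not part of any established method.

    # Rows are Interface / Function / Structure; column order follows the
    # Physics / Teleology / Mathematics ordering of the tables above.
    LEVELS = {
        "Universal": {
            "Interface": ("Field Theory", "Aesthetics", "Set Theory"),
            "Function": ("Mechanics", "Consequentialism", "Algebra"),
            "Structure": ("Geometry", "Virtue Ethics", "Combinatorics"),
        },
        "System": {
            "Interface": ("Networking", "Communication", "Analogy Formation"),
            "Function": ("Thermodynamics", "Decision Theory", "Bayesian Induction"),
            "Structure": ("Chemistry", "Sociology", "Deduction"),
        },
        "Object": {
            "Interface": ("Virtual Reality", "Art", "Ontology"),
            "Function": ("Engineering", "Morals", "Object Oriented Programming"),
            "Structure": ("Nano-Tech", "Language", "Operating Systems"),
        },
    }

    def triple(row, col):
        """Return the 'triple' of concepts occupying the same cell position
        across the Universal, System, and Object levels."""
        return [LEVELS[level][row][col] for level in ("Universal", "System", "Object")]

    # The worked example from the text: the Function row, Teleology column.
    print(triple("Function", 1))  # ['Consequentialism', 'Decision Theory', 'Morals']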

    Free Your Mind

    As Morpheus said to Neo in ‘The Matrix’…

    FREE…
    YOUR….
    MIND!

  • Anna Salamon

    I just tried this exercise. I found it helpful in letting go of various beliefs and attachments, and would recommend it to others. I started by leaving myself a line of retreat, then attempted an all-out ad hominem attack on my current beliefs, while emphasizing my ability to change.

    I tried a similar exercise a year ago but kept flinching away; I guess it’s progress that I’m no longer prohibitively afraid of this sort of thing.

  • Will Newsome

    I did this and found it useful. It was much easier to write when I realized I should be attacking my algorithm for finding truth and not any of my current beliefs, which I am unlikely to be able to attack in a convincing manner. I expect this would be true for most people.

  • Charlie

    Hmm, I’m finding this quite difficult. The only one I can think of that comes close to working is politics-related, which is crucial because it allows vague claims to sound convincing, initially at least. My most-rehearsed issues are scientific, which makes them much more difficult to make vague claims about.

    Perhaps actually writing it down is a crucial part of this technique, related to the power of masks. But the power of masks doesn’t care whether something is right or wrong, and it doesn’t go away so easily once the exercise is over. I don’t want to try this anymore, actually.

    On the other hand, I could write a killer argument to my past self as to why I should totally write it out!

  • Lenoxuss

    I deeply, passionately believe that the world should not be destroyed.

    Damn.

    (I suppose Fourcultures made a similar point about recursivity — but mine is more explicitly in joke form, so there.)

  • http://www.endecoterrorism.com/ kujirakira

    I tried this in regards to the Sea Shepherd Conservation Society.
    This is the best I could come up with in defense of their violence and hate rhetoric…

    SS aren’t really bigots or racists at all. When they say “jap” or go on a rant bashing Japan, despite never even visiting the place, they’re really just venting their frustrations because they hate their own lives.

    Besides, those yellow squinty eyed small penis bastards have it coming to them for raping the oceans, molesting dolphins, getting their jollies by committing genocide against whales, and of course we can’t forget all that sick shit they did in WWII that has nothing to do with anything. japs are a fucked up society anyway and we should nuke them a few more times just for good measure.

  • Johnny Szmyd

    I’ll write one if Barack Obama and each member of his staff writes one. Also, they must all be published in the NY Times, the L.A. Times, and the Miami Herald.

  • Pingback: Deep Beauty: An Epistemological Journey to the Center of the Universe

  • Pingback: Hypothetical Apostasy on Nutrition

  • Pingback: A critique of effective altruism | The Effective Altruism Blog

  • Pingback: Two questions you won't want to ask yourself but should | 80,000 Hours