Open Thread

Better late than never, this is our monthly place to discuss relevant topics that have not appeared in recent posts.

  • Nicholas Walker

    Assertions: sinful things almost always cost more than clean, simple ones; people pursue wealth explicitly to purchase sinful things since they cost more.

    ——————————

    What is the point of wealth, what does it buy: freedom to sin or freedom to live a clean, gracious life of virtue?

    Our society is screwed up because sin is difficult, and therefore seen as worthy. Sinful things cost more than honest, simple ones. A prominent, prideful, uncomfortable property everyone can see is more expensive than a comfortable one out of view. Taking a date to an ice cream shop costs $5, whereas taking a date to a trendy night club can cost $100, or a lot more.

    Some of this is supply and demand: you can enjoy clean, simple things everywhere but sin is sequestered, leading to scarcity.

    Also, some virtuous things cost more. Systematically, I’d say the following:

    A. paying more now to pay less later is virtuous (land above sea level in Manhattan)

    B. paying more now to pay more later is sinful (needing surgery to correct excessive drinking and smoking)

    The ancients felt a man who didn’t need to lie, be mean, cheat, or resort to usury was nobler than one who did.

    Society has tried, in the past, to make virtue worthy by restricting high office and certain status to clean individuals, but many today opt out, thinking public office is out of reach, boring, or reserved for people of a different sex, race, religion, or income.

    Conclusion: we won’t fix many of our problems until we find a way to make virtue worthy again for all classes of society.

    • Petar Subotic

      Insert here a reply pointing out that the solution lies in contradicting such rigid assertions. Debate follows, world is fixed. Hooray!

    • http://profiles.google.com/prakash.chandrashekar Prakash Chandrashekar

      Donate to SENS. The effects of negligible senescence will make virtuous behaviour worthwhile in the long run.

  • Late to the party

    I was reading Walter Lippmann’s “Public Opinion” and these short sentences reminded me of Robin’s “Culture is Far” post.
    “For the self which takes charge of the instincts when we are thinking about (…) distant acres is very different from the self which appears when we are thinking even potentially as the outraged head of a family. In one case the private feeling which enters into the opinion is tepid, in the other, red hot. (…) control of cravings is fixed not in relation to the whole person all the time, but more or less in respect to his various selves. There are things he will not do as a patriot that he will do when he is not thinking of himself as a patriot.”

  • http://juridicalcoherence.blogspot.com/ srdiamond

    For Robin in his endeavor to become a book author: a novel writing principle derived from construal-level theory.

    Emphasis by brevity of sentences, paragraphs, and sections — http://tinyurl.com/al8ruo5

  • Ely Spears

    Why do we tend to only write to-do lists for near-mode goals? Get certain groceries, write certain unit tests, make certain appointments, etc.

    The sum total of all these crossed-off near-mode list items yields our ultimate far-mode legacies, but how correlated are these eventual far mode accomplishments with the ideal far mode legacy we wanted to have back when we were writing the to-do list?

    We often are advised to think of our far-mode and long term goals, then draft up more bite-sized actions that, when chained together, will achieve the longer term goal. But my experience is that even though this sounds good, people don’t like to actually turn far mode goals into a chain of near mode items.

    This could just be laziness or other human vices. But I am interested in any other speculative explanations for why we avoid far mode to-do lists.

    • Robert Koslover

      I think we avoid far-mode to-do lists because we literally don’t need them.  Consider an example (just an example) of such a list:
      1.  Go to college, work hard, and obtain a meaningful degree.
      2.  Find and marry a beautiful and sensible person.
      3.  Have two children and be a good parent to them.
      4.  Save for retirement.

      Now, who really needs to write down that sort of thing?  Can you honestly say that you might otherwise fail to remember such plans?

      • Ely Spears

        So your claim is that to-do lists function only as reminders. I tend to use them as a way to break up a problem into easier-to-manage tasks that, conditionally chained together, improve my odds of achieving the bigger goal.

        So a to-do list for the goal of getting a bigger annual work bonus might include hundreds of items and be amended dozens of times. If I made a similar list for affording home ownership quickly in my life (I’m young, with savings not yet at home-ownership level), it might contain hundreds of items as well, ranging from learning about real estate law and taxes to methods of forecasting home prices in areas I care about.

        Yes my plan would be amended a lot, but maybe I would own more or better property (if this was a main goal of mine) than I otherwise would if I merely worked hard and assumed the chips would fall right for me to own property one day.

        So I think there are many plausible benefits to viewing your long term life goals as a (best guess that gets often amended) list of many many concrete steps.

        There’s obviously some complexity trade off. I can’t spend too much time maintaining lists and progress measures. But that doesn’t seem to explain the almost zero attention paid to itemizing far mode goals.

        It seems more likely that there’s something else. Here’s one guess: it makes it harder for us to view ourselves in grand ways that exemplify our social norms if we are just “crossing items off a list.”

        Being a good person, raising children successfully, “making a difference”, inducing a paradigm shift. It seems like we more just hope we achieve these things than that we ever plan to. Then we rationalize that whatever we did achieve, to some extent, is grand.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Many self-help gurus do stress explicit long-term goals. One reason most people ignore this advice is that the process of writing a list is itself inherently near-mode.

        But the question is whether it really helps. The processes by which far-mode affects near-mode thinking are mostly unconscious. (http://tinyurl.com/7d2yh6x ) Being too managerial about pursuing long-term goals may be self-defeating, for the same reason conscious attention to other unconscious processes is.

        The main problem for far-mode goals is coming really to want to achieve them rather than delineating them precisely.

    • Drewfus

      Is present-mode both real and distinct from near-mode?

      The reason I ask: the symbolization of our desires is known to enhance future orientation in decision making and the capacity for delayed gratification. Example paper: http://academic.research.microsoft.com/Paper/5284973 What would be the effect of symbolizing present-mode goals by writing them as near-mode lists?

      * Smoke cigarette at 10.30am
      * Drink coffee at 2.00pm
      * Eat ice-cream at 8.30pm
      * Drink whiskey at 10.00pm

      Near-mode planning of ‘vice’ consumption might make these things less interesting due to a partial shifting from concrete to symbolic form.

    • Andreas Moser

      I don’t write any to-do lists. I just DO things: http://andreasmoser.wordpress.com/2011/12/30/new-years-resolutions/ 

      • Ely Spears

        It seems like you made a pretty detailed list at the bottom. FWIW (not much I suppose) I think most of those bulleted items are really bad ways to try to achieve goals, and if you look at professional literature on, e.g. quitting smoking, going cold turkey has a poor track record. And going to find a random homeless guy near you to help just because you want to help the poor is pretty inefficient. These are exactly the kinds of problems where some patient to-do list style restraint would be hugely beneficial. In fact, it is because most people try to do it in these ways that they are not successful in achieving the longer term goals.

  • Drewfus

    I have a semi-formed notion of what a ceremony is.
    —-
    A culturally learnt act involving multiple individuals has two components:

    1. A grammatical component. The structure and sequencing of the act: who acts, what is acted, and the relative ordering of each individual’s actions. Grammar can hint at meaning due to pattern association.

    2. A semantic component. The real meaning of what is acted.

    Understanding a cultural act involves grasping each component, perhaps both as separate entities and/or as a unified concept. The grammatical and semantic components are normally ‘cognitively complementary’ – so the cultural act or scene is understood effortlessly and without recourse to a culturally sourced interpretation.

    Not so in the case of a ceremony. In a ceremony, the grammatical and semantic components are contradictory. The brain grasps the cultural act in two ways, incompatible with each other. The conscious/executive part of the neocortex wants a single interpretation, however, so cognitive rivalry occurs – similar in concept to binocular rivalry (http://en.wikipedia.org/wiki/Binocular_rivalry). This interpretation rivalry is avoided by culturally available stories which help the brain ‘make sense’ of what is otherwise ambiguous.

    Under http://www.overcomingbias.com/2012/11/zitzewitz-the-wise.html I commented:

    The ceremony of gift giving is an opportunity for the recipient to show they care about the giver, by faking a positive emotional reaction to and gratitude for the giver’s gift. The faking is understood by both parties as being a fundamental part of the ceremony, as is the relative worthlessness of the gift to the recipient, which precludes the recipient’s reaction from being a genuine one.

    Grammatically, the actor handing over the purchased item is the giver, and the taker of the item is the receiver. Semantically, the actor who performs “the act that counts” is the giver, and the intended observer of the act is the receiver. Ready-to-hand interpretations, conveyed culturally, bias our understanding in the desired direction so that the real meaning of gift giving stays out of our consciousness.

    The concept of ceremony might extend beyond what we explicitly refer to as ceremony, as might my interpretation of it.

  • Drewfus

    Michael Mosley has done another great science doc series for the BBC. In the first episode, ‘Into the Mind: Emotions’ (http://www.sbs.com.au/documentary/video/2308028642/Into-The-Mind-S1-Ep1-Emotions), Mosley talks to psychologist Lynn Rosenbaum, who worked with Harry Harlow (http://en.wikipedia.org/wiki/Harry_Harlow). Harlow later conducted experiments (1960s) on monkeys to determine the effects of long-term isolation and confinement. Rosenbaum defends all but the most severe of these experiments, based on the value of building “a monkey model of depression”. Mosley sums up the episode by saying that the experiments carried out by Harlow, and the fear experiments of John Watson on Little Albert (also covered), would be impossible to carry out today for ethical and legal reasons, nor would he ever allow such experiments to be carried out on his own children (whom we see). However, Mosley states “… but do I think it was worthwhile in the end? Yes, I do. I’m glad it was done. I do believe the knowledge that was gained was worth the price.”

    Another example of terrible hypocrisy?

    So what do we say of those who advocate the banning of experiments that have potentially long-term psych consequences for the participants, even if the data could help with huge issues such as depression, and even if the participants are not humans? Who benefits most from these bans, and who pays the price?

    My definition of the banning situation:

    Experimental restrictions based on ethical doctrine are social luxuries, conveyed in a manner that makes current-era decision makers appear morally superior to their predecessors – made feasible because the cost of the luxury falls broadly and hypothetically.

    In short, the non-hypocrites are signallers instead.

  • http://juridicalcoherence.blogspot.com/ srdiamond

    Against Voting for Comments
    The karma system destroyed LW.
    It promotes ideological signaling (I use “ideological” broadly, to include what passes for “rationalism.”) It seems inevitable that people vote primarily based on agreement rather than quality, at least when voting is anonymous. (It’s taken for granted that vegans upvoted a criticism of Carl Shulman’s recent posting.)

    The voting system on OB is much less pernicious than LW’s. Get rid of it. (Robin, enough of your loss-aversion and endowment-effect-driven tendencies!)

    • dmytryl

      The downvoting in such systems is particularly obnoxious: a few frequent posters just vote on everyone, attaining disproportionate ‘power’ (especially as any comments at minus are read much more uncharitably and are strawmanned much more as well). It’s easy to notice if you create a pseudonym account under which you present the same views that were previously downvoted – the group of frequent posters and self-appointed internet police will for a while not track the new account, and it will accumulate positive karma if the community in general agrees with the views.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Yes, but let me clarify my point, which on rereading isn’t completely clear. Despite the fact that LW’s system is more obnoxious, OB’s is counterproductive too–for the same reasons, only it’s a smaller effect, hence harder to appreciate.

      • dmytryl

        It might be.

        Though, LW is a very special case. At the end of the day, it’s the place where Yudkowsky separates the gullible from their money.

      • VV

         An upvote-only system doesn’t seem to be particularly problematic: if you agree with something, clicking “like” is more efficient and generates less clutter than writing a comment just for applause, although I prefer named “likes” as in Facebook.

        LW’s system, with unlimited anonymous downvotes, downvoted-comment collapsing and a reply penalty (and a ban if you have negative karma), is clearly designed to enforce conformity.

      • dEMOCRATIC_cENTRALIST

        if you agree with something, clicking “like” is more efficient and generates less clutter than writing a comment just for applause,

        The problem I see is that it encourages hypocrisy. Upvoting is ambiguous and is offered (as Robin proposed it) as a measure of quality, not of agreement. Writing a comment for applause is a bad practice, but do intellectual readers require substitute gratification?

        The ultimate hypocrisy to which upvoting leads is in fact shown on LW. There, posters are showered with karma for making financial contributions. So voting is supposed to reward quality contributions – but awarding karma for announcements of contribution belies that claim so grossly that it’s repugnant to witness the Indulgence-selling in which this new religion engages.

      • Anonymous

        “At the end of the day, it’s the place where Yudkowsky separates the gullible from their money.”

        That doesn’t explain why they’ve hired *additional* staff in recent years…  Take gwern, for instance.  He’s a remote researcher, so there’s no way he could be some kind of crony in real life.  And he was posting on LW forever before he got hired, and his website gwern.net lists him as a researcher for them.  So this is pretty much completely incompatible with the “ZOMG EY = EVIL” narrative.

      • dmytryl

        Anonymous: There may be a zillion reasons for hiring more staff; e.g., it was reasoned that having more staff writing LW articles will increase income in the long term. The question is about the donors, though: you have to be gullible to believe people with no actual background of any kind are the best option (compared to other options, including keeping money in the bank and donating to a more credible group if one shows up).

      • gwern0

        > It’s easy to notice if you create a pseudonym account under which you present the same views that were previously downvoted – the group of frequent posters and self-appointed internet police will for a while not track the new account, and it will accumulate positive karma if the community in general agrees with the views.

        That’s not what I recall you saying after your last sockpuppet was unveiled.

        And I’d note there’s a less flattering explanation: new accounts are simply treated with some forbearance.

    • Drewfus

      The alternative to ideologically motivated voting is more ideologically motivated commenting.

      The demand to signal ideology is not eliminated by regulating it – that only changes its mode of expression. The voting system is a useful drain.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        That’s strange, since you’ve said earlier that anonymous voting is obviously not a signaling method–just because it’s anonymous.

        First, anonymous voting isn’t signaling, so it doesn’t drain signaling. Second, voting reinforces ideological signaling. Can there be any doubt?

        You seem to have a drive theory of signaling, which doesn’t take account of how signaling is primarily a response to opportunity.

      • Drewfus

        A drain is not an opportunity. Relief is not expression.

        “You seem to have a drive theory of signaling, which doesn’t take account of how signaling is primarily a response to opportunity.”

        Signalling is what we do to appear visible while intending to be anonymous. ‘Outing’ a signaller amounts to falling for the trap. How much this behavior is drive- versus opportunity-motivated, I’m not sure, but I think it most likely exploits the region of our cognitive limits, including those of the ‘signaller’ – part of this being the capacity for self-deceit.

        Seriously, it is hard to become fully aware of our most socially intelligent behavior. If it were relatively easy, we would surely be capable of even more devious and cunning behavior than that which we readily understand. It becomes a chicken and egg problem, except what is interesting is not what comes first, but what comes last.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Drewfus,

        To speak near-mode for the moment, it seems clear to me that one effect of voting on posts, given that voting is mostly ideological, is that it reinforces ideological signaling. You give the right signal and get upvoted, then signaling is being rewarded above and beyond its usual due. It’s ordinary reinforcement learning.

        So what would create a greater countervailing effect? Perhaps people have a certain drive to assert their ideologies, and this can take place through either voting or commenting. So, when a commenter criticized the OP by signaling “animal rights,” if the five agreeing posters hadn’t had the opportunity to express their ideology by upvoting, they would have been more likely to do the same by commenting.

        I would bet on the first; you on the second (if I understand correctly). 

      • Drewfus

        “it seems” is subjective.

        “You give the right signal and get upvoted, then signaling is being rewarded…”

        A rewarder has higher status than a rewardee, so in this model you can grant yourself higher status.

        A peer model; when we see something we agree with and/or that supports our worldview, we feel vindicated in affirming our identities (in this case by voting). More broadly, being confident in this model would mean a general sense of feeling that our identities are socially validated.

        “I would bet on the first; you on the second (if I understand correctly).”

        By the first, you mean that the non-possibility of voting would not increase commenting (but would block ideological signalling)? Well that might be the fundamental difference between Left and Right – the Left sees bad behavior as being rooted in social opportunities that, when eliminated, take the bad behavior with it, whereas the Right sees the propensity for bad behavior remaining, after specific opportunities for bad behavior are removed. Example:

        Left: Removing slot (poker) machines would eliminate all the problem gambling associated with these machines.

        Right: Removing slot (poker) machines might help a few problem gamblers, at least in the short term, but it does nothing to reduce risk-seeking behavior, which exists independently of any technology and probably of society.

        I would agree mostly with the second view, although risk-seeking behavior probably has a (low) status element to it – so it is partly social in that sense – but also some of the really bad cases of gambling addiction would be limited if some of the more mindless forms of gambling were eliminated. However, another component of the status relation to gambling regards customer service – being served at your seat by pretty girls and handsome men must be a considerable ‘status trip’ for many, so instead of restrictions and regulations on the machines, I would advocate making people go to the bar for their drinks and payouts, to remove that opportunity of feeling special.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Removing slot (poker) machines might help a few problem gamblers, at least in the short term, but it does nothing to reduce risk-seeking behavior, which exists independently of any technology and probably of society.

        Well, you could help direct risk-seeking into more socially desirable directions. For example, people might play the prediction markets instead of the slot machines.

        On the general point, are you saying that it makes no difference whether there’s voting or not? Or that voting is helpful? What about having two votes, on ideology and on quality? Would that present more or less catharsis than one vote? Why is the status quo just right? Or does it just not matter? (which I think is the implication of your argument).

        On definitions, you aren’t defining the “Right” but libertarianism. The conservative right (and especially the fascists) are more “paternalist” than the left.

      • Drewfus

        “For example, people might play the prediction markets instead of the slot machines.”

        RH wants experts involved in prediction markets, not ‘gamblers’.

        Re voting: It would be irritating for a commenter to see their comments regularly up-voted for remarks that amount to ‘taking sides’, but ignored for comments that are more original or speculative. Maybe ‘interesting’ would be a useful category, but then if you find a comment interesting, participation by commenting would be preferable to merely voting. On the other hand, RH might not want to encourage freethinkers – http://www.overcomingbias.com/2012/08/shoo-freethinkers.html

        Maybe voting encourages commenting, so in that sense it might be useful in getting marginally motivated readers to comment, but then comments that support commonly held positions or beliefs are much more likely to get upvoted than commenters’ own ideas. That creates a bias – something we are trying to overcome.

        “On definitions, you aren’t defining the “Right” but libertarianism. The conservative right (and especially the fascists) are more “paternalist” than the left.”

        That just means the conservative Right are to the left of Libertarians. Last time you told me that the essence of being Right is – defending inequality. So why be paternalistic about problem gambling rather than defending gambling industry profits? Also, I’ll put it to you that the equating of Conservatism with Fascism with Libertarianism is politically motivated, designed to harm the reputations of those who support political and economic freedom by associating them with those who do not, self-justified by references to the Right’s supposed defence of higher socio-economic classes, in spite of:

        1. Understanding society through the framework of classes is a Left-wing trait – the Right has no classes to defend because the concept of social classes is not essential to Right-wing thought.

        2. The Right is convinced that Capitalism benefits all social demographics approximately equally. That Capitalism works for society is the Right’s position, not that it works for “the rich” – so there are no arbitrarily favored social groups to defend.

        3. The greatest specific example of ‘defending inequality’ in world history was TARP. The Democrats voted for it, the Republicans against.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        RH wants experts involved in prediction markets, not ‘gamblers’.

        Rubes help grease the wheels of the gambling industry. You need money in the pot to motivate the experts.

        The greatest specific example of ‘defending inequality’ in world history was TARP. The Democrats voted for it, the Republicans against.

        The Democrats aren’t “left.” They’re basically pseudo-left (although I’ll grant they are marginally left of the Republicans overall). I opposed TARP from the left, as did almost all the far left. But the “greatest defense of inequality of all time?” I don’t think so. If you really want to know, the greatest victory for inequality the world has known was the fall of the Soviet Union.

        That just means the conservative Right are to the left of Libertarians. Last time you told me that the essence of being Right is – defending inequality. So why be paternalistic about problem gambling rather than defending gambling industry profits?

        You can invent your own neologisms, but left and right have a history dating from the French Revolution. It isn’t that subject to ideological redefinition.

        Why does most of the right want to suppress gambling? Because debauched workers aren’t good employees. And, secondarily, to avoid wrecking the myth that people make money because they’re better rather than because they’re lucky.

        I’m not for suppressing gambling. Gambling is in part an aspect of human nature; in another part, an outcome of economic desperation. In neither case is police action the answer.

      • Drewfus

        “Rubes help grease the wheels of the gambling industry.”

        An important difference between risky investments and straight-out gambling is the betting rate. The betting rate declines in the order: slot machines > horses > stocks > houses. Prediction market betting, I’d guess, might have an average rate just a bit higher than stocks, at most.

        “The Democrats aren’t “left.” They’re basically pseudo-left (although I’ll grant they are marginally left of the Republicans overall).”

        The Democrats supported the bill based on a fear of, or belief in, systemic risk – a type of market failure. The Republicans rejected this, which, given that most of the systemic-risk claims came from those who would benefit most from the passing of the bill, and that the vast majority of the public disapproved, was a great credit to them.

        Define ‘pseudo-left’ – or does ‘pseudo’ refer to the stuff you’re embarrassed about?

        “I opposed TARP from the left, as did almost all the far left.”

        Interesting. I would have guessed you had supported it as an example of the Left ‘saving Capitalism’. Why did you feel the need to oppose it from the Left, rather than on (lack of) merit? Is your political identity more important to you than good public policy? Reminds me of that well-known Liberal figure who recently came out in favor of abortion, but was desperate to explain to everyone how this could still be understood as being pro-Left-wing. Why did she care so much about her identification with a political abstraction? Something to do with purity, perhaps?

        “If you really want to know, the greatest victory for inequality the world has known was the fall of the Soviet Union.”

        I see, although there is much more intentionality regarding TARP and its outcomes than there is in the fall of the Soviet Union, which was relatively accidental.

        “You can invent your own neologisms, but left and right have a history dating from the French Revolution. It isn’t that subject to ideological redefinition.”

        Okay, so Left means radical, and Right means royalist. Sounds very appropriate for today’s world! Are you saying that the authority for defining labels such as Left and Right belongs to political science academics? Aren’t they mostly of the Left?

        “Why does most of the right want to suppress gambling? Because debauched workers aren’t good employees.”

        So without the paternalism of the conservative Right, the entire employee class would be addicted gamblers? Except that gambling is not ‘suppressed’, and the average worker is not debauched, so why make the claim?

        “And, secondarily, to avoid wrecking the myth that people make money because they’re better rather than because they’re lucky.”

        Right in the Goldilocks zone. Everyone is gullible enough to fall for the Capitalist propaganda, has just enough nous to be suspicious of the meaning of gambling industry profits, but is also too lacking in self-control to avoid gambling addiction. Best then to suppress the industry and kill two birds with one stone. Make everyone (other than the political masters) a bit smarter or a bit dumber, and your depressing worldview collapses.

        “I’m not for suppressing gambling. Gambling is in part an aspect of human nature; in another part, an outcome of economic desperation. In neither case is police action the answer.”

        Saying things like that could get you labelled a closet right-winger!

        The truth is that there are people on both the Right and the Left that want to minimize access to gambling opportunities, because of the harm done by addicted gambling. There is genuine concern, not conspiracy.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Saying things like that could get you labelled a closet right-winger!

        You really have no understanding of the far left. Maybe there isn’t any in Australia; it’s a pretty rightist place, isn’t it (although you do at least have a nominal labor party)?

        If you think the state is run by capitalists in their own interest, you aren’t likely to call upon that state to mount repression. I doubt you could find a far leftist who disagrees with me that we don’t support the state’s effort to impose “morality” on ordinary people. For example, opposition to antidrug laws is the norm.

        The truth is that there are people on both the Right and the Left that want to minimize access to gambling opportunities, because of the harm done by addicted gambling. There is genuine concern, not conspiracy.

        Then by your definitions, Left is Right. Welcome, Orwell! (You had framed the distinction as situation versus human nature. Opportunities don’t count for much, you said.)

         

  • http://juridicalcoherence.blogspot.com/ srdiamond

    The Future of Humanity Institute and GMU are said to be “sponsoring organizations” of OB? What does that mean? Do they contribute money, endorse the content, lend their name to give the site respectability, or what?

    (Secondary question: What is a tenured associate professor with two sponsors doing asking the general public for financial contributions? This seems almost as tasteless as the Singularity Institute milking people while Yudkowsky makes a hundred forty thousand dollars a year (at least per a public report responding to questions on LW).

    I ask in part because it took me a few years of lurking on LW (OK, I’m obtuse) to realize that LW was a front for the Singularity Institute–something LukeProg is at least being open about now. Still more time to realize the Singularity Institute was a front for Peter Thiel.)

    • http://overcomingbias.com RobinHanson

      Neither FHI nor GMU pays me to write this blog. University professors ask for public donations to support research all the time.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Thanks for the response.

      • VV

        Neither FHI nor GMU pays me to write this blog.

        So their sponsorship consists in a mere endorsement?

    • dmytryl

      Wow, Yudkowsky makes this much?

      Some people really are naive. I mean, the baseline probability of psychopathy is what, 3% or so? It doesn’t take much weirdness associated with psychopathy to get to >50%.

      • VV

        According to a quote in this list: http://kruel.co/2012/05/13/eliezer-yudkowsky-quotes/ Yudkowsky urges people to donate a substantial fraction of their discretionary income to his org because their mission is so important that every dollar sent to them could make the difference between the future where our whole species is wiped out by evil robots and the one where we become an intergalactic civilization (seriously).

        In his words: “I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays”

        I suppose that a large, air-conditioned house in one of the most expensive places of the planet must be an exception…

      • dmytryl

        But you see, he’s the only one working on saving the world, and hence deserves it, and the niceness of the house contributes to how well he works. Or something.

        This whole thing is just nuts. What exactly is the chance that the world-saving person would be conventionally *such a wrong* choice of a person to donate to, on every single conventional metric? Vanishingly small, that’s what it is.

        Then there’s this idiot calculation of 8 lives per dollar; they literally had someone suggest this at the summit (I might be confusing two different people, but I think that person later suffered a mental breakdown). Okay, suppose that if you assume some hypotheses about the future, you get 8 lives per dollar from donating to guys who look so bad on conventional metrics. OMG, you should donate? No, of course not: you need to recalculate how much money in the bank is worth, combined with the strategy of donating to something that has a lower probability of being a scam. You would get 8/epsilon, i.e. an incredibly huge number, for money in the bank. I guess this is tied to the whole ‘self improvement’ and ‘rationality’ stuff, which is largely concerned with convincing you that you ought to ‘update’ – when your innate system would not update, because the partial updates generate huge superfluous utility differentials in favour of a simpler path when two reasoning paths meet.
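
        To make the comparison gestured at above concrete, here is a minimal Python sketch with entirely made-up probabilities and payoffs (none of these figures come from the thread): a huge payoff that is conditional on the organization being legitimate can still lose to a modest, near-certain payoff once you discount by the chance of a scam. The point is only the shape of the calculation, not the particular numbers.

        ```python
        # Hedged illustration, not from the thread: a toy expected-value comparison between
        # donating a dollar now to the doomsday org versus holding it and later giving it
        # to a better-vetted charity. Every number below is invented for the example.

        def expected_lives_per_dollar(p_payoff: float, lives_if_payoff: float) -> float:
            """Expected lives saved per dollar, discounted by the chance the donation pays off at all."""
            return p_payoff * lives_if_payoff

        # The pitched figure: 8 lives per dollar, but only if the org is legitimate and effective.
        donate_now = expected_lives_per_dollar(p_payoff=0.001, lives_if_payoff=8.0)

        # The alternative described above: money in the bank, donated later to a more
        # credible group with a modest but far more certain payoff.
        donate_later = expected_lives_per_dollar(p_payoff=0.95, lives_if_payoff=0.05)

        print(f"donate now:   {donate_now:.4f} expected lives per dollar")
        print(f"donate later: {donate_later:.4f} expected lives per dollar")
        # With these assumed numbers the hold-and-donate-later option comes out ahead,
        # which is the shape of the objection to the 8-lives-per-dollar pitch.
        ```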

      • dEMOCRATIC_cENTRALIST

        I suppose that a large, air-conditioned house in one of the most expensive places of the planet must be an exception…

        It’s trickle-down libertarian economics, don’t you understand? :)

        Indeed, I think the largely libertarian readership of LW is the reason there’s not more dismay about E.Y.’s “requirements.” Ayn Rand lives: who can fault selfishness? And those who contribute – I don’t feel so sorry for them. They do it because they really believe they’ll see the Singularity in their lifetime, and they think they’ll be awarded privileges for furthering the cause.

      • VV

        @dEMOCRATIC_cENTRALIST
        According to the last survey, most LWers (at least those who took the survey) define themselves as either Liberal (36%) or Socialist (27.5%), while only 30.3% are Libertarian: http://lesswrong.com/lw/fp5/2012_survey_results/ . Although you may not realize it from reading the forum, since political discussion is largely banned.

        Anyway, I don’t think the issue at hand is selfishness. If Yudkowsky really believed his stuff, he should be working on it with maximum effort even if his motives were purely selfish: Who would want to be killed by evil robots? Especially when you think that any single dollar of funding could make the difference between the evil robots scenario and the scenario where your largest problem is deciding whether to spend your next holiday on Andromeda or the Triangulum Galaxy.

        Yudkowsky could easily halve his salary and free resources to hire an additional “researcher” while remaining well above a subsistence income. He could move himself and the whole operation to a less expensive area (it’s not like whatever they are doing requires them to be based at any specific location) and hire even more staff and fund more “research”. That would lower his current lifestyle, but hey, if the payoff for every dollar of SI funding is so ridiculously bigger than anything else that dollar could buy, why is he syphoning resources to afford his petty luxuries? After all, he himself urges people to sacrifice their lifestyle in order to fund SI.

        His behaviour betrays the fact that he doesn’t really believe in what he claims to believe. He might not realize it at a conscious level, I’m sure that he can come up with a thousand excuses and rationalizations for why he needs such a large salary to fulfil the SI mission, but at the end of the day, at the level of the true beliefs that guide his actions (the “flour on the dragon” level), he knows that it’s all nonsense.

      • dmytryl

        VV: I agree, they may not have such coherence between what they ‘believe’ and what compels them. Yudkowsky, as you noted, is not compelled by that to try to live cheaply. Muehlhauser is not compelled to be a bit more modest and sensible when sending green ink to, e.g., Pei Wang, the main editor of an important AI journal, where one might someday want to publish something.

        But do they believe they are deliberately scamming people or do they believe they are saving the world? Probably the latter, IMO. The former would be somewhat painful to believe, and what would compel one to endure that pain?

      • dEMOCRATIC_cENTRALIST

         V.V.

        I’ll admit I’m surprised by the survey results. (I wonder if he has any socialist donors.)

        I think you’re right about the inconsistency. (I find it easy to lose sight of E.Y.’s actual claims.)

        So, why don’t LWers care? Again, I admit I find it mysterious. Maybe Luke Muehlhauser, for our edification, will provide some justification.

        (E.Y.’s being based in Berkeley may well be connected with the Bay Area location of his primary donor, Peter Thiel.)

      • http://juridicalcoherence.blogspot.com/ srdiamond

        I wrote:

        So, why don’t LWers care? Again, I admit I find it mysterious. Maybe Luke Muehlhauser, for our edification, will provide some justification.

        Here’s a theory. They don’t believe E.Y.’s hysterical claims; nor do they expect E.Y. to believe them. They think the claims are necessary PR hyperbole. It’s all for a good cause.

      • dmytryl

        Here’s a theory. They don’t believe E.Y.’s hysterical claims; nor do they expect E.Y. to believe them. They think the claims are necessary PR hyperbole. It’s all for a good cause.

        I think it’s more complicated than this, and you’re assuming too-strong powers of social reasoning. They do at some level believe, and at some level they don’t believe, except the level separation system is working correctly in Yudkowsky (he’s profiting on a difference; that’s probably why one can believe and not believe at the same time) and incorrectly in those who donate (excluding Thiel, who just loves to throw money around), i.e. they are losing on the difference.

      • Anonymous

        If EY really is SI’s top researcher, it makes sense to pay him extra to keep him happy and productive.  Also, $88K is really not much money by Silicon Valley standards–kids who are graduating from my university’s compsci department are making $90K starting.

      • dmytryl

        Anonymous: this still wouldn’t answer the question of why he wouldn’t forfeit some of the pay for the sake of decreasing the risk of future evil robots and increasing the probability of him living awesomely, yet asks that of others.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        My take is that they do at some level believe, and at some level they don’t believe, except the level separation system is working correctly in Yudkowsky

        I don’t think we disagree in substance; you’re just speaking more psychologically than I was. :)

        I should point out that what you’re saying fits perfectly the Hanson model of hypocrisy. Far-mode, according to Hanson, is adapted for social hypocrisy and for that purpose evolved as a system separate from near mode. In far mode they believe; in near mode they don’t believe. (I differ from Hanson in that I would say far mode is commonly corrupted for that purpose rather than that it fundamentally serves that purpose.)

        [I have a new near-far posting: “Societal implications of ego-depletion theory and construal-level theory: Ignored transaction costs and proliferation of electoral events” http://tinyurl.com/cgnt4lq ]

        The question we’re left with in Hansonian terms is what’s the real function of “existential risk” charities. If ordinary charities are to signal a readiness to help others (so they will help you), what are existential charities for? I leave that, for the moment, to you and Robin.

        But Thiel is getting the best of the deal. He financed Yudkowsky at more than 80 thousand dollars per annum while Yudkowsky, for this sum, didn’t do programming but wrote the Sequences. This was so much E.Y.’s sole task during these years that the discussion on LW that I cited (mistakenly in part) thought that E.Y. had received an extra bonus of the same sum for that task alone. This extraordinary investment of time and money in a blog allowed Yudkowsky to obtain a coterie of blind followers, while Yudkowsky himself became completely dependent on Thiel for his livelihood. LW is Yudkowsky’s creature, while Yudkowsky is Thiel’s. Thiel (unlike Yudkowsky) is a smart guy; I think he knows what he’s doing and is pleased to have a puppet with a following available should he need one. Why do you suppose that Thiel spent his money on seeding a blog, with nothing technically noteworthy in the offering?

        Billionaires are dangerous, smart or dumb but, of course, particularly if smart. And Thiel’s a smart billionaire.

      • dmytryl

        I think ‘near far’ attaches too much baggage that may not be true. Even the ‘levels’ wording probably does. It suggests that one is closer to the core. One could, for example, process a dubious proposition ‘conservatively’: assume the proposition only when that leads to a win regardless of the correctness of the proposition. Or one could be conditioning-reinforcement trained, through life, not to be compelled by one’s own conclusions if they are predominantly nonsense, if doing so predominantly results in some form of pain.

        With regard to donors not caring, I recall there was a discussion thread regarding Yudkowsky’s okcupid profile, when they were discussing the NYT article about them. One of the actual donors sort of wondered why Yudkowsky wouldn’t sacrifice his desire to make a weird profile in the name of whatever it is that they are supposed to sacrifice money for (it wasn’t worded like this, though), which was met with much irrational discourse (obviously).

      • http://juridicalcoherence.blogspot.com/ srdiamond

        I generally agree, but I just think ‘near far’ attaches too much baggage that may not be true to propositions not dependent on the correctness of that baggage.

        Without any baggage (including your levels), I think it’s what I first said: one set of PR attitudes they promote (or signal through donations) “honestly” while segregating it from the rest of their lives.

        The question is whether construal-level theory’s claimed insights and predictions apply in reality. They can be “tested” in the laboratory or against personal experience. 

        One thing it clarifies for me is why I could be so naive as to fail to recognize that Less Wrong was a Singularity Institute front. Well, in the excessively near-mode discussion that dominates Less Wrong (imposed by the politics ban), the Singularity just doesn’t much come up. Isn’t that strange? If the Singularity is a far-mode concern, then construal-level theory predicts that it will be underrepresented in near-mode discussions. Thus most of the ultrapractical matters LW is dominated by would be largely irrelevant if we’re just waiting for the Singularity. Yet that doesn’t filter into the discussion.

        Also, consider the astounding OkCupid profile. What astounded me about it most was its utter honesty. (This did or should have persuaded me that Yudkowsky is no psychopath.) Yudkowsky is really a near-mode fellow, and to him, being honest is being near-mode honest. He tells what, in practical ways, are the achievements of which he is proud. It’s NOT a matter of being the only one with the foresight to focus his life on the Singularity. Or even the fact that he has done so, or that he calls upon others to. It’s the comparatively modest truth that he wrote an article a day for two years to put Less Wrong on the map. Why does the Singularity not appear, when it might even benefit him? Because he is describing himself in near mode, and far-mode concerns just don’t come up.

        Those are two ways that immediately come to mind that applying construal-level theory (with a dose of homo hypocritus) provides insight into LW.

      • Anonymous

        Here’s what Yudkowsky himself seems to believe: http://lesswrong.com/lw/5il/siai_an_examination/43bs

        I’d prefer to pay him less money, but I don’t think his current salary is unreasonable, and I don’t think it’s strong evidence that Yudkowsky is a psychopath.

        I’ve never really seen the SI exhort donors to sacrifice their own cost of living to donate more.  And I’ve hung out with SI folks a lot in real life.  (They certainly don’t come across as psychopaths–I’m good friends with about 1/3 of the staff at SI.)  I think they’d prefer it, just like how practically any nonprofit thinks its mission is important and would prefer to receive more funds, but I’ve never seen them exhorting anyone to do that.  Sure, they give you the info necessary to fill in the blanks if you decided that’s what you want to do, but no more.

        Also, this is pretty much the first time ever that I saw a charity criticized just because they were located in an area with relatively high cost-of-living.  You guys are really holding SI to high standards!  Moving to a different area could be a good idea; maybe put it in lukeprog’s anonymous feedback form?  On the other hand, that will probably make them seem even more like weirdos, and as you guys are demonstrating, seeming like a weirdo can be a BIG problem.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        I’d prefer to pay him less money, but I don’t think his current salary is unreasonable, and I don’t think it’s strong evidence that Yudkowsky is a psychopath.

        I’m sure he’s not a psychopath (although I’ve recently been wrong about other things I’ve been sure about).

        What I find odd (but which construal-level theory helps explain) is 1) that Yudkowsky wants to be paid an otherwise reasonable salary – the reasoning you cited doesn’t seem to explain his primary requirement, a large dwelling, when he is working for a cause that would be furthered by his self-denial; and 2) that you begrudge him his “reasonable salary,” given the importance of the work he is doing. (You should question his sincerity but not begrudge him the meager payment for his lofty work, if you get my distinction.)

        Salary is a near-mode question. Both of your attitudes are rather untouched by far-mode considerations when you consider salary. It’s an ordinary market transaction, just another charity, not the one super-critical humanity-saver.

        One question we haven’t touched on here is dmytryl’s point that some LWers take the Singularity very seriously indeed, making great personal sacrifices. Why do some people fail to separate near mode from far when the rules of social hypocrisy encourage the separation? 

        That’s an interesting question; it would be well to have examples. As it is, I’m not completely convinced there are such LWers. Maybe the ones who dropped out of college weren’t doing well to begin with. 

        Doomsday cults have existed where near mode is infected. Leon Festinger had some ideas about why this happens. Based on his reasoning, it might be that if the possibility of a Singularity were absolutely disproven, this would drive adherents to bring their far-mode beliefs near.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Anonymous,

        You’re not Thiel, are you?

        Maybe you know about this – I’ve seen nothing on the subject, but I haven’t looked hard: how did Yudkowsky come to be able to found the Singularity Institute? Did he somehow prove himself along the way? Did he just hang out a shingle and luck out that donors were willing to trust him? (Surely not that.)

      • Marc Geddes

        The only way Yudkowsky is going to get sufficient funds for Singularity is to win the national lottery ;)

         Ah, those dreams of Singularity 
        Sing along with this:
        “Someday we’ll find it, the rainbow connection. The lovers, the dreamers and me. All of us under its spell. We know that it’s probably magic.”

        http://www.camelotcastle.info/wp-content/uploads/2012/05/Rainbow-connection.mp3

      • dmytryl

        I really dunno whether he’s a sociopath or not. Making a living by talking other people into giving you money is strong evidence of that. Grandiosity is another. In the okcupid profile he mentions ‘orgasm denial’, which would be somewhat indicative of the broken mirror neuron system (someone’s orgasm should make you feel good too).

        At the end of the day, these so-called “beliefs”, for him, do not seem to carry into actions that lead to any loss for him (the near/far framing sort of obscures this fact), whereas for any donor save the ultra-rich Thiel, these beliefs carry into actual actions that result in a loss.

        With regards to how seriously these guys take things, someone had nightmares from Roko’s basilisk.

      • gwern0

         Dmytry:

        > I really dunno whether he’s a sociopath or not. Making a living by talking other people into giving you money is strong evidence of that. Grandiosity is another.

        I don’t know where to start. No, making money by talking other people into giving it to you is not strong evidence of that: the base rate of sociopathy/psychopathy is maybe 1%, and a good chunk of the entire population spends its time occupied in similar things, especially in academia (which is, if anything, inversely correlated with psychopathy). Maybe you should go read _The Handbook of Psychopathy_ and learn what the hell you’re talking about. We’ll wait.

        > In the okcupid profile he mentions ‘orgasm denial’, which would be somewhat indicative of the broken mirror neuron system (someone’s orgasm should make you feel good too). Then there’s all those morality posts, including dust specks vs torture, which are doubly creepy considering that he thinks he can make at least dust speck of a difference to the immense number of post singularity minds.

        Orgasm denial has nothing to do with psychopathy; the morality posts, by the way, are only weak evidence for something entirely different from psychopathy – being on the autism spectrum! Not that I expect you to change your mind on any of this, since you’ve been harping on LWers and Eliezer in particular being evil and psychopaths and irrational and everything bad for like a year now.

        Layman psychologizing and worthless ad hominems: fail.

      • dmytryl

        gwern0: Not many in academia just ask for money to prevent some supposed doomsday, and argue furthermore that donors should make all their donations to them and shouldn’t, e.g., donate to help children in Africa or the like (this ‘donate only to the most efficient charity’ thing). I can’t think of a single example. Not many in academia go on publicly about how someone’s AI project is going to kill everyone, if ever successful, with no regard for the potential danger to that person from such remarks. With regard to morality, talking a whole lot about morality is not the same thing as being moral, especially if the morality in question is quite weird (preference of torture over dust specks, for example).

        edit: btw, a remark. Your ‘competent murders’ part of the terrorism article is really creepy. WTF is it there for? You know there are all sorts of crazies, ITTS for example; why inspire such people?

    • rrb

      I doubt he makes $140,000. I looked up SIAI’s form 990 on guidestar.org and it reports that he made $88,200 in 2010. So you’re saying he got a $60,000 raise in the last two years? Where are you getting this from?

      • rrb

         (yeah I know I screwed up my subtraction there :p)

      • dmytryl

        How is that stuff taxed? Maybe he was speaking of the pre-tax equivalent for a non-charity employee.

      • rrb

         I’m not sure… SIAI doesn’t pay taxes on *their* income, but I think the *employees* of tax-exempt organizations pay income tax.

        According to this page from the IRS website, a tax-exempt organization has to withhold part of their employees’ salary and send it to the government:
        http://www.irs.gov/Charities-&-Non-Profits/Exempt-Organizations:-What-Are-Employment-Taxes%3F

        So it looks like the employees are paying taxes, unless they can get it back at the end of the year somehow?

      • dEMOCRATIC_cENTRALIST

         dmytryl,

        Why? Wouldn’t you expect that Yudkowsky would grant himself an ever-increasing income? If I were to estimate how much E.Y.’s salary increased over the past two years, sixty thousand dollars would seem about right. Would someone like E.Y. settle for anything less? (I know you’re skeptical of psychological speculation … but still.)

      • dmytryl

         dEMOCRATIC_cENTRALIST:

        It’s like meeting a teenager and they’re a lot taller than the last time you saw them…

        With psychological speculation… people are complicated, their beliefs are complicated, there are various layers, there are people believing various things for fun while simultaneously having a perfectly good mental model of those things being false, etc. People don’t live just for money.

      • dEMOCRATIC_cENTRALIST

        People don’t live just for money.

        True, but someone as narcissistic as Yudkowsky will insist that he be paid in proportion to his worth as it demonstrably increases. Since he believes he has accomplished great things in the past couple of years, he must also believe he is entitled to large increases in income.

        Of course, the map isn’t the territory and all those platitudes. But that’s no reason to avoid psychological maps.

      • dmytryl

        dEMOCRATIC_cENTRALIST: Well, that’s true; no doubt he believes he’s making up for what he’s paid. I’d say such people believe their stuff but don’t find it the least bit compelling. Dual mental model: the belief that’s at work is the one resulting in maximal gain, inclusive of feeling good.

        By the way, Muehlhauser has been line-dropping on every possible occasion that Yudkowsky works like 4 hours a day at most. I wonder what’s up with this.

      • dEMOCRATIC_cENTRALIST

         I don’t have time to try and retrieve it, but I am certain that E.Y. (within the last year) responded to a question on LW concerning his income, and he responded that he made $140,000.00 per annum. He added that the main expenditure was the result of his intolerance of small dwellings. He wanted a large house. Honest. No question about it. It’s on the record (unless he expunged it, which I doubt).

        As to his previous salary, I dunno, but I know that at some point he went from senior fellow or some such to main researcher or some such. I’d guess the change in title called for a higher salary.

        (Oh, I’m srdiamond)

      • rrb

        Are you sure you’re not just misremembering *this* comment thread?

        http://lesswrong.com/lw/5fo/siai_fundraising/4g9v?context=3#comments

      • rrb

        I’m about to sound like I use my time really poorly, but I looked through every Yudkowsky post that mentioned “140” and didn’t find anything talking about his salary.

      • dEMOCRATIC_cENTRALIST

        > I’m about to sound like I use my time really poorly

        Well, if you’re an ardent Yudkowskyite, even a donor, it might be worth your time if you’re amazed that he demands so much for himself in the service of his cause. Otherwise, your inquiry has some positive externalities if his more naive followers take note.

        Yudkowsky doctored the post using the “Edit” feature, without noting it! (Isn’t it odd that his past salary is discussed but his current salary never mentioned?)

        My best guess to which post it was is http://lesswrong.com/lw/5fo/siai_fundraising/413g

        “Large air-conditioned living space, healthy food, both for 2 people (myself and girlfriend). My salary is at rough equilibrium with my spending; I am not saving for retirement. The Bay Area is, generally speaking, expensive.”

        Isn’t it odd that he discloses his living conditions but not his salary? Anyway, he did disclose his salary but had second thoughts. Plus, I think he strengthened the one where he had said he didn’t welcome discussion; now it’s closed. So, it’s pretty clear he regretted having let it go as far as it did, leading to an unwanted disclosure.

        Finally, you could start with the requirements for renting a large air-conditioned space in Berkeley. It’s expensive. He was justifying the $140,000.00 figure, which he chose to remove.

        Maybe he is a psychopath.

      • rrb

        dEMOCRATIC_cENTRALIST:

        The conversation you link to was a comment on a post that contained his salary. If you click “view the original post”, you’ll see it in a big table labeled “Officer Compensation”. Everybody in that comment thread was talking under the understanding that Yudkowsky makes $90,000, as stated in the post.

        Also, edited comments contain an asterisk next to the time. There’s no asterisk on Yudkowsky’s comments.

        He didn’t remove the $140,000 figure. There never was one, you just misremembered.

      • dEMOCRATIC_cENTRALIST

         rrb,

        Your argument seems sound. I shouldn’t trust memory so much, especially when feeling certain.

        Maybe Yudkowsky is less narcissistic than I supposed.

      • Luke Muehlhauser

        > I am certain that E.Y. (within the last year) responded to a question on LW concerning his income, and he responded that he made $140,000.00 per annum.

        This is easy to check, because you can search LessWrong for “140,000” and “140k”. When you do, you don’t get any results in which Eliezer said he made $140,000 per year.

    • Luke Muehlhauser

      Yudkowsky does not make $140k/yr, and never has.

      Also, in what sense is SI a “front” for Peter Thiel? Peter Thiel funds us, but he doesn’t have much influence over what we do on a month-to-month basis.

    • Anonymous

      Wow, you’re quite the conspiracy theorist, aren’t you? I wonder who Peter Thiel is a front for. Probably the CIA.

      • gwern0

        Well, obviously he is. Just think about how useful seasteads would be to CIA ops! And then of course you need to analyze all the info collected, and what better than an AI?

        Just don’t ask about how LW is a front for their sinister homeschooling agenda. You wouldn’t be able to sleep at night.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        E.Y. gave you a job (at long last). You’re hardly objective. It would be seemlier were you to provide arguments rather than bad irony.

      • http://www.gwern.net/ gwern

        Right on schedule; stay classy, srdiamond!

        Still, I should count my blessings: better to be accused of being biased because I sometimes work part-time for Luke (not EY) than to be accused of being a psychopath or a billionaire’s bagman.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        You’ll arrive one day, gwern(0). Your identity is so well camouflaged that nobody knows who you are. By the way, your advice to terrorists on your blog sure was “classy.” You don’t know what the word means.

      • http://www.gwern.net/ gwern

        Sounds to me like someone doesn’t understand my terrorism essays if they think they’re ‘advice’.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        A few points for clarity; gwern(0) has one skill: obfuscation.

        1. I made a joke once on LW about the Singularity Institute being a device to promote homeschooling. (You know, given that they’re all dropouts.) Everyone got it but gwern(0), who obviously can’t get over it.

        2. I never asserted that E.Y. is a psychopath. I entertained it and firmly rejected the possibility.

        3. Nobody has denied that rightist billionaire Peter Thiel is absolutely critical financially for the survival of the Singularity Institute. Nor has anyone challenged the truism that control of the purse strings equals power to control the agenda. Couple this with the obvious fact that LW is an EY fan club, that two years’ worth of salary was poured into LW by Thiel by “seeding” it with lengthy daily postings from E.Y., and that writing the Sequences to put LW on the map was the Singularity Institute’s project, funded by Thiel.

        Then, unless you’re very naïve about the role of rightist billionaires in U.S. intellectual and political life, you won’t be complacent about one having financial control over a public figure made so by that same reactionary billionaire. 

      • http://juridicalcoherence.blogspot.com/ srdiamond

        > Sounds to me like someone doesn’t understand my terrorism essays if they think they’re ‘advice’.

        Sounds to me that you don’t understand your terrorism article, if you don’t see how “unclassy” it is. Self-knowledge: the ultrascarce resource among LWers.

      • http://www.gwern.net/ gwern

        > Sounds to me that you don’t understand your terrorism article, if you don’t see how “unclassy” it is. Self-knowledge: the ultrascarce resource among LWers.

        I’m sorry, was there supposed to be an argument in there? Are you sure you know what the points of the essays were? Is this the same way you were “sure” that Eliezer is paid $140k a year by SI?

        > Everyone got it but gwern(0), who obviously can’t get over it.

        Er, what? No one got it. There was no humor in it. You were apparently sufficiently ashamed of your claim to delete it entirely. Fortunately, I am used to such shenanigans and had quoted it in my reply: http://lesswrong.com/lw/ct8/this_post_is_for_sacrificing_my_credibility/7d9r I invite anyone to read it and the other quoted parts, and srdiamond’s comments here, and see for themselves whether srdiamond’s description of the matter is remotely accurate.

        > 2. I never asserted that E.Y. is a psychopath. I entertained it and firmly rejected the possibility.

        Wow. Good job. BTW, I’ve entertained the possibility you enjoy raping small animals, but don’t worry! I’ve firmly rejected the possibility for lack of evidence.

        (Also, that comment was directed more at dmytry, who continues to ‘entertain’ the possibility.)

        > 3. Nobody has denied that rightist billionaire Peter Thiel is absolutely critical financially for the survival of the Singularity Institute.

        No one has, that’s true. He provides somewhere under half the funding – as I’ve pointed out myself.

        However, how important is that? I don’t know of any connection between Thiel’s funding and LW or Eliezer posting on LW. And as far as I know, Thiel has minimal influence on LW content and SI itself; I’ve never seen an article taken down apparently because of Thiel, even as LWers mock cherished beliefs of Thiel’s like Christianity, and he’s never come up in any of my assignments in the past (much less in a censorious role). So I can’t say I particularly care about that observation. Thiel funds a lot of things, and SI seems to be one of them.

  • http://juridicalcoherence.blogspot.com/ srdiamond

    Anonymous,

    I don’t respond to anonymous critics calling people names since they lack the courage and integrity to leave their own name–or even a standard handle.

    (I’m also democratic centralist)

  • dmytryl

    Cryonics:

    I am currently doing a software job examining block-face microscopy data. I have some questions for cryonics proponents:

    a: Why are you not more concerned that actual neuroscientists, as well as accomplished scientists – that is, people who we know for sure have seen further than others – seldom sign up for cryonics, while the people who advocate cryonics seem to be first-generation atheists and generally lack tested see-further-than-others accomplishments?

    b: I can’t help but note the absence of any good messy offers (that would have irrational people recoil in horror), such as cutting up the brain into small pieces and fixing those with proven sample preparation techniques, accepting the diffuse brain damage that will result – hey, at least you know that the bulk of it has not been shredded by ice. And an abundance of irrational offers, such as whole-body preservation and the talk about ‘nanobots’ (never mind the technical infeasibility; the data will need to be processed together at global scale in any case, so you’re still uploading the whole).

    c: From what I can gather, the economics of this is completely nuts. Whole-body preservation being done at a loss, for instance (I forget the reference). Whole-body should be the extravagant, throwing-money-away option.

    d: It seems to me that anyone who has already frozen someone has real trouble bringing themselves to take the risk of a study that could demonstrate that previous methods unambiguously failed.

    • gwern0

       > The whole body preservation being done at loss, for instance (forgot a reference).

      I haven’t heard that. Actually, I was reading one ALCOR page on the economics of storing people in their dewars, and one section was on how the whole-body people were heavily subsidizing the neuro people: apparently you can fit one whole body into a dewar with enough room left for about 3 heads, so for each whole-body patient you have to buy 1 dewar regardless, but you get space for 3 heads ‘for free’. Or something like that, anyway. Not something I was expecting.
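
      (A back-of-the-envelope sketch of that cross-subsidy logic; the 1 body plus ~3 heads packing claim is the one described above, while the dewar cost and the body-to-head space ratio are made-up numbers, purely for illustration.)

      ```python
      # Illustrative only: the dollar figure and the space ratio are assumptions.
      dewar_cost = 30_000      # hypothetical cost of one dewar
      heads_per_dewar = 3      # rough "room left for ~3 heads" claim above
      body_space_in_heads = 4  # assume one body takes ~4x the space of a head

      # If the whole-body patient covers the dewar alone, the heads ride "for free":
      whole_body_pays, neuro_pays = dewar_cost, 0
      print(whole_body_pays, neuro_pays)  # 30000 vs 0

      # If the cost were instead split by the space actually used:
      space_units = body_space_in_heads + heads_per_dewar
      per_unit = dewar_cost / space_units
      print(round(per_unit * body_space_in_heads), round(per_unit))  # ~17143 per body, ~4286 per head
      ```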

      • dmytryl

        this:

        http://lesswrong.com/lw/bk6/alcor_vs_cryonics_institute/6a0c

        seems to agree with me, but I originally heard it somewhere else. They should price the whole body very high to cover the costs of scientific experiments. The prices look high enough to afford some very good science, if you are into it, from the margin on just a few deals. A lot of experiments with animal heads, not just some 2D images with no ice from one rabbit head. Or ask one of the very rich clients to fund the scientific side of it.

      • http://www.gwern.net/ gwern

         More’s with CI, not ALCOR, and it sounds like an offhand estimate. I don’t know what sort of dewars CI uses, but they could have some similar cross-subsidy which he is not factoring in.

      • dmytryl

        I remember I was very surprised when I read it, though. It wasn’t this comment, it was something else. In any case, they seem to do surprisingly little experimental testing. Anyone serious would consider research a basic necessity, include it in the price, and have tens of cryo-preserved animal brains of different species, and wouldn’t have the issue of, quoting Hanson, “The people who developed the anti-freeze published some 2D pictures that look good, but we don’t know how selectively these were chosen, or how much worse is the typical cryonics freezing process.”

        edit: In any case, I think plastination may have very good scientific support quite soon.

      • http://www.gwern.net/ gwern

        People are willing to donate to support existing cryopreservations, or with an eye to their own cryopreservation; no one regards better research as more of a priority than their own skins. I’m not sure this is wrong, but feel free to donate to the Brain Preservation Prize, which is targeted at exactly this problem. (I already have.)

      • dmytryl

        > no one regards better research as more of a priority than their own skins

        There’s probably orders of magnitude more people contributing to better research than people signed up for cryonics, actually…

      • http://www.gwern.net/ gwern

         Involuntarily, though! One thinks of the polls taken during the Apollo program showing only a minority of the American populace in favor (although pretty much everyone had to pay taxes).

      • dmytryl

        This minority still outnumbers cryonicists by a huge factor. If cryonics companies don’t have funds for research, they should raise the prices by a few tens of percent, especially on full body. Research is not that expensive if you are into it; a lot of labs do amazing things on rather small budgets.

        Taxes worked to get us to the moon; libertarianism doesn’t work well enough to freeze a few cow heads under oversight, or so it seems.

      • VV

        @gwern0

        > More’s with CI, not ALCOR

        No. http://en.wikipedia.org/wiki/Max_More
        “At the start of 2011, Max More became president and CEO of the Alcor Life Extension Foundation”

      • VV

        If cryo companies were seriously interested in research, they could certainly find various ways to do it.

        Consider a cryopreservation lottery, for instance: cryopreserve a body or head charging half the price, then an independent and trusted party tosses a coin. If heads comes up, the “patient” is kept intact; if tails comes up, it is dissected for research.

        Given that the probability of cryonics success is so small, and there is so much uncertainty on it that even the order of magnitude is difficult to estimate, halving it won’t significantly affect any expected value estimation that you may be doing, and the reduced price would open it to people who wouldn’t be able to afford it at full price.
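
        (A minimal expected-value sketch of that point; every number below is a made-up assumption, since the real success probability is anyone’s guess.)

        ```python
        # Illustrative only: all figures are assumptions, not real estimates.
        full_price = 80_000            # hypothetical full cryopreservation price
        p_success = 1e-3               # hypothetical, highly uncertain success probability
        value_of_revival = 10_000_000  # hypothetical value placed on being revived

        ev_standard = p_success * value_of_revival - full_price

        # Lottery variant: half the price, but a fair coin may send you to research,
        # so the chance of eventual revival is halved as well.
        ev_lottery = 0.5 * p_success * value_of_revival - 0.5 * full_price

        # Both terms scale by 0.5, well within the order-of-magnitude uncertainty
        # on p_success, while the up-front price barrier is halved.
        print(ev_standard, ev_lottery)
        ```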

        But of course real-life cryonicists would recoil in horror at the thought of their precious brains and bodies being dissected, experimented with, and finally discarded in the waste bin, even though if no research is done they may well be doomed anyway, even if their ice graves are kept intact until the end of time. But cryonics is not about rationally increasing your lifespan. It’s the transhumanist equivalent of a religious burial ritual.

      • dmytryl

        VV:

        I’m wondering if fixing the tissue with a strong crosslinking fixative and storing it in a jar is a better idea… If I know that the information is very likely in the form of proteins and their placement, but I do not know their solubility, then for purposes of preserving information I’d rather cross-link the hell out of it and put it in a jar than perfuse with solvents and then freeze. Dipping a book in solvent may keep the book easy to open, but dipping the book in fixative is better if you want to, you know, not wash off the ink.

  • http://juridicalcoherence.blogspot.com/ srdiamond

    I got an e-mail from Robin informing me that one should never have four comments visible under “Recent Comments.”

    Have other frequent contributors gotten similar notices? Perhaps I’m an extreme case, which I’d be glad to take account of, but the rule seems, to put it bluntly, self-aggrandizing and antirational. You can’t hope to have discussion with rational give-and-take when continuing a discussion is made costly.

    The impression it gives, which I don’t know the accuracy of, is that Robin is determined that the OP always remain the center of the discussion. What purpose does such a rule serve? Most blogs thrive on more discussion.

    What gives with this silliness?