Open Thread

Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.

  • http://www.nancybuttons.com Nancy Lebovitz

    The Harlem Renaissance section here is a striking example of raising your goals enough to make success possible rather than just putting out the usual amount of effort.

  • HH

    With the VP debate going tonight, I’m curious about one thing. The media have been going on and on about how there are lower expectations for Palin because Biden is a veteran debater and Palin’s recent debate opponents have been moose. [I may be confusing stories here.] I wonder, what does that constant attention to the lower expectations for her actually do to those expectations? If everyone’s been made aware that not much is expected in the debate, does that have the effect of raising the expectations? Or does it simply remind more people to expect less?

  • Tim Tyler

    I’ve written an essay about the wirehead problem – check it out if you are interested: http://alife.co.uk/essays/the_wirehead_problem/

  • Ben Jones

    Tim, interesting stuff. Only thing missing is a foolproof definition of exactly what sort of optimization process constitutes wireheading and what doesn’t. This is a very basic gripe, I know, but an important one nonetheless.

    You and I would look at an alien paperclip maximiser and say ‘Wirehead’. However, the paperclip-worshipping civilisation that built it would think of it as a perfectly sensible, ethically justified system – a utility maximiser. Ditto our AI that reorganises the solar system to maximise computation without harming any living being. Great for us, bad for solar orbit maximisers. Every optimisation process is someone’s wirehead. Utility is in the eye of the agent.

  • http://occludedsun.wordpress.com Caledonian

    The essence of wireheading is the bypassing of evaluative functions and producing reward sensations directly.

    A paperclip-manufacturing AI wouldn’t be wireheaded by definition, because it has to look at the world and detect certain configurations in order to feel rewarded. A wireheaded AI wouldn’t care about external conditions at all — it would just feel great, all the time, for no reason, and regardless of whatever else was happening.

  • Peter

    Can we have a month where we don’t talk about paperclips and renegade AIs?

  • Tim Tyler

    A wireheaded AI wouldn’t care about external conditions at all – it would just feel great, all the time, for no reason, and regardless of whatever else was happening.

    Maybe – though the example of the heroin addict suggests things are not necessarily always that simple. Curt once gave Enron as an example of a company that had stopped trying to make shareholder profits (the normal utility function for such a company). Yet it still acted as though it had some preferences – it behaved as though it wanted to cover up its accounting scam for as long as possible.

    I haven’t tried to formally define wireheading in the essay – but it doesn’t seem critical to understanding the basic problem. Essentially, wireheading is a change resulting in the generation of reward for something that “shouldn’t” generate reward – to the point where decidedly odd behaviour results. (Interpret “shouldn’t” as you will.) There’s also a kind of “negative” wireheading based on pain-killing.

  • citrine

    How about an answer to the Prediction v Explanation puzzle?

  • Recovering irrationalist

    Hi. In 3 weeks there’ll be even more Bayesians in the Bay Area than usual. Would anyone, locals or visitors, potentially be interested in an informal OB meetup near to that weekend, possibly on an evening or the Sunday?

  • Clusterpost

    On the (recently closed) Awww(ful)-thread: The cluster of authors from one IP is (obviously) a fictional persona – a hard core singularitarian transhumanist who takes things a bit too seriously and a bit too far, in particular, the “deny humanity, deny yourself, transcend biology” memeplex. Awful? It’s supposed to be awful. But why is it awful? Why are such convictions, the denial of the primacy of human values, needs, and instincts, disagreeable? Non-fictionally, I’ll be a fan of Eliezer forever, supporting his freedom of choice, whether abstinence, girlfriend, or one that self-replicates indefinitely. I can’t speak for the other guys who mentioned such ideas – are there really people who think like that, in RL?

  • http://yudkowsky.net/ Eliezer Yudkowsky

    One of the things I have scheduled – it remains to be seen if I’ll get there, because I’m already on overtime – is a sequence on Fun Theory. That answers the objection with respect to the future of humanity. In current practice, the answer is that it isn’t necessarily true that you can get more scientific work done without an SO; that will vary depending on temperament, resources, and of course the girlfriend in question. The calculation is worth doing, but it’s not a foregone answer one way or the other, and I have no intention of going into the details in my case.

  • http://pdf23ds.net pdf23ds

    “that will vary depending on temperament, resources, and of course the girlfriend in question”

    Including various psychological factors like security/insecurity, aloofness, neuroticism, depressiveness, and other miscellaneous psychobabble.

  • http://shagbark.livejournal.com Phil Goetz

    A paperclip-manufacturing AI wouldn’t be wireheaded by definition, because it has to look at the world and detect certain configurations in order to feel rewarded. A wireheaded AI wouldn’t care about external conditions at all — it would just feel great, all the time, for no reason, and regardless of whatever else was happening.

    This relies on being able to distinguish internal and external worlds. If the paperclipper is so powerful that you might as well call the solar system its “body”, how is detecting configurations in the nearby physical world different from detecting impulses in your brain?

  • http://shagbark.livejournal.com Phil Goetz

    The cluster of authors from one IP

    How do you see an author’s IP?

  • Nick Tarleton

    Why are such convictions, the denial of the primacy of human values, needs, and instincts, disagreeable?

    Because there’s no light in the sky, outside of humanity, for values to come from. If you reject all our evolved preferences as philosophically invalid, what’s left?

    Would anyone, locals or visitors, potentially be interested in an informal OB meetup near to that weekend, possibly on an evening or the Sunday?

    Yes. (Visitor, don’t know yet exactly which days.)

  • http://shagbark.livejournal.com Phil Goetz

    How do you see an author’s IP?

    [read thread.] Oh. Never mind.

  • Z. M. Davis

    Thought for the day–probability theory and decision theory push us in different directions: induction insists that you cannot forget your past; the sunk cost fallacy demands that you must.

    Recovering: “Would anyone […] be interested in an informal OB meetup near [Oct. 25]?”

    Count me in!

  • Anti-reductionist

    Nick Tarleton: If “right” is just whatever people value, that means that if you kill everyone who doesn’t have moral value X, X automatically becomes true.

    I guess the paperclip AI isn’t so bad after all! Once all the humans are dead, there’s nowhere for value to come from but the AI itself, so…

  • Psy-Kosh

    anti: no. The notion, as I understood it, amounted to this:

    When we say “should/moral/etc.” we mean something. We may not fully be able to articulate that meaning, and we may have trouble working out what actually fulfills the various criteria corresponding to it, but to the extent that there is a meaning/question/computation encoded in our brains and associated with the relevant words, that’s what we ought to appeal to.

    That does _not_ mean it’s “oh, whatever people happen to value.”

    It’s more the notion that the term “morality” refers to something specific. It happens to be that people tend to value this stuff called morality. And the beings that don’t value it, well… they’re by definition immoral, so there’s a limit to how much their opinion ‘should’ count. (“Should,” of course, being a word that translates to whatever those partly “black box” criteria of morality ultimately turn out to be.)

    To the extent one rejects this notion, one’s going to have trouble talking about morality at all. I mean, presumably you mean something by the word, even if you can’t articulate precisely what you mean, or can’t at this time accurately determine the outcome of the “morality computation”. To the extent that it does mean something specific (that is, that it’s a lever in your mind to a certain “black box” that computes morality), it doesn’t matter what people think is moral.

    (Did that come out relatively clearly?)

    I.e., recall the distinction between a calculator trying to calculate the answer to the question “what’s 3 + 5?” and a calculator trying to calculate the answer to the question “what does this calculator think is the correct answer to 3 + 5?” The latter can be more or less anything, but the former has a unique answer.
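    A minimal sketch of that distinction in Python (my illustration; the function names are hypothetical, not anything from the comment above). The first function answers the external question; the second merely echoes the calculator’s internal state:

    ```python
    # "What is 3 + 5?" has one right answer; "what does this calculator
    # think 3 + 5 is?" is answered by whatever state the calculator happens
    # to be in, correct or not.

    def compute_sum(a: int, b: int) -> int:
        """Compute the answer to the external question."""
        return a + b

    def report_own_belief(calculator_state: dict) -> int:
        """Merely report the calculator's internal belief, whatever it is."""
        return calculator_state["believed_answer"]

    print(compute_sum(3, 5))                          # always 8
    print(report_own_belief({"believed_answer": 9}))  # 9 -- or anything else
    ```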

  • http://profile.typekey.com/arundelo/ Aaron Brown

    Eliezer: have you read or do you plan to read Anathem, the new Neal Stephenson novel? It has elements that remind me of your conspiracy stuff.

  • Femme

    Regarding: Past behavior, the one you used to know. 99

    Please don’t tell me this moment is not a bias:)

    http://ca.youtube.com/watch?v=8OyD_ZfqXXw

    Anna:) My abstract view

  • http://www.iphonefreak.com frelkins

    @Psy-Kosh

    “It happens to be that people tend to value this stuff called morality.”

    Not really.

  • Thomas Ryan

    I have no immediate peers. Do a lot of OB readers have the same problem? I still stand by my decision to stay out of college, but I wish I could be around people with similar interests. I do have friends in college, but none of them are as passionate as I am about my interests: math, writing, reading, good movies and music… Yet I feel like I have many years to go before I can contribute anything to any of these fields (meaningful or not, I feel obligated to try).

    This is one of the few places where I can go and feel among “my people.” How many of you are like this?

  • Doug S.

    Something interesting I saw recently on the subject of economics:

    This Economy Does Not Compute

    My intuition suggests that the kind of modeling being described in this article should be extremely valuable. As there are many economists that read this blog, I’d like to hear what they think.

  • Z. M. Davis

    Thomas Ryan: “[…] How many of you are like this?”

    At least one! I’m a lonely dropout-cum-generalist-autodidact as well. You can email me at: zack m davis {-at-} yahoo point cahm (no spaces; you will forgive these cumbersome antispam measures) if you want to talk.

  • michael vassar

    I’ll be in the bay area for a month, roughly, 10/14 to 11/17, so yes I’m happy to join people for an OB meet-up, preferably relatively early in that period.

  • mjgeddes

    I’m sorry readers have had to endure another month of straw-men, misconceptions, non sequiturs, ideology and superficial analysis.

    In particular, the idea that intelligence is somehow reducible to a purely functional description (‘Bayesian Induction’, ‘Optimization’) could be a *big* mistake. That’s *one* aspect of intelligence – *optimization* is a big insight, to be sure – but I don’t for one moment believe that it’s sufficient to encompass a full definition of intelligence. A more abstract (higher-level) description would base intelligence on *the aesthetics/elegance/simplicity of ontological representations*. I don’t for one moment believe that calculation of semantic similarities (the basic operation at the ontological level of abstraction) is reducible to Bayesian Induction, although of course there would have to be a Bayesian component to it.

    What if it turns out that Bayesian induction is not sufficiently general to fully encompass intelligence? In summary: knock over the ‘Bayesian Induction’ domino, and the rest of the AGI stuff posted here would collapse like a house of cards – do you EY fan-boys realize that?

    Libertarian fan-boy faith has also come crashing down, with the US economy in near total melt-down. This has been a common feature of ‘free’ markets as far back as records go. Thank goodness the Libertarian ideology promoted by many self-proclaimed *geniuses* here will never be implemented.

    The moral of all this is this great quote:

    Conservatism is suspicious of thinking, because thinking on the whole leads to wrong conclusions, unless you think very, very hard

    -Roger Scruton

  • http://blog.greenideas.com botogol

    @ Thomas Ryan You’re not alone

  • Ben_Wraith

    Thomas Ryan, Z. M. Davis:

    I would think there are a good number of OB readers like this. Myself included: although I was homeschooled and am seriously considering college, I find autodidacticism pretty appealing. [My email is naxicasa {-at-} gmail point cahm, if you care.]

  • Joe

    Regarding wireheads:
    The first question asked in your article, Tim, is “How can we prevent wireheads from arising?”

    Why do we want to prevent them from arising?

  • Wirehead

    Indeed, I’m a wirehead and I like it!

    🙂

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Joe, I’d like to sell you a drug that will make you believe you’re posting comments to Overcoming Bias. This will be much more convenient than actually posting them.

  • l

    Eliezer, are you implying there is some goodness that can’t be simulated?

    Did I interpret that wrong?

  • http://occludedsun.wordpress.com Caledonian

    Eliezer, are you implying there is some goodness that can’t be simulated?

    You can’t buy integrity. You can’t simulate the value of reality. For any method X, you cannot use X to produce ‘goodness’ incompatible with X.

  • Ian C.

    In the movie Pi, whenever the protagonist gets stuck, he restates his premises:

    1. Mathematics is the language of nature
    2. Everything around us can be represented and understood through numbers
    3. If you graph the numbers of any system, patterns emerge
    4. Therefore, there are patterns everywhere in nature
    Hypothesis: Within the stock market there are patterns as well.

    If that movie was about an AI researcher, what would his premises be?

  • Valter

    To Doug S:

    you may be interested in this reply to Buchanan; if you are interested in agent-based computational economics, go here.

  • http://bccy.blogspot.com fr

    @Ian

    “In the movie Pi, whenever the protagonist gets stuck”

    Ian C., may I point out that Max, the protagonist in Pi, is crazy? He is paranoid, delusional, self-harming, and as Darren has stated, “addicted” to a self-created monster. I wouldn’t look to Max as a reliable guide to any sane endeavor. Unless you are truly interested in the premises of a lunatic AI researcher?

  • Ian C.

    @fr: Max may have been crazy, but I don’t think making your premises explicit is crazy. I would say most AI researchers presume:

    1. Intelligence is general
    2. Reductionism is true
    3. A Von Neumann machine can do everything a human brain can

  • http://occludedsun.wordpress.com Caledonian

    Max wasn’t crazy. He was afflicted with chronic migraines, possibly resulting from his intuitive understanding of the mathematics behind a very strange attractor that may be involved with the spontaneous generation of life.

    Unless your premise is that the entirety of the movie was nothing more than a series of hallucinations, Max was perfectly sane. Impressive, given that he was pursued by both Wall Street executives and hysterical Kabbalists.

  • Michael Howard

    Recovering irrationalist: an informal [Bay Area] OB meetup near to [the weekend 24-26 Oct] possibly on an evening or the Sunday?

    Nick Tarleton: Yes.

    Z. M. Davis: Count me in!

    michael vassar: yes I’m happy to join

    Cool, 4 so far, any more?

    OK, time to cast anonymity to the winds. Anyone interested who hates posting, mail me at… *eyes Spambot* … cursor_loop 4t yahoo p0int com.

    As someone who recently moved from a tiny Northern place to London (Wow, real live Bayesians and Transhumanists running wild!) I can confirm, this kind of stuff is much more interesting face-to-face, much more motivating, and definitely worth encouraging!

    If I get more replies I’ll ask Robin if we can do a meetup post, then we can decide when, where and what.

    Mike (aka Recovering)

  • http://rhollerith.com/blog Richard Hollerith

    Eliezer, are you implying there is some goodness that can’t be simulated?

    Another holodeckist (holodecker?? holodeckard??). Gosh, there are lots of them!

  • Doug S.
  • Lars

    I’m impressed by how much compartmentalization the brain can do. For example, I have a quite different viewpoint on politics, society, and the world when discussing these things with my roommate than I do back at home. Yet, both viewpoints seem to be built of very strong and genuine convictions. It seems we humans will vary much of our thought depending on the group we are around.

    I’d be interested to hear more analysis of mating and reproduction, though I don’t have anything specific to contribute at this point.

    @Roger Scruton

    Please, avoid inflammatory rhetoric that doesn’t contribute. Save that for Youtube commentary. Thanks.

  • Tim Tyler

    The reality of the simulated happiness of a wirehead seems beside the original issue. A wirehead may well be genuinely ecstatic – but that’s a problem for everyone else, since it typically shorts out their motivation circuits and prevents them from usefully contributing to society. Potential wireheads are unreliable – they regularly need a factory reset. Products like that would suffer from poor reviews and reduced sales.

  • jamie

    About the Ig Nobel winners: I think it is quite amazing that slime molds are capable of learning.

  • http://www.iphonefreak.com frelkins

    @Lars

    You know I was never a fan of Roger Scruton, but I confess his Xanthippic Dialogues is quite witty.

  • Datebait

    Ig Nobel:

    PHYSICS PRIZE. Dorian Raymer of the Ocean Observatories Initiative at Scripps Institution of Oceanography, USA, and Douglas Smith of the University of California, San Diego, USA, for proving mathematically that heaps of string or hair or almost anything else will inevitably tangle themselves up in knots.
    REFERENCE: “Spontaneous Knotting of an Agitated String,” Dorian M. Raymer and Douglas E. Smith, Proceedings of the National Academy of Sciences, vol. 104, no. 42, October 16, 2007, pp. 16432-7.

    If only one could extract energy, or any utility, out of knotting…

  • Will Pearson

    I’m interested in the evolutionary origin of grief and the fear of death, considering it seems to be a driving force behind much of what Eliezer and other transhumanists do.

    Sure, I get choked up emotionally when I think of a loved one dying, but other emotional responses, such as a queasy fear of public speaking, I try to overcome. What makes one emotional response appropriate and another one to be squashed? The prevention of death for all humanity has not grabbed me intellectually the way it seems to have done for other thinkers. I’m curious why not.

    Can anyone recommend this book?

  • Tim Tyler


    The prevention of death for all humanity has not grabbed me intellectually, like it seems to have done for other thinkers.

    IMO, the most probable way that the “death” problem will be fixed is to use machines whose brains can be backed-up and copied. Since I doubt there will be very many uploads, full implementation of that solution will most likely entail the eventual deaths of most humans.

  • http://pdf23ds.net pdf23ds

    Will: My case may help to understand all these responses. I have a normal aversion to self-harm, but no fear of death. I have an attachment disorder whereby I’m extremely unwilling to be emotionally open with people. I also have no fear of public speaking at all. So it could be that these are all linked together.

  • Doug S.

    I have a normal aversion to self-harm, but no fear of death. I also have no fear of public speaking at all.

    This also describes me… I don’t know if I have an attachment disorder as such, but my mother says that I lack empathy for other people and care too little about how other people view me.

  • Joe

    “Joe, I’d like to sell you a drug that will make you believe you’re posting comments to Overcoming Bias. This will be much more convenient than actually posting them.”

    I’ll assume you’re not flippantly asking me to shut up, but instead trying to make a point.

    Let me then be more specific in asking why we don’t want to be wireheads/holodeckists. The real question is, why do you think our end goal isn’t happiness?

    I have found that if I ask people why it’s not a good idea to go become an opium addict (another form of wireheaded-ness), the discussion usually continues along the lines of “because then nobody would harvest the crops, and society would fall apart, etc.” Now, obviously that’s bad, and the same argument holds for wireheads. However, all the things people use as arguments against wireheads seem to end up saying that in the long term, wireheads are bad because they decrease aggregate happiness.

    So, the question is, are we optimizing for something other than happiness? It always seemed to me that the answer was no, but that we were afraid to focus only on happiness because we instinctively knew that this would cause us to make mistakes. Even people who think their morality comes from God seem to be doing what God says because they think it has the best chance of increasing happiness.

    I think saying “our goals are more complicated than just happiness” is a refusal to consider the question of what our goals are. I’ve never heard an example of something that is mostly-universally thought of as a Good act but which does *not* optimize for happiness.

  • http://occludedsun.wordpress.com Caledonian

    I think saying “our goals are more complicated than just happiness” is a refusal to consider the question of what our goals are.

    There are other reward states besides happiness. Much of human satisfaction lies in switching between various reward states — by themselves, any one of them quickly becomes stale and uninteresting.

  • http://pdf23ds.net pdf23ds

    Joe, I think empirically speaking our goals are usually much more complicated than happiness. For instance, parents tend to be considerably less happy than the childless but rarely report regretting that they had children, and in fact report a sense of being fulfilled by the experience.

    Wireheading is something that happens to certain optimizing systems that makes them much less effective than others in gathering resources. Wireheaders (mainly drug addicts in their current incarnation) are economic failures. I don’t think this paragraph directly contradicts any assertion you’ve made, though I do think it needs to be addressed from your position.

    I think the most accurate sentence describing human motivations is something along the lines of the following. Humans consciously try to optimize for happiness, but conscious intentions have only an indirect (and some would say small or nonexistent) effect on actions; humans are built to (unconsciously try to) optimize for the persistence of their genes in the evolutionary environment, and thus, to a large degree, their actual unconscious motivations (i.e. their actual behavior interpreted ex post facto as optimization behavior) in the modern world are simply incoherent (i.e. not optimizing for anything in particular).

  • http://pdf23ds.net pdf23ds

    “parents tend to be considerably less happy than the childless”

    On self-reported happiness scales.

  • Douglas Knight

    pdf23ds,
    I think you are mistaken about the happiness research. Parents do report being happier than the childless. They just have the false belief that time spent with their children is fun. This is not a great case of multiple goals: you could contrast the moment-by-moment happiness assessment with the overall assessment, but it’s not so much that these are different goals, as that people have false beliefs about their relation. (But this is not to dismiss the claim that there are multiple goals. I think “happiness” should be broad enough to encompass them, but moment-by-moment assessment of happiness is not.)

  • http://pdf23ds.net pdf23ds

    Douglas, your recounting of the research is probably more accurate than mine. But I do think that it goes slightly beyond “time spent with children”, rather to “years lived with children”. People with grown children are happier than childless people of the same age, but people still raising children are less happy though more fulfilled. Does that sound about right? (Admittedly I’m being lazy in not looking this up again. So goes the internets.)

  • Douglas Knight

    I was mainly thinking of this study which admits that other studies get a variety of results, though some go away when other factors are controlled. It is the apotheosis of that concern, using identical twins.

    It asks about “satisfaction,” which you might group with fulfillment as opposed to happiness. I suppose there are studies comparing these choices, but I don’t know them. I’d guess that they are pretty highly correlated; the literature seems to act that way.

    I think there was another study that found that French women had a better sense than American women of how (not) fun childcare is, and that they did less of it, enjoying it more, perhaps because they made decisions about it based on more accurate beliefs. It suggests to me that culture is just causing American women to lie about how their children make them feel. But propaganda may cause real happiness.

  • Grant

    I would like to see some discussion of the housing bubble and bailout plans. Specifically, the times when the government intervenes in the market price mechanism; this time it’s saying a bunch of mortgage-backed securities are undervalued by the market.

    Typically we rely on markets to set prices, knowing they do so better than any other mechanism. Occasionally this mechanism seems to “break”. However, does that mean it’s rational to switch to another (political?) mechanism? Do we have a system that can accurately predict when markets go awry? If we do, do we have something that can out-perform distorted markets? Or is all of this just a “do something!” bias?

  • http://profile.typepad.com/aroneus Aron

    I wonder if it’s possible to intersect the interests of our prediction market guru and our FAI guru. How about the following hypothesis:

    a) The probability of any given human institution developing AI is highly correlated to its funding.
    b) The easiest case for technology investment is when that investment actually supports a business model directly.
    c) The stock market as a predictive market produces rewards to the most accurate predictors.
    d) Predicting future trends for economic or business issues requires considerable synthesis of high-level pattern and low-level pattern matching (e.g. not strictly narrow AI).
    e) Therefore, it is likely that our first AIs may come from investment banks or hedge-fund equivalents.

    Now let us also consider that you have increasing amounts of capital put behind the decisions of quants and their systems TODAY. If there is a pattern that indicates ‘sell’ to a large number of lower-AI systems, it can be profitable to predict THAT and trigger it. This of course sets up a nicely recursive environment of minds simulating minds.

    Now perhaps it’s possible to construct an “X cancels out X” theory in which the market works perfectly regardless of how esoteric its participants may get. Could this be akin to Eliezer’s pre-FAI thoughts?

    I find it plausible that before AIs use us as tools, we will use them as tools to destroy ourselves.

  • Joe

    pdf23ds

    Do you think we should optimize for What-People-Want instead of happiness/pleasure? That seems like a viable alternative to What-Makes-People-Happy, but I don’t think I understand it. Let me think “out loud” here.

    There are some cases where people want things that are bad, because they’re wrong about something. Like, what if I wanted to stab myself because I thought it would feel great? Also, what happens when two people want mutually exclusive things? You have to measure the things against each other, and it seems like the way to decide which thing to do is to pick whichever one brings the most happiness.

    As for parents bringing up children, my understanding is that it might make the parents less happy but the good upbringing makes the children much more happy, for the rest of their lives. This still seems like it’s optimizing for net happiness.

  • Z. M. Davis

    Joe, you might want to cf. “Not for the Sake of Happiness (Alone)” and “Fake Utility Functions.” Also maybe CEV re mistaken beliefs leading to bad choices.

  • Tim Tyler

    So, the question is, are we optimizing for something other than happiness?

    According to most biologists, yes. As with most other organisms, the utility function for humans is well modelled as “expected number of grandchildren” – with the “expectation” being based on the assumption that we are in something like our ancestral environment.

    Is this utility function a good match for happiness? Probably not, at least according to this: “second and third children don’t add to parents’ happiness at all. In fact, these additional children seem to make mothers less happy than mothers with only one child”.

  • Ben Jones

    You have to measure the things against each other, and it seems like the way to decide which thing to do is to pick whichever one brings the most happiness.

    Hey, all you need now is a base-level formalisation of ‘happiness’ and we have a terminal value for our protean AI! So for the big prize, what do you mean by ‘the most happiness’, without resorting to terms like happy, fun or utility?

  • Doug S.

    I would say that it’s bad to become an opiate addict because negative feedback mechanisms within the brain limit the effectiveness of opiates to produce sustainable pleasure. In other words, you eventually lose your capability to experience pleasure from both opiates and the events that naturally trigger that particular reward system. In the long run, you end up less happy than if you had never started taking them.
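    A toy numerical sketch of that negative-feedback story (my own illustration; the parameters are made up and not meant as real pharmacology):

    ```python
    # Each dose delivers a fixed stimulus; tolerance builds up with use and
    # only partially decays between doses, so the net pleasure per dose
    # shrinks and eventually falls below the drug-free baseline of zero.

    def pleasure_over_time(n_doses: int, dose: float = 10.0,
                           buildup: float = 0.3, recovery: float = 0.1) -> list:
        tolerance = 0.0
        history = []
        for _ in range(n_doses):
            history.append(dose - tolerance)  # net pleasure from this dose
            tolerance += buildup * dose       # negative feedback accumulates
            tolerance *= 1.0 - recovery       # incomplete recovery before the next dose
        return history

    print([round(p, 1) for p in pleasure_over_time(12)])  # declines, then goes negative
    ```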

  • mjgeddes

    Everyone here seems to have ‘unloaded all their chips’ on Bayesian Induction. Continuing the poker-game analogy, you could say that the AGI folks here have ‘gone all in’ on Bayes. Either they’ll strike the jackpot… or they’ll lose everything.

    Of course, if you define intelligence in a sufficiently narrow way (i.e. optimally achieving goals), then you can fix your definition so it’s fully captured by Bayes (M. Hutter, S. Legg etc.). But that doesn’t mean that your conception of intelligence is necessarily fully correct…

    Let me suggest an alternative definition of intelligence, (which blog readers may well all find highly peculiar at first):

    Intelligence is the ability to form effective representations of your own intentions/values – Marc Geddes

    Folks should keep an open mind about the current ‘Bayesian Induction’ craze. There could be further advances still in store…

  • anon

    Via Marginal Revolution, a research paper describes the behavior of people diagnosed with borderline personality disorder in a repeated trust game. The gist of it is that cooperation broke down because the subjects made no attempt to restore the counterpart’s trust, even as her willingness to lend deteriorated.

    Very interesting stuff, especially in light of the current credit crisis – how much of our economy is dependent on fragile cooperation mechanisms?

  • Daniel Yokomizo

    Testing Many-Worlds Quantum Theory By Measuring Pattern Convergence Rates

    http://arxivblog.com/?p=656
    http://arxiv.org/abs/0809.4422

    This was published recently, but it seems to have received very little discussion. The paper is really short (2 pages; without the abstract and references it would be a single page) and claims to use Bayesian theory to provide a testable formula that should either confirm or rule out Many Worlds.

  • http://shagbark.livejournal.com Phil Goetz

    Testing Many-Worlds Quantum Theory By Measuring Pattern Convergence Rates

    http://arxiv.org/abs/0809.4422

    This was published recently, but it seems to have received very little discussion. The paper is really short (2 pages; without the abstract and references it would be a single page) and claims to use Bayesian theory to provide a testable formula that should either confirm or rule out Many Worlds.

    I didn’t understand it, but I suspect Tipler may be trying to measure, in one world, how fast the pattern converges summed over many worlds, which I think would be a mistake.

    I also suspect Tipler’s ideas won’t be given as careful an examination as they would have if someone else had put them forward.

    Is there a physicist in the house?

  • Tim Tyler

    Non-Many-Worlds quantum mechanics, based on the Born Interpretation of the wave function, gives only relative frequencies asymptotically as the number of observations goes to infinity. In actual measurements, the Born frequencies are seen to gradually build up as the number of measurements increases, but standard theory gives no way to compute the rate of convergence.

    …seems to defeat his own thesis. The results of the test he proposes might well be what the MWI predicts – but he himself claims that other theories are vague on the issue – so what’s the point? The conventional wisdom is here.
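    For what it’s worth, the purely statistical part of the question is easy to simulate. A minimal Monte Carlo sketch (my addition, not derived from Tipler’s paper) showing a relative frequency approaching a Born probability at roughly the 1/√N rate:

    ```python
    import random

    def empirical_frequency(born_p: float, n_measurements: int, seed: int = 0) -> float:
        """Simulate n two-outcome measurements; outcome 1 occurs with probability born_p."""
        rng = random.Random(seed)
        hits = sum(1 for _ in range(n_measurements) if rng.random() < born_p)
        return hits / n_measurements

    born_p = 0.36  # |amplitude|^2 for outcome 1 (arbitrary example value)
    for n in (100, 10_000, 1_000_000):
        freq = empirical_frequency(born_p, n)
        print(f"n = {n:>9,}: frequency = {freq:.4f}, |deviation| = {abs(freq - born_p):.4f}")
    ```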

  • http://profile.typepad.com/simon112 simon

    Phil: what he claims to be an “outline” of a proof really doesn’t say how he gets the result. It’s only that one paragraph; the following paragraphs introduce the terminology for eq. (1), and they aren’t part of the “outline”.

    He does say:
    “before measurements, identical copies of the observer exist in parallel universes”
    – (which is not at all the conventional way to think of many worlds, but probably would not lead to an incorrect result in this case, although it would in an EPR experiment)
    “a Bayesian probability density … is NOT a relative frequency”
    – (but by repeating the experiment over and over the relative frequency interpretation would come to the same result; Tipler doesn’t seem to realize that you can repeat the whole experiment, not just have repeated observations in one experiment)

    I suspect that Tipler does make the mistake you suggested he might have made, though.

    Anyway, he’s wrong in stating that there would be a difference between many worlds and Copenhagen in this case, and his result in eq(1) is clearly wrong for any interpretation.

  • Abigail

    I do not believe in cryonics as a viable way to continue my life.

    At the moment of death, the cell membranes in my brain begin to break down, and information is lost. The process of freezing exacerbates the brain damage. It is not quite as bad as the ancient Egyptians drawing my brain out through my nose, but it is still – yes, I will say it – impossible to bring the frozen brain back to life.

    Eliezer, when you claim to believe in cryonics, are you making a deliberate error, so that your most star-struck fans (including me) cannot think you incapable of error, and must test your statements for ourselves? Or do you really believe in it?

  • Will Pearson

    A question: is economics about predicting the economy?

  • http://www.thinkgene.com Kevin

    Do you think that poverty and/or extreme poverty will ever be eliminated in a world with scarcity?