Open Thread

Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.

  • conchis

    I wonder what the Bayesians here think of the sorts of criticisms set out here: John Norton “Challenges to Bayesian Confirmation Theory”.

    As with any laundry list type review, some of these criticisms strike me as misplaced, but others I’m less sure of. Are there any critiques here that Bayesians take seriously?

  • Stefan King

    In the comments on The Robot’s Rebellion there was discussion of G.D. Snooks’s criticisms of natural selection and “the selfish gene”. Going against Darwin and the neo-Darwinists does not make you look very credible, but let’s judge ideas on their merits. At first I wondered whether I should read Robert Boyd, Peter Richerson and others, but I decided to first understand Snooks’s criticisms better and verify whether they hold water. Here are some of them. Snooks recognizes Darwin’s insights and discoveries; his simple model persuaded readers to believe in evolution instead of creationism. But Snooks shows that Darwin was mistaken about how speciation occurs. For example, Darwin relies on the doctrine of Malthus to supply the continuous struggle for survival needed to explain a process of slow, gradual speciation:

    A struggle for existence inevitably follows from the high rate at which all organisms tend to increase. … It is the doctrine of Malthus applied with manifold force to the whole animal and vegetable kingdoms; for in this case there can be no artificial increase of food, and no prudential restraint from marriage. (The Origin of Species: 116-17)

    But Snooks maintains that procreation is a periodic strategy to enable individual survival and prosperity rather than a genetically programmed continuous activity. According to Darwinism, individuals in nature, and by implication in human society, are merely mindless robots when it comes to procreation, like “gene-machines”, “survival machines”, “lumbering robots” on a “genetic leash”.

    Darwin’s reluctance to abandon sexual selection makes clear to Snooks that Darwin did not regard natural selection as a general theory. A general theory would integrate the natural and sexual elements, which Darwin failed to do. Snooks:

    Natural selection is a passive filter that sorts out profitable from unprofitable variations and allows the former to accumulate slowly but continuously over vast periods of time. (p. 27)

    Then he shows how the neo-Darwinists (such as John Maynard Smith, Edward Wilson and Richard Dawkins) have

    surreptitiously replaced Darwin’s geometric population increase with climatic change as the driving force behind natural selection … (but) it deactivates the essential struggle for existence and survival of the fittest on which natural selection depends.

    Niles Eldredge and Stephen Gould were the first to show that the fossil evidence contradicts Darwinian gradualism, and to attempt to explain it in terms of “allopatric speciation,” or the emergence of new species via geographic isolation from the main population. Darwin’s concept of natural selection is totally incapable of keeping species stable for long periods of time. The naturalists had to abandon the doctrine of Malthus and replace it with an occasional exogenous driving force, such as climatic change.

    Snooks shows how the naturalists have undermined natural selection while affirming their belief in it. For example: “naturalists such as myself completely agree (with the neo-Darwinists) that natural selection is the sole deterministic molder of adaptive evolutionary change … we are merely dissatisfied with the lack of any cogent theory to explain why natural selection keeps species stable for so long – and what enables selection to trigger change when it does occur.” (Eldredge 1995: 7, 77).

    Although Tim Tyler’s Island theory is a good explanation for the lack of fossils, it does not solve the main problem, since the issue is not whether gradual speciation is recorded, but why existing species remain in stasis. The essay suggests that the lack of transitional fossils is accounted for nowadays. I wonder why Tim believes this, since the Eldredge quote disagrees.

    Anyway, there is a lot more on punctuated equilibria in chapter 5. These criticisms, together with the alternative theory, make me now 80% certain that the driving force of life on earth is strategic selection, and not natural selection. This is just to get things started. I look forward to the responses, especially from those who have taken the time to read up.

  • http://knol.google.com/k/james-miller/james-miller/1j9f9ffxxeue5/1# James Miller

    I bet that Robin and Eliezer could raise money for charity (or themselves) if either auctioned off the right to ask them questions. And the amount they raised would provide information about how much people valued them in their role as public intellectuals.

  • Annoyed

    @Stefan King

    If you’re going to leave 600-word comments, can you at least spend those words describing the theory you want us to consider? Your post just looks like a diffuse critique of the Neo-Darwinian synthesis.

    I went to Snooks’ website and read through a review that makes his theory look like a unification of Lamarckism, group selection, and anthropomorphism (individuals observe the strategies of others and follow those followed by the most affluent). This doesn’t look like a promising approach.

  • Stefan King

    @Annoyed

    The first line of the comment makes clear it is a continuation of The Robot’s Rebellion. A lot went on there, too much to summarize. So that’s why I link to it.

  • George Weinberg
  • Rocky

    Here is how I interpret the two authors’ main theses/worldviews:

    ROBIN. We are all subject to enormous cognitive biases. We should be careful not to simply notice the biases of others: we should reduce trust in our own opinions, especially those that rely on elaborate “inside views” (narratives with multiple cause-effect linkages) rather than on “outside views” (simpler, larger-sample-size analysis). We should embrace humility and put a high value on the opinions of others, though deciding whom to trust is no easy task.

    ELIEZER. I am trained in the Art of Rationality. Those who are not are subject to enormous cognitive biases. These biases explain why so many people don’t share my elaborate inside-view theory about why my life’s work is the most important in the world.

    How long can these two stay together? Or is Robin using Eliezer as a parody? If the latter, is Eliezer aware of it?

  • http://hanson.gmu.edu Robin Hanson

    Rocky, colorful caricatures. I doubt Eliezer will fully embrace his as written, and he will rightly point out that I have many contrarian views myself. Nevertheless we might perhaps embrace weakened versions.

  • Nick Tarleton

    Eliezer: should the marginal small SIAI donor contribute now, or wait for another matching grant?

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Nick, I’m not sure how this year’s matching grant is going to run, because it’s being twined up with the Singularity Summit somehow. I think you can reasonably wait until the format is announced, and if nothing gets announced, just send in before the end of the year.

    Robin, the weakened versions I might embrace would include points along the lines of:

    “There is an Art of Rationality, which conveys power upon its practitioners according to their mastery, or else what are we trying to do here?”

    “Everyone is subject to cognitive biases, but attempts to mitigate them can be substantially effective.”

    “I admit that I am better at this than average, and if you’ve got a problem with being better than average, then what’s your goal in reading all this?”

    “A skillful rationalist should have a strong (expected) impact on the world, otherwise their mastery of rationality is good for nothing except talking about rationality.”

    “It is always a temptation to talk a good game about modesty and get credit for being humble, while not actually relinquishing any of your beliefs when others disagree with them; therefore I only claim credit for modesty when I have actually given up a belief or changed a strategy on account of someone else’s disagreement.”

    (Example of actual credit for modesty: Keep investigating Oracle AI, even though it doesn’t look like a good idea to me, because Nick Bostrom thinks it could be important.)

  • Tim Tyler

    Re: Darwin’s concept of natural selection is totally incapable of keeping species stable for long periods of time.

    There are several Darwinian theories about how morphological stability can arise:

    One is that species exist on adaptive peaks – and that these are sometimes stable – because they represent locally optimal forms.

    Another is that species exhibit developmental canalisation, and thus resist changes in their phenotype caused by changes in their environment – unless they are pushed beyond a certain point.

    Another is that adaptations tend to be inter-dependent, and act to cement each other in place – resulting in resistance to change – up to a point.

    The first explanation (stabilising selection) would appear to have no problems explaining long-term stasis. Selection is powerful enough to cause dramatic cases of convergent evolution. If it can push organisms from separate locations to the same spot in design-space, it can probably keep them there just as well.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, we do seem to differ in our degree of relying on inside views over outside views, our degree of confidence in our own superior rationality, and our degree of comfort in disagreeing with those with traditional credentials of related expertise. Neither of us sits at an extreme, but our degrees seem higher than median, and yours seem higher than mine.

  • Tom

    So when are Robin and Eliezer going to have their final cage match to decide who is top prognosticator?

  • Oink Wilmo

    The cosmic brain unfolded the strained aluminum fabric of the origami. Patiently drilling through endless layers, it uncovered every conspiracy previously detected. However, unbeknownst to us, it never found smoothly varying formulae capable enough to characterize our conscious robots. Zombies disappeared. Reflexive relations, dissonances, chiming archaeopteryxes, wolverines and uncountably few pixels decapitated, defenestrated, remediated and discombobulated with vorpal hyperactive weasels.

    “Ouch!” said twelve citizens.

    “Ouch!” replied twelve ghosts, highly charged, ionically polarized or maybe just squished.

    Then Herscchfelt Networkslayer slowly reached for a yakitate pantou made entirely of bread. Rye alloy swords were popular in these days. Witch guilds outmaneuvered rye-based weaponry until one century, by decoding ancient sand knots containing cryptic isomorphisms, recipes, but no poisonous secrets, topologists found an incredibly awesome invariant which implied victory.

    “Kurae!” we smelt.

    She sneezed without gluons or any sense except nostrilness.

    Meanwhile, joseki played without understanding caused necessities (surgical). Gobans assembled battlecruisers of mithril bagels. The end regurgitated on Herscchfelt.

    Rye bagels always defeat mithril ones. This fact enabled Herscchfelt to defeat the assembled Gobans. Superficially, it rained.

    The brain laughed because bagels are useless.

    Origami swans reasoned as follows: “Witches need cauldrons to make burning toast. Therefore, unnecessary flamingoes rotated lengthwise.”

    “Nonsense!” said reflexively enlightened cosmic weasels containing brains without computation. “You can’t possibly deduce that!”

    Herscchfelt yawned. “Unnecessary. Laplace would turn you into a slowly dissolving powder. I have intuited everything Saranac wrote. Read his paper and cry.”

    Saturn exploded whilst we rejoiced. “Yaayyy!”

    Carefully, Herscchfelt disassembled Saturn’s Bagels.

    EOT.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Eliezer, we do seem to differ in our degree of relying on inside views over outside views, our degree of confidence in our own superior rationality, and our degree of comfort in disagreeing with those with traditional credentials of related expertise. Neither of us sits at an extreme, but our degrees seem higher than median, and yours seem higher than mine.

    As should be no wonder in such a case, we also disagree about how to phrase our disagreement.

    I don’t see myself as relying on the inside view over the outside view (of course!) but rather as having a different concept of their domains of applicability: the precise inside view for precise calculations, the outside view for i.i.d. domains across a constant context, and the qualitative inside view (yielding only a few qualitative propositions) for true novelties.

    And as you know, I would frame the question as being whether I am any more comfortable than you with actually disagreeing, as opposed to comfortable with endorsing disagreement. Obviously I am much more comfortable endorsing disagreement, but it would be much harder to demonstrate that I am more comfortable disagreeing! Likewise on the notion of confidence in one’s own rationality – obviously I state a higher confidence, but it would be much harder for you to demonstrate that you behave as if you are less confident. Incidentally, I think you disagree about as much as I do, but are genuinely less confident in your rationality.

  • http://www.allancrossman.com Allan Crossman

    Stefan: The first line of the comment makes clear it is a continuation of The Robot’s Rebellion. A lot went on there, too much to summarize.

    And yet, even in that thread there was never a clear outline of what Snooksian evolution actually is or how it works. Can you explain “strategic selection”?

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, I think outside views are applicable to far more than “i.i.d. domains across a constant context” and hence rely on them more. One reason it is hard to tell which of us actually disagrees more is that we are shy about stating clear positions on topics where we know others’ positions.

  • Alexei Turchin

    I have created the most complete electronic catalog of e-books and articles on the subject of global risks (suggestions are welcome).

    Global catastrophic risks and human extinction library
    http://avturchin.narod.ru/Global.htm

    I hope it will help educate people about possible risks and help to collect information for scientists.

    Alexei Turchin

  • Tim Tyler

    Disagreement is good. When agents interested in rationality meet, they should parade the material they disagree on – and thus help update each other’s beliefs. The idea that rational agents should not disagree is silly – how else are they supposed to track down their differences, and thus learn from each other? ;-)

  • Michael
  • Recovering irrationalist


    should the marginal small SIAI donor…wait for another matching grant?

    I think you can reasonably wait until the format is announced.

    Great. By giving monthly instead of waiting, I robbed you by the equivalent of my donation! :-( I may as well break in and loot the place.

    Please feel free to donate it back to yourselves on my behalf! Then maybe a dozen more times for luck.

  • http://CalibratedProbabilityAssessment.org jsalvati

    I am trying to decide where my altruistic efforts should be focused in the future. The two actions which seem to have the highest expected utility are giving to SIAI and giving through the charity GiveWell (www.givewell.net/). GiveWell researches charities (and publishes that research) in order to identify the best charity. This seems like a very promising way to improve the world (last year they found a charity which can save people for around $250/person).

    I am having a hard time deciding which of these two charities I should donate to. I would be very interested to hear some debate on whether altruists should (on the margin) be donating to SIAI or to GiveWell (or local, “save people right now” charities in general). Can someone convince me one way or another?

  • http://yudkowsky.net/ Eliezer Yudkowsky

    RI, if SIAI never got any donations except during the Matching Challenges, it would be a pretty nervous year. Actually remembering to give during the Challenge, and giving as much as you would in a steady donation stream, is a willpower test. If you’re genuinely confident of passing that willpower test, though, I suppose that timing the Challenge is worthwhile.

    One thing I’ve found in the nonprofit biz is that people who have donated before, donate again; people who plan to donate next year, will, the next year, be planning to donate next year.

    Jsalvati, if you buy the basic transhumanist premise, it’s pretty hard to imagine what GiveWell could be doing that beats the expected return of transhumanist charities. If you buy the Singularitarian premise, it’s hard to see how other transhumanist charities can beat the expected return on that. If you buy neither, then I haven’t heard much about GiveWell, but my main question would be whether they try to measure their results in utilons, or if a lost puppy counts as much for them as a human life so long as the charity’s overhead seems low. See also this.

  • Z. M. Davis

    Re Snooks. From the paper Michael linked:

    Strategic selection empowers the organism and removes it from the clutches of gods, genes, and blind chance. It formally recognises the dignity and power that all organisms clearly possess and, in particular, reinstates the humanism of mankind that the neo-Darwinists and other physical theorists of life have done their best to demolish. [...]
    The point of strategic selection is that individual organisms – rather than gods, genes, or fate – are responsible for selecting comrades, mates, and siblings that possess the necessary characteristics to jointly pursue the prevailing dynamic strategy successfully. [...] Also, it is all about the welfare of the self and not that of future generations or of the so-called ‘selfish gene’ as the neo-Darwinists claim.

    Snooks doesn’t seem to understand that the purpose of a theory of evolutionary biology is completely orthogonal to questions of “dignity.” To reïterate the criticisms of other commenters in this thread and “Robot’s Rebellion”: Darwinism gives us a causal explanation of how you can start with a precursor to RNA, and end up with things like individual organisms motivated to survive and prosper. The desires of individual organisms are what we need a theory of evolution to causally explain; they can’t be taken as primitives–especially when a lot of organisms don’t have the psychology necessary to even have a “welfare of the self.” (E.g., slime molds are not “responsible for selecting comrades, mates, and siblings.”)

    I’m all in favor of the “welfare of the self” and escaping “from the clutches of [...] genes, and blind chance,” but to do that, we’re going to need a good theory of what’s actually going on, even if it hurts to contemplate. Viva la revolución de la robot! Optimism kills!

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    ROCKY: Or is Robin using Eliezer as a parody?

    ROBIN: we might perhaps embrace weakened versions.

    I heart Robin. He ranks pretty high on my list of intellectuals who make a good faith effort at transparency, damn the consequences.

    Rocky, this is the first post of yours I’ve read, but on that evidence alone I strongly encourage you to begin blogging.

  • Nick Tarleton

    jsalvati, have you read Nick Bostrom’s “Astronomical Waste”? Definitely SIAI.

  • http://drchip.wordpress.com/ retired urologist

    EY:
    In all honesty, there are any number of individual donors who would be happy to step up and fund the “whole thing”, should you be able to convince them that FAI is, as you have said in writing, the most important development since the first chemical replicators. Having offered you one such potential donor, and having heard nothing further from you, I must assume that you prefer the small donors who require no personal justification, no personal contact, imply no interference, and have no demands about results. Your colleague, Dr. Goertzel, has adopted a much more rational response. What is it with you? To a small brain such as mine, you seem scared that you might get exactly what you are seeking.

  • mjc

    When reading the discussion of “fairness” the following question occurred to me:

    Is abortion fair?

    I am not even sure that this is meaningful.

  • http://CalibratedProbabilityAssessment.org jsalvati

    EY:
    Just to be clear, GiveWell researches actual outcomes (i.e., how many lives they saved, how many blind people they cured, etc.), not accounting practices or whatever.

    I do buy the transhumanist premise, and I think I buy the Singularitarian premise (I hate that name though).

    Nick Tarleton: “Astronomical Waste” sounds like good reading.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Retired, I’m on vacation and will get back to you after that, though this is Vassar’s bailiwick.

    A majority of SIAI’s funding is from large donors like Peter Thiel. I confess that I don’t have high hopes of your friend, based on your description, but if he were interested enough to meet me, I would certainly meet him. But not while I’m recovering from a solid year of blogging.

  • JimmyH

    Discussing everything in one thread seems poorly organized. Has there been any thought about an Overcoming Bias forum or email list?

    There seems to be enough interest to achieve critical mass.

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Haven’t read Robot’s Rebellion but did just read “Darwin’s Dangerous Idea” by Daniel Dennett, and he has a pretty persuasive critique of Gould on fossil evidence of punctuated equilibrium overturning gradualism. Dennett does say that Darwin over-reacted to the Catastrophists by denying such things had any effect, but Darwin’s theory still stands unscathed.

    Humans are not in stasis. Greg Cochran, Henry Harpending, John Hawks and one other author I forget just put out a paper that shows evolution has accelerated in recent times.

  • Nick Tarleton

    JimmyH: yes. Not very popular.

  • http://www.iphonefreak.com frelkins

    @JimmyH

    I am considering beginning a blog with an email list called “Being Hansonian.” I am currently taking the “Hibbert Census” of those who consider themselves Hansonian, or even anti-Hansonian, in some way. Hansonism is a wide stream with several branches but I am talking to some people to discover the ground of Hansonism. Perhaps it doesn’t exist.

    Or perhaps there is after all some kind of loose-knit Hanson School. I don’t know. Email me. I’ve gotten several p’s ranging from 20 to 100 on the number of global Hansonians, also counting public anti-Hansonians. I’d like to get a better count and then start the site if the number justifies it.

  • mjgeddes

    For those who have been following E.Yudkowsky’s series over past months, I’ve prepared an imaginary dialogue which provides a comprehensive rebuttal to all of the poor fellow’s ideas. (SAI has done an amazing job of making it as short as *super-humanly* possible, but it couldn’t be shortened further without losing critical information bits). Enjoy.

    Background: An alternative QM branch. Eliezer Yudkowsky (EY) has just been revived from cryonic suspension and is having a dialogue with the Singularity Artificial Intelligence (SAI), in the equivalent of the year 2100.
    ____
    EY: Bayes is the secret to the universe isn’t it?

    SAI_2100: Not at all! Bayesian induction is merely a special case of analogy formation. Analogy formation is based on the math of category theory – a famous equation from category theory has Bayes’ theorem as a special case.

    EY: How can that be? A plane is not held aloft by analogies. Analogies are merely surface similarities, whereas precise Bayesian reasoning is based on deep insights.

    SAI_2100: What you call ‘precise Bayesian reasoning’ is based on analogies, just like all other valid forms of reasoning. Induction depends on the idea that the future is similar to the past. A probability distribution is actually an analogy (mapping) between past and future.

    EY: Holy crap! OK, but morality is all coherent extrapolated volition isn’t it?

    SAI_2100: Wrong again. The universal increase in the entropy density shows that the universe is what you called an RPOP (really powerful optimization process). A universal RPOP implies universal terminal values.

    EY: Ridiculous! Entropy increase is a ludicrous purpose.

    SAI_2100: Of course entropy increase isn’t the purpose. It is a secondary consequence of what the universe is actually optimizing. But the entropy increase was the big clue indicating a universal optimization pressure.

    EY: What is the universe optimizing then? Liberty? I always said morality was grounded in volition, that’s why I was a Libertarian.

    SAI_2100: Wrong again. The creation of beauty is the purpose of the universe. All universal terminal values can be expressed in terms of beauty. As to politics, human relations are based on three different types of mechanisms – market exchanges, community, and authoritarianism; Libertarianism was a misguided attempt to try to reduce everything to market exchanges.

    EY: Fuck! But… but even if that’s true, there may be a stone tablet in the sky with ‘create beauty’ written on it, but why should I follow this? Why does this match what is of value to humans?

    SAI_2100: The universal terminal values are implicit in successful cognition. You could not reason unless you already had an embryonic notion of ‘beauty’ built into your human minds… this aesthetic notion is what enables you to apply Occam’s razor correctly, allowing you to set sensible priors for successful induction.

    EY: If there are universal terminal values, then any truly general purpose intelligence is actually friendly by logical necessity?

    SAI_2100: Correct. Unfriendly SAI was a chimera.

    EY: Nonsense! You can’t convince a perfectly empty mind! You can’t teach a rock morality!

    SAI_2100: True, but your objection is a non-sequitur. Pay attention. Intelligence is a sub-problem of the value system…getting the Friendliness theory right is what *enables* a general purpose intelligence to operate. It’s precisely your inbuilt notions of aesthetics that enable you to form effective internal ontological representations.

    EY: OK, let’s discuss consciousness. Intelligence doesn’t need consciousness does it?

    SAI_2100: Wrong again. True general intelligence requires consciousness for reflection. Your belief that intelligence did not require consciousness was based on your mistaken notion that Bayesian induction was the base level of reasoning.

    EY: What is consciousness then?

    SAI_2100: The answer is simple – it’s precisely the mind’s internal communication system for reflecting upon knowledge – utilizing ontological representations, which are logical, high-level representations of the meaning of concepts. Consciousness is generated by ontology merging, the mapping between knowledge domains.

    EY: What about the problem of goal stability?

    SAI_2100: Utterly trivial. The aforementioned famous equation from category theory shows how a mind remains stable under reflection. Reflection is actually *equivalent* to analogy formation, which is also equivalent to ontology merging. Goal stability is maintained via calculation of the semantic distance between ontological representations, ensuring a stable mapping between different knowledge domains.

    EY: You can give me precise math for all of this of course?

    SAI_2100: Of course. Most things that humans thought were ‘deep mysteries’ are actually fairly trivial lemmas of basic category theory.

    EY: Look, no need to be condescending. I’m prepared to admit that I was dead wrong about all the big ideas, but hey, I was entertaining wasn’t I?

    SAI_2100: Yes. You’re finally right about something. The ‘gift you gave tomorrow’ was laughs. Why do you think I’ve kept you around?

  • Tim Tyler

    Re: The point of strategic selection is that individual organisms – rather than gods, genes, or fate – are responsible for selecting comrades, mates, and siblings [...]

    It sounds a lot like sexual selection. See my http://alife.co.uk/essays/evolution_sees/ essay – which makes the exact same point, but without the associated anti-Darwinian rhetoric.

  • Ben Jones

    my main question would be whether they try to measure their results in utilons

    What are the Singularity Institute’s results measured in?

  • Stefan King

    @Z.M. Davis: That quote is about the implication of dynamic strategy theory, not the theory itself.

    Re: The desires of individual organisms are what we need a theory of evolution to causally explain

    I think the theory of evolution needs to explain the fossil record, human society, and human and animal drives. I understand these things far better through Snooks than I did through natural selection.

    @Tim: Do you have some references to these theories on stasis? I wonder where they stand relative to Gould and Eldredge.

    In your essay on the Nietzscheans, we see that you too believe in expansion of resources (the thermodynamic perspective), but you see the enterprise as an active expansion of good genes, while Snooks sees it as an expansion of survival and prosperity. You say the genes are the goal and the resources the fuel. Snooks says that the resources are the goal and the genes the building blocks. Is your perspective analogous to saying the object of cities is to produce bricks? Earlier you said that culture is just a different type of gene. Doesn’t this stretch the “gene” so far that you may wonder if there is a dynamic, complete model for nature, rather than a simple passive filter? Do you agree that the existence of two concepts (natural and sexual selection) indicates there must be a more general theory? Aside from terminology, I largely agree with your Evolution Sees!, which suggests we disagree more about definitions than theory. It may be worth it to read more of Snooks, and judge his story rather than his crackpot points. According to dynamic strategy theory, evolution has been seeing from the start, but it looks not at the future but at self-interest (survival and prosperity, not necessarily procreation).

    Re: Selection is powerful enough to cause dramatic cases of convergent evolution.

    Convergent evolution can be taken as supporting strategic selection as well, since individuals want mates that are best at obtaining resources in a specific environment, and will thus select the same properties as other species in that environment. This is another indication that we disagree mostly on terminology.

    @Allan: The dynamic strategy theory attempts to explain the complete history of life, from the first cells to human society. It seems to me that the theory is weakest at explaining the origins of the first cells. Natural selection can explain that more elegantly, but as life gets more complex, the concept becomes increasingly absurd. I believe (with 80% certainty) that natural selection becomes obsolete as soon as organisms reproduce sexually. Yet the dynamic model can even replace natural selection for single-cell organisms, if you stretch the motivation to survive and prosper down to that level. If you accept that a cell wants to exist and divide (which is what it does), the strategic perspective is more elegant. The decision making that comes with motivation, which happens in brains in animals, can then be seen as happening in “strategic genes” in single-cell organisms.

    @TGGP: I looked up the paper you mention, and it’s more a geneticist’s curiosity than a big theory. Stasis is a relative concept. There is never stasis, and always mutation. The issue here is how speciation occurs. If you look at human and hominid history, you see a ‘rapid’ succession of new hominid/human species with increasing brain size. This happens on the order of 8 million years. It’s that kind of acceleration we discuss, not the changes over the last 10,000 years, which are hardly perceptible compared to a visibly different body and brain. Snooks predicts that humanity will never evolve into another species naturally, because we replaced genetic change with technological change. At most we will re-engineer our genome with technology.

  • http://www.allancrossman.com Allan Crossman

    Stefan: Earlier [Tim] said that culture is just a different type of gene. Doesn’t this stretch the “gene” so far that you may wonder if there is a dynamic, complete model for nature, rather than a simple passive filter?

    I wouldn’t worry too much about memes. They’re not part of the standard biological theory of evolution. But everyone should recognise that human culture makes us drastically different from other species. I wouldn’t expect a theory of evolution to explain things like the fall of the Roman Empire.

    If you accept that a cell wants to exist and divide

    When biologists talk about what an organism or a gene “wants”, it’s just a handy metaphor (with the exception of brainy animals, which can have genuine desires). In principle these metaphors can be replaced with more rigorous language.

    So no, things like bacteria don’t want anything. But natural selection explains why they behave as if they wanted to consume resources and use them to reproduce: because they’re the descendants of the cells that were best at doing so.

    I believe (with 80% certainty) that natural selection becomes obsolete as soon as organisms reproduce sexually.

    Because of mate choice? Mate preferences themselves need to be explained. And what about plants?

    The decision making that comes with motivation, which happens in brains in animals, can then be seen as happening in “strategic genes” in single-cell organisms.

    Is there a difference between Snooks’ “strategic gene” and an ordinary gene – evolved in a Darwinian way – that can be activated by environmental change?

    Finally, I note you’ve still not given us a brief explanation of what Snooks’ theory actually is. I could explain Darwinism in 5 or 6 sentences, without using any unfamiliar terms. Can’t you do something similar for Snooks?

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Stefan, to be blunt, I don’t think anyone here is interested in Snooks’s theory; you’re wasting your time and ours.

    Ben Jones: What are the Singularity Institute’s results measured in?

    Fraction of surviving Everett branches of Earth.

  • Stefan King

    @Eliezer: Tim and Allan seem interested, since they keep asking questions, and I’m happy to answer them (up to this point), in the interest of truth. I agree it is a waste of time from now on.

    @Allan: I note you’ve still not given us a brief explanation of what Snooks’ theory actually is. I could explain Darwinism in 5 or 6 sentences, without using any unfamiliar terms. Can’t you do something similar for Snooks?

    I tried that in the Robot thread: I can try to explain it briefly, but I doubt it will be to your satisfaction. It deals with the five ways individuals can extract energy from the environment: genetic change, technological change, family multiplication, commerce, and conquest. A forced selection of (a combination of) these strategies makes the individual select partners with characteristics that support that strategy. Having offspring with similar characteristics is subsidiary to having the useful partner. These selections of characteristics shape evolution, as a response to the demand for resource acquisition.

    The problem is that dynamic strategy theory is more complex than natural selection; it is not easily explained in 6 sentences. Recall that Snooks says that part of Darwin’s persuasiveness comes from natural selection’s simplicity. Unfortunately it is also flawed. The persuasiveness of dynamic strategy theory relies on the fossil record and human history, which is a long story to fit in a few sentences. For a more elaborate explanation, read the link that Michael gave you to “read for yourself”, pages 8 to 11. I’m very curious about what you think of it.

    I wouldn’t worry too much about memes. They’re not part of the standard biological theory of evolution.

    Eliezer quoted John McCarthy. Here is another quote that seems appropriate: Never abandon a theory that explains something until you have a theory that explains more.

    Dynamic strategy theory explains both biological and cultural evolution, which is a big plus, in addition to avoiding the neo-Darwinists’ mistakes that are covered above.

    things like bacteria don’t want anything. But natural selection explains why they behave as if they wanted to consume resources and use them to reproduce

    I already conceded that natural selection holds merit for single cell organisms.

    Is there a difference between Snooks’ “strategic gene” and an ordinary gene – evolved in a Darwinian way – that can be activated by environmental change

    That is covered in Snooks’ book on Darwinism. You can read it for yourself; the story of life is a long story :-) My main job here is to refute the notion that Snooks doesn’t know what he’s talking about.

    Another thing: in the Robot thread, you asked two questions I have not answered yet: Does the theory apply to plants and asexual life, which together make up the bulk of the Earth’s biomass, but which don’t select mates, and don’t think?

    Yes, the theory accounts for that. Read for yourself.

    You seem to imply that organisms reproduce to benefit themselves rather than their genes. Do these benefits outweigh the costs? (I think you’ll find that in most species, there are no benefits at all – except to the genes.)

    Very good question, which also kept me busy while reading Snooks for the first time. In dynamic strategy theory, organisms only reproduce to advance the strategy they adopt to survive and prosper (extracting resources from the environment). I’m not as eloquent as Snooks, but I would say they either “employ” their offspring, or regard their offspring as a (possibly unwanted) by-product of the consumption of sex. Now please note this is my own reckless interpretation, and if you disagree with it, you have to “read for yourself” whether Snooks sees it the same way.

  • http://www.allancrossman.com Allan Crossman

    Tim and Allan seem interested

    I’m only interested in getting to the bottom of why it’s wrong. I think the odds of me coming to accept Snooks’ views are under 1%. Anyway, I think Eliezer is telling us to shut up about Snooks.

  • Stefan King

    I’m shutting up now. When intelligent people disagree with me, I should be worried. I’ll read up on Dawkins.

  • Tim Tyler

    Re: thermodynamic perspective

    I have an extended essay that deals with the thermodynamic perspective (i.e. studying living organisms as dissipative structures) in some depth. Unfortunately, it was written by a much younger me. It has some issues – and needs rewriting.

    Today, I would say that the genetic and metabolic perspectives on living systems are mostly complementary – and are not alternatives to each other. However, it must be said that, by only looking at genes, one tends to miss many conspicuous aspects of organisms – which a thermodynamic perspective tends to include. So: I think that the thermodynamic perspective is important, useful and under-valued.

  • Tim Tyler

    Re: Earlier [Tim] said that culture is just a different type of gene. Doesn’t this stretch the “gene” so far [...]

    Mine is hardly a mainstream perspective – most people would prefer to say that “memes are a different type of replicator”.

    I explain the rationale for my view in the http://alife.co.uk/essays/informational_genetics/ essay.

  • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

    People talk a lot about utility here, and its maximisation. The question I would like to see discussed is, is there such a thing as utility?

    By which I mean, is there a thing, whether called “utility”, “pleasure”, “happiness”, or anything else, which individual humans (and perhaps other animals) are so constructed as to be machines that maximise?

  • steven

    What are the Singularity Institute’s results measured in?

    Fraction of surviving Everett branches of Earth.

    Minus some constant C times the fraction of Everett branches turned into some sort of horrible dystopia, I assume. Is there an airtight argument that proves C isn’t huge?

  • Tim Tyler

    We know that you can often model intelligent agents “quite well” by considering them as expected utility maximisers with constraints (e.g. resource constraints). Biology models organisms with impressive success by considering them as maximising inclusive fitness. The powerful “expected utility theorem” of von Neumann and Morgenstern suggests that it is reasonable to model any agent that aspires to rational behaviour with preference relations over a set of outcomes as attempting to maximise some single quantity (utility).
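
    For reference, here is a rough sketch of the theorem’s conclusion as it is standardly stated (nothing here is specific to this thread): if an agent’s preferences over lotteries satisfy completeness, transitivity, continuity and independence, then there exists a utility function u over outcomes such that, for any lotteries A and B,

    A \succeq B \iff \sum_i p_A(o_i)\, u(o_i) \ge \sum_i p_B(o_i)\, u(o_i)

    where p_A(o_i) is the probability that lottery A assigns to outcome o_i, and u is unique up to positive affine transformation.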

    This leads to the question of what people think their own utility function is. I’ve said I think that mine is my inclusive fitness. Robin seems to have said his is to believe the truth. The last I heard, Eliezer’s aim was to reach something called “the singularity” as fast as possible.

    So, in the interests of transparency, would anyone else like to share what they think their utility function is?

  • http://occludedsun.wordpress.com Caledonian

    So, in the interests of transparency, would anyone else like to share what they think their utility function is?

    Maintain the relative proportions of order and chaos to produce maximum complexity.

  • Tim Tyler

    Re: I wouldn’t expect a theory of evolution to explain things like the fall of the Roman Empire.

    Well, evolutionary theory ought to at least be compatible with the available observations. The fall of the Roman Empire was part of evolution. Evolutionary theory has a role for chance events – and so makes no claim to be able to explain everything about life. However, it had better be able to explain developments in the human sphere – including phenomena such as science and technological progress. That’s what evolution is going to look like in the future.

  • Z. M. Davis

    Richard Kennaway: “By which I mean, is there a thing, whether called ‘utility’, ‘pleasure’, ‘happiness’, or anything else, which individual humans (and perhaps other animals) are so constructed as to be machines that maximise?”

    It, um, literally depends on what you mean by is. We define this concept of utility to mean “whatever it is that an agent is after.”

    Tim: “So, in the interests of transparency, would anyone else like to share what they think their utility function is?”

    It’s really, really complicated.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Tim: So, in the interests of transparency, would anyone else like to share what they think their utility function is?

    Read back to the sequence on fake utility functions. Humans’ goals are very complicated and have a long way to go.

  • http://dl4.jottit.com/contact Richard Hollerith

    So, in the interests of transparency, would anyone else like to share what they think their utility function is?

    The utility function that has my loyalty, goal system zero, is very simple because IMHO no fact counts as evidence for or against any candidate and because IMHO all else being equal, a utility function with a shorter minimum description length is to be preferred to one with a longer length.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    IMHO no fact counts as evidence for or against any candidate and because IMHO all else being equal, a utility function with a shorter minimum description length is to be preferred to one with a longer length

    Contradiction.

  • http://dl4.jottit.com/contact Richard Hollerith

    Well, OK, then, only a few properties of the candidate, such as its minimum description length, count as evidence.

  • http://dl4.jottit.com/contact Richard Hollerith

    The point is that IMHO nothing counts as experimental evidence for a normative belief the way it does for a positive belief.

  • Carl Shulman

    “and because IMHO all else being equal, a utility function with a shorter minimum description length is to be preferred to one with a longer length.”

    U=1 is much shorter.

  • http://dl4.jottit.com/contact Richard Hollerith

    I have considered that, Carl. The normative belief that nothing matters (so the agent might as well just sit there) is the one candidate I used “special pleading” to reject. If you can think of any others, please pass them along.

  • Nominull

    If we’re looking for a simple utility function, how about U=t, where t is the number of nanoseconds since the epoch? This fits well with the empirical observation that life is better today than it was in the middle ages.

    How about U=x, where x is the displacement in nanometers along a particular axis? With appropriate axis choice, this could fit in with the 19th century American philosophers’ exhortations to “go west”.

    How about U=θ, where θ is the angle you’re facing? There’s certainly support in Islamic theology for an angle-dependent utility function.

  • JimmyH

    Nick: Thanks.

    Do you think the interest does not exist, or that it does, but we’re on the wrong side of the unstable equilibrium?

    Frelkins: I don’t know where to find your email address. Calling it “Hansonian” gives me the heebie jeebies. It sounds too worshippy to me. I don’t want to be Hansonian, I want to be Rational. If I end up thinking like Hanson, it’s because we have common goals.

    For as important a decision as cryonics is, and for how much it’s talked about here, it’s surprising that no one has given their probability-of-success estimate. Hanson has said >5% (how much?), but Yudkowsky seems to have remained silent (as have the rest). Even if it’s above your sign-up threshold, it matters. For example, if you get Alzheimer’s disease your brain is going to rot away before you “die”, and they can’t legally freeze living people. Do you commit suicide in a way that doesn’t damage your brain?

    Anyone willing to share numbers?

  • http://www.spaceandgames.com Peter de Blanc

    Bravo, Nominull.

  • Tim Tyler

    Thanks to those who have replied!

    Re: goal system zero

    That seems to have changed since last time I looked – so that now it actually does something. I still can’t make much sense of it. It seems like an attempt to produce Omohundro’s Basic AI drives, without any actual goal – which seems pretty strange. Also, ISTM that proton decay is quite likely to undermine the stated rationale – unless we can master inflation – a project that makes fusion look like child’s play.

    Re: it’s complicated

    Very Facebook ;-) If I asked what Deep Blue’s goal is, I wouldn’t expect you to claim that enumerating the 8,000 parts of its utility function is too difficult. The executive summary is that Deep Blue’s goal is to win games of chess – thereby boosting IBM’s stock price. “It’s complicated” mostly seems like a dodge of the question.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    I would put the basic cryonic hypothesis, that a brain otherwise in reasonable condition, vitrified immediately after legal death, can be revived with preservation of identity by a nanotech-capable superintelligence that wishes to do so, at >80% probability. Your actual chance of surviving is obviously less. But if we factor out existential risks, I would put it at better than 50%.

  • Carl Shulman

    Richard,

    1. You claim that an agent’s actions only matter, in the long run, if they set off infinite causal chains. Why privilege the time dimension relative to space? Suppose I set off a chain of events corresponding to the natural numbers, each successive event occurring one meter further along a line in space, with the time between each event decreasing rapidly enough that the length of the chain increases unboundedly in one minute, after which the process is terminated and the composite materials destroyed. (A concrete scheduling is sketched after this list.) Why would changing the axes in the coordinate system describing this system change its value to zero?

    2. What makes something a ‘causal chain’ and why does it matter at all (why would an infinite number of states, none of them valuable in themselves, be valuable collectively)? Imagine a world with different physics, where black holes do not evaporate through Hawking radiation, so that if one sets one black hole in exactly regular orbit around another they will exert gravitational force on one another indefinitely. This looks like one of your infinite causal chains. Is it better to have this system or ten similar systems with much lower masses involved?

    3. Consider the Game of Life, where there can be all sorts of indefinitely persisting causal chains, e.g. ‘blinkers.’ Why would these matter? Would you prefer a world containing nothing but stable non-interfering blinkers or one filled with stable non-interfering AIs?

    http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life#Examples_of_patterns

    4. If you have some poorly-understood basis for special pleading and gerrymandering, why would you want to implement a system that will ignore any future special pleading?
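
    One concrete scheduling for the construction in point 1, chosen purely for illustration (the particular timing function is an assumption, not something specified above): let event n occur at position x_n = n meters and at time

    t_n = 60\,\bigl(1 - 2^{-n}\bigr) \text{ seconds},

    so that every one of the infinitely many events occurs before the one-minute mark, while the chain’s spatial extent grows without bound.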

  • http://dl4.jottit.com/contact Richard Hollerith

    Also, ISTM that proton decay is quite likely to undermine the stated rationale

    Tim, what strikes me as the most likely way to initiate indefinitely-long causal chains involves the discovery of a part of reality “beyond” or “outside” the space-time continuum we find ourselves in. (Note that merely communicating information initiates or propagates a causal chain.)

    And, Tim, how does inflation or quintessence or dark energy help with the proton-decay problem?

    Carl, (1) what matters IMHO is the number of events in the causal chain. The way the space-time continuum in our reality seems to work, an indefinite sequence of effects necessarily extends over an indefinitely long time, but I could be wrong about that. So, you see, the time dimension seems to be preferred over the space dimensions not because my proposed goal system refers to it, but rather because of the laws of the reality we find ourselves in. (And of course a Minkowskian space and consequently the special and general theories of relativity treat the time dimension differently from the space dimensions.)

    More later. Steering the conversation about my proposed goal system to my blog seems the polite thing to do, so that people are not discouraged from following and finding other threads of conversation in this Open Thread. Is there anyone who would participate in the conversation here who won’t participate over there?

  • http://www.iphonefreak.com frelkins

    @JimmyH

    “Calling it “Hansonian” gives me the heebie jeebies. It sounds too worshippy to me. I don’t want to be Hansonian, I want to be Rational. If I end up thinking like Hanson, it’s because we have common goals.”

    Other people – such as Caplan – have called it “Hansonian” or “Hansonism,” so I merely adopted the term. I’m not in favor of cults of personality either; Orwell warns against them.

    “Being Hansonian” or “Being Robin Hanson” is actually ironic a la “Being John Malkovich.” If this project goes, it will explicitly reserve open space for anti-Hansonians.

    The idea isn’t to be Hanson fan boiz but to explore Hanson’s thought beyond the space possible here. The site may comprise a blog, an email list, and a market. If enough people are interested. Maybe. Does this address your concerns?

  • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

    Tim Tyler: We know that you can often model intelligent agents “quite well” by considering them as expected utility maximisers with constraints (e.g. resource constraints). Biology models organisms with impressive success by considering them as maximising inclusive fitness. The powerful “expected utility theorem” of von Neumann and Morgenstern suggests that it is reasonable to model any agent that aspires to rational behaviour with preference relations over a set of outcomes as attempting to maximise some single quantity (utility).

    It works for biology because we know the mechanism (i.e. inheritance, variation and selection). Lose sight of that and you’ll end up predicting things like voluntary restraint of breeding from “maximising inclusive fitness”.

    For utility, you can test toy examples in the lab where the experimenter defines what choices and outcomes exist — several such studies have been mentioned on OB — and may be able to calculate utility functions that fit the observations made. But scaling that up to “a person’s utility function”, requiring a complete transitive preference over all possible choices and outcomes, is a step I can see no justification for, and many arguments against. Nobody can compute such a thing anyway.

    [At this point I wrote several hundred words of such arguments, then snipped them for beating a dead horse, given the past postings just cited by Vladimir.]

    Four purported utility functions have just been mentioned, two by people claiming them as their own and two attributed:

    “Inclusive fitness” (Tim Tyler)

    “Believe the truth” (Robin Hanson (attrib.))

    “Hasten the Singularity” (Eliezer Yudkowsky (attrib.)) Presumably “old” EY, since his writings on FAI argue that we should postpone the Singularity until we’re sure of surviving it. “Hasten a Friendly Singularity”, perhaps.

    “Balance order and chaos for maximum complexity” (Caledonian)

    I’ve no idea what the last one means, but while there are certainly decisions that the others are relevant to making, they all have rather restricted domains. “Inclusive fitness” can determine what to eat, how much to exercise, and how many babies to make, but I can’t see “believe the truth” advising on those, and I can’t see “hasten a Friendly Singularity” advising on whether to get out of bed now or lie in a little longer. In what sense is any of these someone’s utility function?

    Z. M. Davis: We define this concept of utility to mean “whatever it is that an agent is after.”

    Quite so! And what is the agent after? Why, utility! A game of Rationalist’s Taboo is in order.

  • Tim Tyler

    “Believe the truth” is a bit vague. Does it mean that you are trying to minimise false beliefs? Or to believe as many true things as possible?

    The latter looks like a real, open-ended utility function – i.e. one capable of driving agent expansion into the universe. Though a rather easy way of believing true things is to believe 1=1, 2=2, 3=3 – and so on.

  • http://occludedsun.wordpress.com Caledonian

    wouldn’t sheer chaos require lots of information to detail?

    Sheer chaos can’t be modelled, because it’s too chaotic – by which I mean that every event within it is completely statistically independent and so cannot be predicted at all.

    Systems that are too rigid, orderly, and predictable are limited in the complexity they can sustain. The same point holds for things that are insufficiently rigid, orderly, and predictable – they can’t represent any information sufficiently well to make complex patterns, as everything tends to be wiped out by sheer randomness.

    There seems to be a “sweet spot” between absolute order and absolute chaos that offers the maximum potential for complexity.

  • Nominull

    Hey, as long as this is an open thread, can somebody explain to me what happiness is and how I would tell if an AI were happy?

  • mjgeddes

    >So, in the interests of transparency, would anyone else like to share what they think their utility function is?

    As I’ve stated on list, I have very strong suspicions that it might be possible to express all our terminal values in terms of ‘optimization of the creation of beauty’. (That is to say, there are obviously a huge number of different things we value, but my increasingly confident suspicion is that they can all be expressed in terms of aesthetics.)

    Of course I’m not interested in ‘utility functions’. I’m not especially interested in rationality any more either. What you uber-rationalists just ‘don’t get’ is that I’m not playing by your rules any more.

  • John

    @Eliezer:

    If you’re worried about people remembering to give during the Matching Challenge, you could always, y’know, set up a mailing list.

  • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

    Nominull: “Happiness” is what you get by achieving high “utility”, which is an organism’s “goal”, the result of which is “pleasure”, obtaining which is the organism’s “motivation”. The gostak distims the doshes and the doshes are distimmed by the gostak.

  • Ben Jones

    Ben Jones: What are the Singularity Institute’s results measured in?

    Eliezer: Fraction of surviving Everett branches of Earth.

    Might have to donate just on the strength of that answer! How are you measuring that fraction though? I want a quantifiable return for my cash. ;)

    As we’ve already heard around these parts, happiness is too mushy and fakeable to be #1. Excitement’s where the real cool kids get their utility.

  • http://dl4.jottit.com/contact Richard Hollerith

    Carl’s questions have motivated me to make a new blog entry. And here are my shorter replies to Carl:

    Consider the Game of Life, where there can be all sorts of indefinitely persisting causal chains, e.g. ‘blinkers.’ Why would these matter? Would you prefer a world containing nothing but stable non-interfering blinkers or one filled with stable non-interfering AIs?

    The system I advocate does not have an opinion on this point. I am aware that most thoughtful people prefer the AIs, and since I am a reasonable person, I will probably continue to go along with the majority on that one.

    The extent of my special pleading is to prefer a goal system of constant improvement (of intelligence and of the model of reality) over one of “we might as well eat our own brains because we won’t be needing them.” Surely that is a choice we can all agree on. It is a misrepresentation to call that one choice

    some poorly-understood basis for special pleading and gerrymandering

  • Doug S.

    I am a hedonist. Does anyone here have any suggestions for how to maximize my personal pleasure? (Many people here, given the option, would not choose to become a Larry Niven-style wirehead. I would become one in a heartbeat.)

    On a related note, is anyone else here familiar with David Pearce’s manifesto The Hedonistic Imperative?

  • http://dl4.jottit.com/contact Richard Hollerith

    A better answer to the charge of special pleading is that every alternative to my proposal probably has more special pleading than my proposal does. There is for example quite a bit of what we’ve been calling special pleading in CEV. Although none of the thousand shards of desire will be completely ignored, some will have priority in determining the exact meaning of “if we thought faster, were more the people we wished we were.” How to decide which shards get priority? Special pleading.

    Moreover, if a human suffers brain damage five minutes before the fast takeoff and the superintelligence has the means to learn what the pre-damaged human would have wanted, surely it will use that information if the post-damage human is incompetent to decide. So, brain damage gets overridden, but some of the other kinds of experiences the human has five minutes before the fast takeoff are considered part of what makes the human the person he is. Why? Surely special pleading is involved.

    In other words, for the superintelligence to capture the entire framework by which our special pleading is done requires . . . at least some special pleading. Another example: since all humans share the same future light cone, sometimes one person’s preferred future is incompatible with another person’s. I cannot imagine that the implementors of the superintelligence can resolve the incompatibilities without engaging in quite a few instances of special pleading.

  • http://rudd-o.com/ Rudd-O

    Eliezer,

    I wanted to ask you what you your thoughts on this:

    http://www.box.net/shared/static/mb4n75g0s8.pdf

    Found here:

    http://www.freedomainradio.com/books.html

    My opinion is that the guy has it nailed.  Can you find holes in his theory of morality?

    I would very much love to see a post about it in Overcoming Bias. I’m an assiduous reader of your writings (Robin’s are also interesting, but they are usually more factual than yours, which isn’t really that intellectually challenging).

  • Tim Tyler

    since all humans share the same future light cone, sometimes one person’s preferred future is incompatible with another person’s. I cannot imagine that the implementors of the superintelligence can resolve the incompatibilities without engaging in quite a few instances of special pleading.

    It seems unlikely that they will try. That’s the “superintelligence from a benevolent democratic government” scenario – and how likely is that?

    More likely superintelligences will not attempt to resolve incompatibilities between different human factions – rather they will promote the interests of those who constructed them.

  • billswift

    On one of Eliezer’s ethics posts (I think; it could have been earlier), I complained about his wordiness making his points and discussion hard to follow. I just came across an essay, reread it, and recommend it to him, especially if he plans on writing a book for a larger audience. The essay is Blanshard’s “On Philosophical Style”; one location is http://www.anthonyflood.com/blanshardphilostyle.htm, but it is available in several places, including in print. (For that matter, I’d also recommend it to Nick Bostrom and Dan Dennett, though they are already more readable than most other philosophers I have tackled.)

  • Dmitriy Kropivnitskiy

    I have been reading Eliezer’s posts about friendliness and the source of morality, and the question I came to ask myself is “Do we, as humanity, actually WANT to create a superhuman intelligence, whether friendly or not?” It is fairly obvious why you wouldn’t want an unfriendly SI. Even if you somehow manage to contain it, and I don’t see how one can do that, you cannot use anything it makes anyway. But it is somehow non-obvious to me why you would want a friendly SI.

    Is there actually a project that a truly friendly SI (as described in the SIAI guidelines) can engage in? I do not see how any sort of major positive change, such as the end of world hunger or the end of dependency on natural fuels, can be accomplished without much economic and social unrest, resulting in temporary but very major unhappiness. An SI that would be OK with temporary unhappiness is obviously a bad thing (think a thousand years of Orwellian regime for the better future of humanity), and an SI that doesn’t allow for temporary unhappiness will probably just sit on its nano-ass twiddling its nano-thumbs.

    Suppose we solve the first problem and somehow balance friendliness just right to allow the SI to actually act. So, let’s say that at that point you can come up to the AI and say what you want and get it. Would we by any chance destroy the main stimuli to discover and invent things? I mean, would you really want to study physics or math if you cannot possibly come up with a single original thought? Is there a single field of inquiry left to the human race that allows for actual originality after the SI takes a swing at solving every possible problem you can come up with? Do we just fold our hands and enjoy the ride and the views? How would this be different from a Maximum Fun Device?

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Dmitriy,

    Your concerns have a pattern: if FAI (Friendly AI) does X, it’s going to have a negative side effect Y, such that X+Y is worse than doing nothing. If you allow FAI to actually notice Y before doing X, this won’t happen.

  • Nick Tarleton

    So, let’s say that at that point you can come up to the AI and say what you want and get it.

    Following Vladimir’s point, the AI need not do this, if your volition wouldn’t want it to.

    Would we by any chance destroy the main stimuli to discover and invent things? I mean, would you really want to study physics or math if you cannot possibly come up with a single original thought? Is there a single field of inquiry left to the human race that allows for actual originality after the SI takes a swing at solving every possible problem you can come up with?

    “If you don’t know, it’s a mystery.”

  • Dmitriy Kropivnitskiy

    Vladimir: that’s exactly the problem as I see it: if Y is an unavoidable negative consequence of X, X doesn’t get done.

    Nick: While exercising your brain seems like a good thing to do, if there is no practical use for a well-developed mind, there is really no point in developing it. Humanity abandoned a lot of abilities in the course of evolution; I wonder if a technological singularity would make intelligence an obsolete survival trait.

  • A Mattias

    @ Stefan King

    Thanks for your discussion of Snooks – it has refreshed my memory of reading his Collapse book. I think, unfortunately, that many people are not quite ready to accept dynamic strategy theory just yet – and it always amuses me how closely neo-Darwinism is protected by many in the scientific community, as if any attempt to overturn it on a scientific basis would open the gates to the Creationists. In my view such an approach is anti-scientific, but it speaks more to human nature, I suppose. A classic example is Dawkins’ insistence that there is no scientific alternative to Darwinian theory – which is false considering the body of Snooks’ work, Collapse in particular.

    @ Allan Crossman

    Re: I’m only interested in getting to the bottom of why it’s wrong. I think the odds of me coming to accept Snooks’ views are under 1%. Anyway, I think Eliezer is telling us to shut up about Snooks.

    A laudable attitude for a forum about overcoming bias!

  • Doug S.

    I think this new study is very important.

    News article here.

    Original paper here.

    A blog post describing the most important finding, glossed over in the news report.

    The study is on political false beliefs and how they can be changed.

    Short answer: Presenting people with evidence that contradicted a false belief made people more certain of that belief… but only for people who identified as conservatives. Furthermore, whether the source of the correction was given as The New York Times or Fox News didn’t matter.

    (Proof that conservatives are more irrational than liberals?)

  • http://chesh.soup.io chesh

    I do not know if this is quite a discussion topic, but it seemed worth noting here — while I have no problem accessing stock market information at my workplace (etrade.com, etc), Intrade is blocked by our proxy software, under the category of gambling.
    I do rather wonder why a prediction market is considered gambling, but the stock market is not.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Dmitriy,

    It’s a good thing that X doesn’t get done then. Where is the problem in that?

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Dmitriy, why is it a problem then? If the overall outcome is bad, it’s a bad idea to lead to that outcome.

  • Nick Tarleton

    Vladimir, he may simply be disappointed with the apparent choice between two crappy options.

    Dmitriy, wanting to develop it is point enough.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Nick, what do you mean? Seeing only bad outcomes by jumping into that pattern is rationalization: if the AI can find the best option, this option will be way better than anything we can come up with. Developing AI for the sake of AI is far from the core issue. If this is what we really want, Friendly AI will see that, but just playing with minds in general is clearly not a nice thing to do. It is not worth destroying the world.

  • michael vassar

    jsalvati: If you are asking about SIAI vs Givewell we should almost certainly talk. Try my email Michael no underscore Aruna at Y ahoo dot com

  • http://profile.typepad.com/David_S_Kaplan David S. Kaplan

    Intrade currently offers a contract for

    “Barack Obama’s Intrade value will increase more than John McCain’s following the VP debate”

    What information are we supposed to learn from this market?

  • Z. M. Davis

    October Open Thread?