Open Thread

This is our monthly place to discuss relevant topics that have not appeared in recent posts.

  • wophugus

    Which historical figure is most likely to have been a time traveler?

    • The author of De Rebus Bellicis, of course. De Rebus Bellicis was a fourth-century treatise on war with ideas on weapons and tactics that seem like ordinary common sense today (e.g., paddle-wheel ships) but don’t fit in late antiquity at all.

    • fructose

      P.T. Barnum, definitely.

  • nyc561

    Re: wophugus, a few names come to mind – Jesus Christ, Leonardo Da Vinci, Nostradamus.

  • fructose

    Would emulations eventually hand-code an AGI? It seems likely to me that they would, if it was possible. It also seems likely that emulations would be able to do so very quickly, since they might run thousands of times faster than human brains, could work longer hours per day, and could be copied to work on multiple parts of the problem at once.

    If this is right, then isn’t hand-coded AI still the major concern even if emulations come first?

    • What benefit would they get from hand-coded AGI if there are already emulations running on Moore’s Law?

      • Aron

        I can’t believe that’s a serious question. The human mind is specialized to a narrow set of tasks, and simply not efficient or capable of handling problems outside of that set of tasks. Can you contemplate the geometry of a 12 dimensional cube? Can you do your job under extreme amounts of pressure? Can you silence your fear of death? Do you remember where your keys are?

      • fructose

        TGGP: The big advantage is that emulations would have a hard time self-modifying because they wouldn’t have comprehensible source code, since they are made without a fundamental understanding of the mechanism of general intelligence.

        Any hand-coded AGI would likely be vastly less complex in the beginning than a whole brain copied down to the molecular level. My understanding is that both Hanson and Yudkowsky believe that hand-coded AI is more likely to recursively self-improve than whole-brain emulations.

    • How does any entity tell if the entity it is generating via writing code more intelligent than it is and sane and not more intelligent and insane.

      I think this is an intractable problem of AGI, analogous to the halting problem.

      If you can understand the entity, it isn’t more intelligent than you are; if you can’t understand it, you can’t tell whether it is sane or insane.

      • loqi

        How does any entity tell if the entity it is generating via writing code more intelligent than it is and sane and not more intelligent and insane.

        Good question, even if oddly punctuated as a statement.

        Wrt the halting problem, a very important difference is that we can formally prove the halting problem’s unsolvability.

        Here’s another high-level analogy. You can hand me a theorem (cf AGI) and a machine-checkable proof, and I can verify the theorem’s validity (cf sanity) without understanding the theorem’s content (cf internal mechanisms).

        Do you have a more technical rationale that you’d be willing to share?
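        loqi’s certificate analogy can be made concrete with a toy sketch (a hypothetical example, not from the thread): a checker can accept or reject a claimed factorization without any insight into how the factors were found, just as a proof checker can verify a theorem without grasping its content.

```python
# Toy "proof certificate" check: verifying is far easier than finding.
# Given a number and a claimed factorization (the "certificate"), the
# checker confirms the claim using nothing beyond multiplication --
# it needs no understanding of how the factors were discovered.

def verify_factorization(n: int, factors: list[int]) -> bool:
    """Accept the certificate iff every factor is a nontrivial
    divisor and the product reconstructs n."""
    if any(f <= 1 or f >= n for f in factors):
        return False
    product = 1
    for f in factors:
        product *= f
    return product == n

assert verify_factorization(15, [3, 5])       # valid certificate
assert not verify_factorization(15, [2, 7])   # wrong product
assert not verify_factorization(15, [15])     # trivial factor rejected
```

        The same asymmetry is what the theorem/proof analogy turns on: checking a certificate is cheap even when producing it is not.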

      • I don’t, at least not yet.

        I think you need to add “in principle” to your statement. The time it would take to run such a proof might be excessive.

        I think that you need some kind of pattern recognition to recognize intelligence. When that intelligence is greater than your own, you don’t have a pattern to compare it to.

        Even when that intelligence is just different than your own, if your pattern recognition and cognition can’t emulate it, you will not be able to understand how it is thinking.

        I think this is a generic problem for humans and is the root of xenophobia. When we meet someone, we do a Turing Test to see if they are “human enough” to communicate with. If the error rate is too high, then xenophobia gets triggered via the uncanny valley. I think this is purely at the unconscious level. People just feel xenophobia toward those they are unable to emulate and understand. I blogged this in more detail.

      • mjgeddes

        I’ve cracked the problem. See my post in the thread ‘What is Em Death’.

        It’s logically impossible for an algorithm of complexity C to verify an algorithm of greater complexity F by any ordinary formal method. Certainly not by Bayesian methods.

        The solution is analogical inference. By making categorizations of the goal systems of C and F (which is equivalent to making analogies) you can beat Gödel. This totally overthrows the Bayesian paradigm. The notion of ‘probability’ has to be dispensed with and replaced with the more primitive concept of ‘similarity’, of which ‘probability’ is just a disguised special case. I’ve been telling folks this on Hanson’s blog for 5 years; I’m still awaiting some bright person to provide conclusive mathematical proof of my claims.

      • Yes, I think you have. The problem that humans have with appreciating this is that their “self-identity module” has such low fidelity that it will match to anything so long as it is “closely coupled”, i.e. co-inhabiting a computation substrate, a brain.

        Humans have to be this way so that they feel self-identical under essentially all conditions, even when there is great damage and change, even when computation is clouded by drugs, even after periods of unconsciousness, i.e. sleep.

  • I don’t know if he’s done this before on his blog, but I’d like to hear his self-evaluation of how he has done in achieving various goals. Does he think he has made progress regarding healthcare, futarchy, cryonics/emulations, etc., or has it all amounted to folk activism? Or would he just say that almost everybody has a very low probability of achieving large changes? He could also evaluate the degree to which he has personally reduced biases in his own thinking.

  • There is a referendum on AV voting in the UK next week. People seem to be bored by this vote. Why do people argue about politics but not really care about actually getting their opinions carried out and expressed? Less Wrong has a thread about how AV gets more information from each voter.
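    The “more information” point can be made concrete with a minimal sketch (an assumed illustration, not taken from the Less Wrong thread) of instant-runoff counting, the tallying method behind AV, showing how ranked ballots can change the outcome relative to plurality:

```python
from collections import Counter

def instant_runoff(ballots):
    """Ballots are ranked lists of candidates (most preferred first).
    Repeatedly eliminate the candidate with the fewest first-choice
    votes until someone holds a strict majority. Ties among losers
    are broken arbitrarily in this sketch."""
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        leader, votes = tallies.most_common(1)[0]
        if 2 * votes > sum(tallies.values()):
            return leader
        loser = min(tallies, key=tallies.get)
        # Strike the eliminated candidate; lower choices move up.
        ballots = [[c for c in b if c != loser] for b in ballots]

# Plurality would elect A (4 first-choice votes), but the C voters'
# second choices transfer to B after C is eliminated, so AV elects B.
ballots = [["A"]] * 4 + [["B"]] * 3 + [["C", "B"]] * 2
print(instant_runoff(ballots))  # prints "B"
```

    Because second and later choices can decide the winner, each ballot carries more information than a single plurality mark.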

  • Wonks Anonymous

    Steve Landsburg says “My gripe is with the Universe. If I were running the Universe, there’d be some level of accomplishment that confers immunity from death, deterioration and obscurity. I’m not sure exactly where I’d set that bar, but I’m sure Dan Quillen would have cleared it.” Has he been sold on cryonics yet?

    • There is a level of accomplishment that does those things. It is called developing a system that confers immunity from death and a system that confers immunity from deterioration. I think that if one develops the first two, the third will most likely follow.

  • nw

    It seems you can use power to do 1 of 4 things: create order, maintain order, create disorder, maintain disorder.

    Osama bin Laden used his power to create disorder, whereas President Obama used his power to create order, or justice, which is the same thing.

    Imposing disorder is cheaper than imposing order. 9/11 cost less than a million dollars, but the cost of killing bin Laden was directly in the millions and perhaps indirectly in the billions.

    Leaders create order, authorities maintain order, rebels create disorder, and I’m not sure there is a term for people who maintain disorder. Disorder, for what it’s worth, seems to be the natural state of man, suggesting no need for an honorific term.

    Superficially it seems like agents and principals on the side of order are higher status than their peers on the side of disorder, but some rebels resonate in a way suggesting that creating disorder is higher status than maintaining order. Creating order is the highest status of all.

    • Very nice statement of the issues.

      I have a bumper sticker that relates to this: Sow Justice, Reap Peace.

      Those who would reap peace need to start by sowing justice. Those who want to profit from the opposite of justice, the disorder brought on by those like OBL, need to consider that.

      I think those who are opposing Obama’s attempts to create order need to consider that too.

      • Wonks Anonymous

        Where is the empirical evidence that justice leads to peace? Kim Jong Il’s North Korea is perhaps the most misruled state in our time. It has also been quite stable. Revolts tend to happen when a regime eases up. Similarly, compare the Israeli-Palestinian problem to the indigenous peoples of America or the Volksdeutsche of Eastern Europe. The former are still causing a problem because they were not completely devastated to the point of losing all hope.

      • Do you have an example where Justice has been tried and does not then lead to peace?

      • Emile

        Do you have an example where Justice has been tried and does not then lead to peace?

        South Africa?


    > It is said that sooner or later this fate will befall statues of Kim Il-sung, in 1945 a minor guerrilla commander who, with much Soviet backing, took power in North Korea and remained its absolute ruler until his death in 1994. However, this author is somewhat skeptical about the prospects: I would not be surprised to learn that some time in the 2030s it is trendy to keep a portrait of the long-deceased dictator in a North Korean house.

    History and national status and dictators are Far; death and suffering are Near?

  • Given the premises that: 1) scanning and emulating a human mind from a frozen brain is easier than repairing said brain, 2) most people find scanning and emulation, but not nanomedical repairs, repugnant or valueless, and 3) it is desirable and/or ethical to promote cryonics:

    Should cryonics promoters lie or attempt to self-deceive regarding how easy it is to scan and emulate versus repair a brain?

    The reason I ask is that I find myself tempted to downplay the scanning option more than I actually believe it deserves to be, because it is something that cryonics opponents tend to make out to be a weakness of cryonics. They will frequently say things like “I don’t believe I would still be alive after being scanned into a computerized substrate.” I find this difficult to counter, because it seems like a matter of opinion rather than fact as to whether they would survive in such a situation or not.

    It seems like I would be better off if I only discussed nanotech repairs (which include diffusion-based nanomachines, stem cells, and genetically engineered microorganisms) as a possibility. Then they would have to attempt to argue for continuous consciousness as a criterion, or continuous metabolism, which are far weaker arguments and easier to refute in ways that make them look silly.

    Another aspect of this is that I have gotten the impression that people who believe in uploading usually place less emphasis on initial preservation quality. Those who attempt to maximize biological viability seem more motivated to do research and provide infrastructure for preventing ischemia and minimizing cryothermic damage. Thus if I want to maximize my chances of survival, it seems that I should be a hypocrite and encourage those around me to believe in a more fragile reanimation mechanism so that they will work harder to build the necessary infrastructure.

  • Anonymous

    An interesting study: “Not What We Say, But What We Do: A Neural Basis for Real Moral Decision-Making”.

    Quoting the abstract:
    “Here we show that hypothetical moral decisions do not approximate real moral action and that real moral decisions recruit distinct neural circuitry. Under both real and hypothetical conditions, we measured subjects’ responses when deciding between financial self benefit versus preventing physical harm to a confederate. In a behavioural study, we found that subjects dramatically prioritise their own financial benefit at the expense of harming others, keeping over three times as much for themselves in the real task as compared to the hypothetical. In two functional magnetic resonance studies, we showed that decisions made under hypothetical conditions activated neural networks identified in the existing literature, including the posterior cingulate cortex (PCC)—a region also implicated in imagination. However, decisions made during the real condition activated these networks as well as additional regions in the posterior and middle insular cortex (pINS-mINS)—areas essential in integrating affective body states to create a preliminary neural template of subjective feelings. We conclude that the pINS-mINS activity provides a rudimentary marker for real moral decisions.”

  • Interesting discussion of death, prolonging life, and hedonically driven self-deception:

    • But to those under the care of the life extension community it can be hell on earth

      Apparently he has us confused with someone else. The life extension community would like the right to cryopreserve those who prefer non-conscious existence over their present state, particularly when the state in question involves degradation of the brain. Those interested in denying this right are the same ones interested in denying the right to assisted suicide.

  • rapscallion

    Are humans rational utility maximizers? If not, shouldn’t we disregard all traditional welfare economics? If so, how can better outcomes be both possible and unrealized (i.e. how can inefficiency be observed)?

  • mjgeddes

    For the last time:

    Look at the concrete ontological ‘three-fold hinge’ that carves reality at the joints and keeps popping up everywhere: Objects (static things), Functions (dynamical processes) and Representations (signals). Objects are on the lowest level, functions are on the next level up, and representations are the highest level. (Signaling is always the highest level)

    Below, you can clearly see the logical analogy to the basic concrete three-fold ontological hinge:

    Predicate logic features static descriptions of logical relationships – equivalent to ‘logical particles’ or objects. See:

    The next level (which is a deeper level incorporating predicate logic as a special case) is the Bayesian level. Bayesian inference features correlations between entities that have predictive power, charting the externally visible dynamic evolution of things over time:

    The third and deepest level however, is the level of categorization (equivalent to analogical inference). Categorization refers to the process of grouping things into categories, in order to form efficient representations of reality:

    This basic ontological three-fold hinge, as I mentioned, crops up all over the place. It’s absolutely clear cut. Even folks with minimal reflective skills can’t miss it. It’s absolutely the very first thing a transhuman toddler would be aware of.