Open Thread

This is our monthly place to discuss relevant topics that have not appeared in recent posts.

  • mjgeddes

    I just finished David Deutsch’s ‘The Beginning Of Infinity’ e-book on the Kindle.  Highly impressive. 

    I believe Deutsch blows Yudkowsky out of the water intellectually, and would even give Hanson and Bostrom a strong run for their money in debates. 

    The book is a very strong challenge to the Bayesian paradigm.  He argues that the true basis for rationality is explanation and error correction, which is more than mere statistical correlation – he makes it clear he doesn’t buy the inductive (i.e. Bayesian) story of science.

    He thinks that the true basis for accelerating progress (‘the beginning of infinity’) is creativity, not ordinary ‘intelligence’ per se.  He also presents an original and powerful argument in favour of universal terminal values (namely universal aesthetics) in the chapter ‘Why are Flowers Beautiful?’ 

    Make no mistake.  This is a very stern challenge to the ‘Less Wrong’ world-view presented in the ‘Sequences’.   

    There are also insights into the many-worlds interpretation of quantum mechanics, clever debunkings of anthropic reasoning, and new insights into the foundations of mathematics.  He also deals with artificial intelligence.  In short, the book is absolutely brilliant.

    • Ilya Shpitser

      Ordering people from best to worst is silly.  Explanation = causality, so I agree with Deutsch.  And furthermore, I think deriving explanations just from applying Bayes theorem is silly and doomed.

      • VV


        Explanation = causality

        Can you explain? No pun intended.

        And furthermore, I think deriving explanations just from applying Bayes theorem is silly and doomed.

        I think that in theory you could have universal priors over all the possible explanations of observations, but in practice you can’t.
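
        A minimal sketch of that contrast (my own illustration, not from the thread): Bayesian updating over a small, hand-picked hypothesis set is mechanically easy, but a truly “universal” prior would have to range over every possible explanation (e.g. every program), which cannot be enumerated or normalized in practice.

            def bayes_update(prior, likelihood, observation):
                """prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(obs | h)}."""
                unnormalized = {h: prior[h] * likelihood[h](observation) for h in prior}
                z = sum(unnormalized.values())
                return {h: p / z for h, p in unnormalized.items()}

            # Two toy hypotheses about a coin; a "universal" version would instead
            # need a prior over all computable explanations of the data.
            prior = {"fair": 0.5, "biased": 0.5}
            likelihood = {
                "fair":   lambda obs: 0.5,
                "biased": lambda obs: 0.9 if obs == "H" else 0.1,
            }
            print(bayes_update(prior, likelihood, "H"))  # {'fair': ~0.36, 'biased': ~0.64}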

      • Ilya Shpitser

        If you just rely on observations to update your universal prior, you have no reason to prefer one causal hypothesis over another, because causality cannot be inferred from observational data alone.

        Explanatory accounts are generally causal: “why do people get scurvy, why is the earth heating up, why did this program crash?”

      • http://juridicalcoherence.blogspot.com/ srdiamond

        If you just rely on observations to update your universal prior, you  have no reason to prefer one causal hypothesis over another, because causality cannot be inferred from observational data alone.

        I agree Bayes isn’t foundational, but I don’t understand this argument. Does it not rest on the observational equivalence of theories? If theories were observationally equivalent, wouldn’t it follow that there’s no basis for preferring one to the other epistemically — within an empiricist world view? What else would one use other than observations relative to plausibility heuristics?

        The reason that I think Bayes isn’t foundational is that it would have to be self-justifying–and it can’t, as a matter of logic, justify itself. You must assume, with always-unwarranted perfect confidence, that Bayes’ theorem is valid in order to use it.

        I thought mjgeddes was onto something with the idea that there are certain foundational concepts–philosophers may call them “intuitions.” (Although he seems to embrace some concepts that I would seek to abolish–such as the concept of an actual or completed infinity. [See, for example, my “Infinitesimals: Another argument against actual infinite sets” — http://tinyurl.com/b9kn4tb ])

      • Ilya Shpitser

        http://xkcd.com/925/

        Imagine the following two theories:
        1.) rain causes wet grass (R -> W)
        2.) wet grass causes rain (W -> R)

        Wet grass and rain are highly correlated.  Both theory 1 and theory 2 explain the correlation equally well.  Why do we prefer theory 1?  Because we know that if we take a garden hose and spray our lawn, that will not cause it to rain.  But this is not observational data — this is experimental data.  We can still use Bayes theorem to update based on experimental data, but we need correct representation and math to talk about combining observations and experiments to draw conclusions. Probability theory alone will not do it.
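
        To make the observation/experiment distinction concrete, here is a minimal simulation sketch (my own, under the toy assumption that rain falls 30% of the time and is the only natural cause of wet grass): conditioning on seeing wet grass makes rain near-certain, while intervening with a hose leaves the probability of rain at its base rate.

            import random

            def sample(n=100_000, spray_lawn=False):
                """Data from the true model 'rain -> wet grass'; spraying is an intervention."""
                rows = []
                for _ in range(n):
                    rain = random.random() < 0.3
                    wet = rain or spray_lawn
                    rows.append((rain, wet))
                return rows

            def p_rain_given_wet(rows):
                rain_flags = [rain for rain, wet in rows if wet]
                return sum(rain_flags) / len(rain_flags)

            print(p_rain_given_wet(sample()))                 # observing wet grass: ~1.0
            print(p_rain_given_wet(sample(spray_lawn=True)))  # spraying the lawn:   ~0.3 (base rate)

        Theory 2 would predict the second number to be as high as the first, so only the experimental run tells the two hypotheses apart.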

      • VV

        @Ilya Shpitser

        Why do we prefer theory 1?  Because we know that if we take a garden hose and spray our lawn, that will not cause it to rain.

        I don’t think this is the core issue. Some events are known to cause multiple possible effects. One could disingenuously argue that wet grass causes rain, or garden hose spray, or both, with some probability distribution.

        In my understanding, causality generally refers to temporal ordering. Rain or garden hose spray happen before wet grass, therefore rain or spray cause wet grass, and not the other way round.

        Is reasoning about causality more fundamental than Bayesian inference? I think so. In order to have “priors” (before), “posteriors” (after) and updates, you need a notion of temporal ordering.

    • VV

       Nice suggestion, I’ll check it out.

    • Mitchell Porter

      Everything you mention sounds either superficial or wrong.

      • mjgeddes

        For interested readers, Yudkowsky’s meta-ethics is completely undermined by Deutsch giving a clear example of universal aesthetic values. See the Deutsch lecture “Why Are Flowers Beautiful?”, which is free to view on YouTube.

        https://www.youtube.com/watch?v=gT7DFCF1Fn8

        The basic argument can be summarized as follows:

        “Deutsch observes that flowers are grown for insects and so they do not NEED to appeal to humans. Furthermore, while we aesthetically appreciate the sex organs of plants, fewer of us appreciate the beauty of their root systems. Roots are less beautiful than flowers. He suggests that there is a universal code for interspecies information signalling, and the flowers are part of this common code. What this code is based upon must be the objective, universal beauty.” (Source: shkrobius, LiveJournal)

        Deutsch deals with basic evolutionary counter-arguments and shows they don’t hold any water.

        The whole basis of Yudkowsky’s theory of rationality (Bayesianism) is also undercut by Deutsch, who demonstrates quite convincingly that mere statistical correlations/predictions just don’t constitute what science regards as an explanation, and that concept-free intelligence is an impossibility.

        “Currently one of the most influential versions of the ‘induction’ approach to AGI (and to the philosophy of science) is Bayesianism, unfairly named after the 18th-century mathematician Thomas Bayes, who was quite innocent of the mistake. The doctrine assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act. This is especially perverse when it comes to an AGI’s values — the moral and aesthetic ideas that inform its choices and intentions — for it allows only a behaviouristic model of them, in which values that are ‘rewarded’ by ‘experience’ are ‘reinforced’ and come to dominate behaviour while those that are ‘punished’ by ‘experience’ are extinguished. As I argued above, that behaviourist, input-output model is appropriate for most computer programming other than AGI, but hopeless for AGI.”  (Source: ‘Creative Blocks’, Aeon Magazine)

      • Robert Wiblin

        ‘Flowers are beautiful’ as a justification for a necessary universal aesthetics seems very weak. There are a lot of other reasons humans might find flowers beautiful (they were correlated with access to food in the ancestral environment, for example). Or it could just be a spandrel.

      • mjgeddes

        Hi Robert,

        Deutsch is one of the smartest guys on the planet.  He wouldn’t be making the argument unless he could back it up.

        Deutsch does consider numerous alternative explanations and dispatches them.

        The argument that, for instance, ‘flowers are correlated with fruit’ (food) is a weak counter-argument.  Most flowers were NOT correlated with fruit or food for humans, yet we still find them attractive; further, we find the same specific parts of the flowers attractive as insects do.  This is simply inexplicable if the sense of beauty were entirely species-specific.

        I encourage you to read the book.

      • http://www.facebook.com/peterdjones63 Peter David Jones

         > Yudkowsky’s meta-ethics is

        …what??? AFAICT, no one knows what it is, including Yudkowsky.

      • Robert Wiblin

        You’re right someone as smart as that couldn’t mean what I think they mean. I’ll think about reading it.

        But on flowers: it may not have been important to evolve the ability to distinguish flowers which were associated with particular foods. Easier and more flexible just to like flower shapes in general. For that matter, a) not all flowers are beautiful, and b) even those that are, I don’t find especially beautiful.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        > Yudkowsky’s meta-ethics is …what??? AFAICT, no one knows what it is, including Yudkowsky.

        I know what they are; that he laboriously develops, over some 23(?) lengthy posts, a well-established view–without apparently knowing that it is well-established, or offering any novel arguments for it–doesn’t make the position obscure. He believes that to know what’s moral, we can only consult our intuitions (which he seems to think–at the object level–are utilitarian). He doesn’t address the basic arguments against intuitionism: 1) different persons’ intuitions vary, and 2) people might subject their moral intuitions to rational critique.

        On another topic, in response to a different poster: it may well be–it seems to me–that our moral sense ultimately reduces to aesthetics (as does our sense of truth). This is ultimately a psychological question rather than a meta-ethical one: it doesn’t address the truth value of moral claims. But what seems implausible is that our sense of beauty is hardwired–as opposed to “prewired” and modifiable by experience, even if unfolding in accordance with its prewired nature. What may be universal is the way the aesthetic sense develops in response to experience rather than the way it manifests.

      • http://www.facebook.com/peterdjones63 Peter David Jones

        @srdiamond, re Yudkowskian metaethics.
        You say you know what they are, then immediately run into the same interpretational problems as everyone else. Are good consequences inherently good, or only viewed as such by subjects? Does each subject have their own notion of the good, or do they converge? Etc., etc.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        You say you know what they are, then immediately run into the same interpretational problems as everyone else. Are good consequences inherently good, or only viewed as such by subjects? Does each subject have their own notion of the good or do they converge?

        The answers are straightforward–not that I’d want to wade through Yudkowskian verbosity to prove it. :)  Good consequences are inherently good because being viewed as such by subjects is all we can mean by “good.” Our moral intuitions are all the same–that is, our moral conclusions converge. This per Yudkowsky.

        The problem is better expressed by saying you disagree with him than that you don’t understand him. (Not that I’m questioning your sincerity–the vast majority of people simply don’t understand moral theories they disagree with. [But it isn’t essentially the “opponent’s” fault that they don’t understand. — see my “The unity of comprehension and belief explains moralism and faith” http://tinyurl.com/cxjqxo9 ]) Different philosophical systems take different concepts as primitive. ( http://tinyurl.com/bx2ujj2 ) The game is to attack their coherence or plausibility, not to refuse to understand them.

        Yudkowsky is as clear as he can be given such an excess of verbiage and incompetence at the art of omission. (See my “Construal-level theory, the endowment effect, and the art of omission” —  http://tinyurl.com/9sw54v8 ) His problem isn’t lack of comprehensibility but implausibility and, more importantly, readily discovered incoherence. Methodologically, his problem is a refusal (or inability) to construct any arguments or to deal with opponent arguments. 

        I do have some sympathy with greeting Yudkowsky’s ramblings by characterizing them as incomprehensible. After all, it’s the LW way! But it’s a bad way — the way of the cheap shot, or more charitably, of ethical philistinism. ( http://tinyurl.com/6kamrjs )

      • VV

        Are there non-human animals that find flowers attractive, other than those which use them as a food source?

        And anyway, given what I know about insect nervous systems and behaviors (such as repeatedly bumping into lights), I doubt that they have any concept of ‘beauty’. Insects seem to be very simple stimulus-response machines, not much more complex than Braitenberg vehicles: http://en.wikipedia.org/wiki/Braitenberg_vehicle

      • http://www.facebook.com/peterdjones63 Peter David Jones

        @srdiamond, re Yudkowskian metaethics

        “Good consequences are inherently good because being viewed as such by subjects is all we can mean by ‘good.’”

        If it takes a subject to dub or deem something good, then it is NOT inherently good, in the sense that something inherently weighs 10 kilos. Your rendition just contradicts the meaning of “inherent”.

        “Our moral intuitions are all the same–that is, our moral conclusions converge.”

        It is not clear that, how, or why they would. It is also not clear that the output of an intuitional black box would count as really good (the Euthyphro problem), or whether it is meant to… or…

        “The problem is better expressed by saying you disagree with him than that you don’t understand him.”

        No. The problem is that his writings aren’t coherent enough to form a theory that can be agreed or disagreed with. And I’m not most people.

        “-the vast majority of people simply don’t understand moral theories they disagree with”

        So I’ve noticed.

        “Yudkowsky is as clear as he can be given such an excess of verbiage and incompetence at the art of omission.”

        You mean he is clear but long-winded? But your short summary wasn’t clear either. See above.

  • JonLoldrup

    “The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence. The argument centers on a thought experiment in which someone who knows only English sits alone in a room following English instructions for manipulating strings of Chinese characters, such that to those outside the room it appears as if someone in the room understands Chinese.”

    John Searle is right: neither the man, nor the scratchpad, nor the book of instructions that tells the man how to process the Chinese texts, understands Chinese. Nor do they in combination. What, then, constitutes the intelligence it takes to be able to emulate an understanding of the Chinese language?

    Let the man pick up the Chinese text and the book of instructions, and let him start working on the former using the instructions from the latter. When he does so, a process is started. This process implements the understanding-Chinese phenomenon, and it is this process that is intelligent.

    This, however, does not mean that computers are bound to be inferior to humans with regard to the strength of achievable intelligence. The situation in the human brain is the same as the situation in the Chinese room:

    The brain is just a piece of hardware. In itself it is not at all intelligent. Rather, it is the process that it facilitates that is intelligent. It makes as little sense to say that brains are intelligent as it does to say that a CPU is intelligent.

    Thus I conclude that the Chinese room argument gives no reason to assume that computers are fundamentally different from brains in regard to what type of intelligence can occur (strong vs. weak AI). Technically speaking, my argument implies that all intelligence (including human intelligence) is ‘weak AI’.

    Jon

    • Don Geddis

       “[Even] human intelligence is ‘weak AI'”

      Methinks that you have twisted the definitions of these terms so much, that they no longer have any useful meaning.

      You have concluded that computers can do anything that humans can do.  This is in strong opposition to Searle’s conclusion, so you don’t agree with him at all.

      • JonLoldrup

        Well, I didn’t mean to say that Searle was right in *everything*. The central point of Searle’s Chinese room argument is that computers can never come to understand anything – at best they will be able to emulate an understanding. He then goes on to say that this is different from human brains. He thinks that human brains actually understand something. And that is where my opinion differs from Searle’s. A brain is just a physical object – it understands nothing. Just as he argued that computers understand nothing.

        It is the processes that the brains and the computers (CPUs) facilitate that are the clever things.

      • VV

        http://en.wikipedia.org/wiki/Eliminativism

    • AspiringRationalist

      What does “understand” mean in this context?  Does the belief that an entity does or doesn’t “understand” something pay rent?

      • JonLoldrup

        If we adopt ‘eliminative materialism’, it merely means ‘displays intelligent behaviour’. But I don’t think the exact definition of “understanding” is that important in this context. The central point of the Chinese room argument is that a CPU can never understand anything (regardless of whether “understand” is defined reductionistically or not). My argument is that the same must be true for brains.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Nice argument re Searle.
         
        There’s a species of “rationalist” that thinks the request for immediate definition is always illuminating. (I think it might have originated around these parts with Luke Muehlhauser’s disquisitions on morality.) [For a dissection of this definitional mania, see my “The meaning of existence: Lessons from Infinity” — http://tinyurl.com/bx2ujj2 ]

    • Zaine

      Thus I conclude that the Chinese room argument gives no reason to assume that computers are fundamentally different from brains in regard to what type of intelligence can occur (strong vs. weak AI).

      I tried but can’t find a flaw in your argument.  It suggests that the functions themselves that output an AGI, as the brain outputs a ‘human person’, may have the potential to become intelligent if the AGI can realise on an operational level that it is a product of those functions.  If the AGI understands “[Binary code] effects function X on chip Y, which leads to [binary code] … and thus I present text on a monitor,” then the AGI’s functions themselves may become intelligent on a recursive level.

      I understand, though, that the possibility of the above is currently a matter of debate (can a mind model itself?); however, at the very least we know how a computer begins its operations, whereas we have no idea how the brain triggers the activation of the neuronal pathways representative of a concept or idea, or responsible for coordinating action.

  • Rationalist

    I’d be interested to hear how Robin and Carl have updated their AI timelines in the last few years, e.g. since 2010.

    How much evidence do Watson, Stuxnet and Siri count as?

    Also of note is the expansion of drones and robots in the military (though I suspect that that comes as fairly little surprise to you).

    • VV

       Stuxnet?

    • Luke Muehlhauser

      Watson, Stuxnet, and Siri were “normal progress”; I doubt they caused updates for Robin or Carl.

      Eliezer and I pushed our timelines back a bit when Moore’s law stumbled and especially when the world economy slowed.

      Not sure about recent updates from Carl.

      • Rationalist

        As far as Siri goes, it is a concrete example of AI technology interacting directly with the consumer electronics market, a market which is massively competitive and innovative, as well as being worth serious $$$.  If, in the next 8 years we see competition between the big four over who has the best Siri clone, that would potentially pump the huge cash reserves of Google and Apple into AI research.

        As for Stuxnet, we now see that AI is interacting with national security in a way that it wasn’t in 2009. Well, at least not so blatantly. I remember comments on the Accelerating Future blog about how AGI systems would infiltrate computer systems in a highly autonomous way and have real-world destructive effects. Now that’s real.

        Arguments have been made about how both national and corporate competition will drive AI research whether we like it or not, so that relinquishing AI is not an option. They are sound arguments, but in 2009 they were also hypothetical ones. Now they are facts…

      • VV


        As for Stuxnet, we now see that AI is interacting with national security in a way that it wasn’t in 2009. Well, at least not so blatantly. I remember comments on the Accelerating Future blog about how AGI systems would infiltrate computer systems in a highly autonomous way and have real-world destructive effects. Now that’s real.

        I don’t see how Stuxnet can possibly fit any reasonable definition of AI. It’s just a piece of computer malware. Maybe it’s more complex than the usual ones, but it doesn’t seem in any way ‘intelligent’.

      • Rationalist

        I didn’t know about Moore’s Law “stumbling” recently. Source?

      • Luke Muehlhauser
      • VV

         Let me guess, your current estimate is 15 – 20 years in the future? :D

      • Luke Muehlhauser

        My median estimate is 2060, but my distribution is wide:
        http://intelligence.org/2013/05/15/when-will-ai-be-created/

  • Vaniver

    Gabe Newell gave a great talk at UT Austin recently, which has been posted online. Apparently, Valve has a prediction market for games in the works: http://youtu.be/t8QEOBgLBQU?t=49m9s

  • Anonymous

    Fuzzy survival: How a FAI may save you even if you die before the Singularity:

    – leave pictures of your physical appearance
    – leave voice recordings
    – leave a loudly expressed wish to achieve fuzzy resurrection by FAI
    – leave detailed descriptions of your most important opinions and memories
    – leave a detailed formal biography with documents about your whereabouts in the past
    – leave documentation of your social circle and its dynamic development, including a diary of your day-to-day experiences with other people
    – leave the best brain scans money can buy (repeat this when the tech gets better)
    – leave detailed medical records
    – leave your DNA

    After you are long dead, the FAI will know the internet’s archives by heart, know of your wish to achieve fuzzy resurrection, and piece your identity together from the documents you left. Memories will be replaced with fake memories from your descriptions and known world events, as much as possible. They will, of course, be made to feel authentic and internally consistent. Your personality profile and biology will not be perfectly replicated, but approximated astonishingly well.

    You will have lost some of your identity, but not more than a stroke or a degenerative disease would take, and you will live in an era that has solved the longevity problem.

    • http://www.gwern.net/ gwern

       This is the old idea of beta-level or beta simulations, isn’t it?

      • Anonymous

        I had to look it up, and it seems beta-level sims are often described as non-sentient representation tools. I’m talking about the approximative reconstruction of the actual sentient person, either as a digital emulation or a biological clone.

        But I’m sure the idea itself isn’t new.

    • Rationalist

      This is probably worth doing to bolster a less-than-perfect cryopreservation.

      I wonder what the chance of a partially successful cryo-resurrection is? I would imagine pretty low… 

  • burger flipper

    You’re going to be at the DAGGRE thing next month, right?

    Any chance of a Los Angeles OB meet-up?

  • Robert Koslover

    It is noteworthy that the once-elaborate and expensive system of road-side emergency call-boxes has now basically disappeared from our highways, with its functions replaced by the prevalence of private cell-phones.  I predict that in the not-too-distant future, highway signs and traffic lights will likewise disappear.  Their replacement will be local transmitting systems that convey the key information (at the proper times and places) directly to heads-up style displays in your vehicle (or in your helmet, if riding a motorcycle).  This will cut away the costs of sign and traffic-light operations and maintenance and will also facilitate the implementation of creative and superior traffic control systems.  And of course, there could be advertising too, removing the need to have any billboards.  Future highways thus may appear remarkably unadorned, to the naked eye.

  • Nicholas Walker

    Unlimited wealth is bad. So is immense fame, which necessitates security like Secret Service. What’s the inflection point beyond which additional increases in wealth don’t add much to the comfort of life, but increase the burden and scrutiny? $75,000 is noted as the equivalent income threshold, but what about wealth? Also, why is there an inflection point?

    • Robert Koslover

       I think it depends on what you want.  For example:

      Luke: She’s rich.
      Han Solo: [interested] Rich?
      Luke: Rich, powerful. Listen, if you were to rescue her, the reward would be…
      Han Solo: What?
      Luke: Well, more wealth than you can imagine!
      Han Solo: I don’t know, I can imagine quite a bit.

      {Borrowed from http://www.imdb.com/character/ch0000002/quotes}

  • jhertzli

    After considering the posts on farmer vs. forager morality on this blog, I wondered if anybody else had a comment on the “So God made a farmer” ad during the Super Bowl?

    • Dave944

      Interesting that the farmer, like the warrior (the missing man in the F/F dichotomy, who kicks both of them around), is deified – but what would Paul Harvey say about the forger?

      • Dave

         And the forager too.

  • KL

    What do you think of Intermittent Fasting as a way to improve health or live longer?

    • http://www.gwern.net/ gwern

      My current take is that it’s less likely to work than caloric restriction; caloric restriction has taken hits in the 2 recent primate studies, so that’s not very likely in the first place. But intermittent fasting has the major advantage of being much easier, since you don’t need to change the food you eat, and you also aren’t running the risk of malnourishing yourself if you don’t eat *exactly* right. (Example: I mentioned in a previous thread an old family friend who is still in the hospital, at least partially because he was starving himself by incompetently doing caloric restriction.)