Open Thread

Here is our monthly place to discuss issues not covered in our other posts.

  • Karo

    How does Eliezer feel about the fact that Google beat him in the race to create the first ever AI? Without consulting him at all about its Friendliness or some such fluff?

  • Fenn

    Is that an April Fool’s joke? It sure passes the Turing test.

  • Matt
  • http://changegrow.com James Andrix

    No one really knows when the Piscean Age ends and the Aquarian Age begins. Astrologers have been arguing about the issue for years. But here’s what to watch for: When the transition gets underway, fewer and fewer people will be invested in belief systems, and an ever-growing contingent will thrive on asking questions and keeping an open mind. For those of us in the latter category — the Aquarian Agers — we will prize the virtues of curiosity. We will avoid being addicted to dogmatic theories and rigid certainties, knowing that they tend to shut down our fluid intelligence. We will get a kick out of shedding our own emotional biases so that we can strive to be more objective in our understanding of the ever-evolving truth. I mention this, Aquarius, because it is an excellent time for you to charge headlong toward the Aquarian Age.
    Free Will Astrology

  • Fetterkey

    Yes, it’s an April Fool’s Day joke.

  • Zydeco

    Robin: on a scale of 1 to 10, how convincing did you find Eliezer’s many-worlds case?

  • josh

    Is mainstream political opinion moving in a predictably leftward direction over time, à la Mencius Moldbug? If so, what implications should this have for our own beliefs?

    If one wishes to adjust one’s priors toward a majoritarian position, should one include people both past and present? What about one’s priors regarding future majority opinion? Should that count for anything?

  • anonym
  • Ben Jones

    Haha, loving Cadie. See you on the other side everyone!

  • mjgeddes

    If Google had actually put a fraction of the effort they spent dreaming up the April Fools’ joke into actually *doing* something about AGI, we’d be there by now. The Google people are the fools.

    To sum up my differences with the long Yudkowsky series on OB, the following table compares Yudkowsky’s views with my own.

    Yudkowsky:

    Bayesian induction the foundation of rationality
    Values a function of the human brain
    Volition/Freedom the foundation of values
    Many worlds QM interpretation
    Intelligence independent of consciousness & values
    Single-level reality; reductionism

    Geddes:

    Analogy formation the foundation of rationality
    Universal terminal values built into universe
    Beauty the foundation of values
    3-level-time hidden-variable QM interpretation
    Intelligence dependent on consciousness & values
    Multi-level reality; failure of reductionism

    As readers can see, we apparently disagree on everything.

  • http://liveatthewitchtrials.blogspot.com/ davidc

    Eliezer, at some point do anti-virus programs become immoral? There are currently IM chatbots that attempt to phish bank account details from people. If these become capable of passing the Turing test (or some other agreed-upon test), can they morally just be deleted?

  • M.P.

    One thing that was silly about the whole Google narrative, though– any (sane) organization able to make a strong AI would understand the dangers of it getting loose, and would be very much invested in preventing that from happening. The idea of such a thing escaping stretches my suspension of disbelief.

  • http://profile.typepad.com/SoullessAutomaton a soulless automaton

    mjgeddes, do you actually have a rigorous formulation of analogy formation or are you just handwaving this? Seriously, if this is the foundation of rationality, please explain it in a way that does not assume neural cognition as a prerequisite.

    > One thing that was silly about the whole Google narrative, though– any (sane) organization able to make a strong AI would understand the dangers of it getting loose, and would be very much invested in preventing that from happening. The idea of such a thing escaping stretches my suspension of disbelief.

    Does history indicate that the originators of a complicated technology have typically understood and predicted its full effects? For instance, consider that the drug Sildenafil was first synthesized for the purpose of treating hypertension…

  • http://t-a-w.blogspot.com/ Tomasz Wegrzanowski

    One thing I’d love to see something written about is the placebo effect in animals. Is it a human-only phenomenon, or does it exist in other animals too? I’ve never seen anything about it, and it’s a question with significant consequences whether the answer is yes or no.

  • mjgeddes

    >mjgeddes, do you actually have a rigorous formulation of analogy formation or are you just handwaving this?

    Unfortunately the ideas still must have the status of ‘wild speculation’ at this time. *I know* I’m right about everything of course, but unfortunately rigorous formulations require a lot of brain-power and the time and money to spend all day and all night working on it, which I don’t have and can’t do, respectively.

    All I can do is state the facts and hope someone picks up on it. I have finally seen a hint of a ‘break’ on OB with Robin Hanson’s threads on the low-entropy puzzle – which I’m sure is the beginning of the chain of reasoning which will eventually validate my ideas – Hanson has realized something is very wrong with standard thermodynamics. This *must* be pursued further.

    Also, Douglas Hofstadter (one of the world’s top AI researchers) is following my line on analogy formation. In a recent interview in ‘Scientific American’ he states:

    “I’m working on a book with a French colleague, Emmanuel Sander, which is about how I see analogy as being the core of all of human thought”

    Link:

    Hofstadter Interview

  • http://profile.typepad.com/SoullessAutomaton a soulless automaton

    A simple reply of “yes, I am handwaving it” would have sufficed.

    Also, you have an interesting definition of “following”; Hofstadter has been on about analogy formation for the better part of three decades and, to my knowledge, work in this direction hasn’t achieved anywhere near the success that Bayesian induction has. Nice try, though.

  • Eunuch

    As for overcoming bias–I have reduced testosterone through drugs and then castration. Questions?

  • mjgeddes

    >work in this direction hasn’t achieved anywhere near the success that Bayesian induction has.

    Bayesian induction alone is never going to work. The reason is this: to avoid computational intractability, many different types of knowledge representations are required. Induction alone can’t provide the interfaces between those different representations; you need a system for mapping between representations, and analogy formation provides that.

    Further, Induction itself can be reinterpreted as just another analogy, since Induction involves a mapping between the *functional* properties of things – it depends on causal mappings, which rely on the fact that the mapping between states at two different time coordinates is ‘smooth’ or ‘continuous’.

    Academics have to be blind to overlook this whoppingly obvious fact.

    To quote Hofstadter again:

    “One should not think of analogy-making as a special variety of reasoning (as in the dull and uninspiring phrase “analogical reasoning and problem-solving,” a long-standing cliché in the cognitive-science world), for that is to do analogy a terrible disservice. After all, reasoning and problem-solving have (at least I dearly hope!) been at long last recognized as lying far indeed from the core of human thought. If analogy were merely a special variety of something that in itself lies way out on the peripheries, then it would be but an itty-bitty blip in the broad blue sky of cognition. To me, however, analogy is anything but a bitty blip — rather, it’s the very blue that fills the whole sky of cognition — analogy is everything, or very nearly so, in my view.”

    Ref:
    Presidential Lectures: Douglas R. Hofstadter

    Another example of the blinkered nature of academics is the so-called Newcomb’s Problem. What problem? You take the one box only; I don’t need some super-duper decision theory to tell me this whoppingly obvious fact, just plain old Geddesian common sense.
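    For concreteness, here is a minimal sketch of the expected-value arithmetic behind the one-box intuition, assuming the standard Newcomb payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box and a predictor of a given accuracy (these figures and the Python framing are illustrative assumptions, not part of the comment above):

    ```python
    # Minimal expected-value sketch of Newcomb's Problem.
    # Assumed payoffs: $1,000 in the transparent box, $1,000,000 in the opaque box.
    # "accuracy" is the probability that the predictor correctly anticipates your choice.

    def expected_value(choice: str, accuracy: float) -> float:
        small, large = 1_000, 1_000_000
        if choice == "one-box":
            # The opaque box is full only if the predictor foresaw one-boxing.
            return accuracy * large
        # Two-boxing: you always get the small prize, plus the large prize
        # only in the cases where the predictor misread you.
        return small + (1 - accuracy) * large

    for acc in (0.5, 0.6, 0.9, 0.99):
        print(acc, expected_value("one-box", acc), expected_value("two-box", acc))
    # One-boxing has the higher expectation whenever accuracy exceeds ~50.05%,
    # i.e. whenever the predictor is even slightly better than chance.
    ```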

  • anon

    The NYTimes has an article on medical treatments which don’t work. It lists various examples and discusses the “allure” of ideology-based health care. (Hat tip to slashdot)

  • mjgeddes

    Note on my new interpretation of QM:

    I finally abandoned Many Worlds (MWI). Basically, I realized that QM actually implies a failure of reductionism. I now believe all the problems of QM come from conflating what are really different irreducible levels of reality. It’s no surprise that Yudkowsky and co. are many-worlds believers; I now think this is a gross confusion arising from the attempt to interpret reality with a single-level model of causality (reductionism).

    Listen:

    “The delving of the dark energy diving through the sea of starry treasure powers the perfect spheres of the rain-drops ringing Paris, and the wetness in the eyes of the actress reflects the dripping candy floss orbs eaten on the amusement wheel, whose rotation sparks the royal crystal glass held in hand, cupping the champagne from the court of Champagne”

    There now. Doesn’t that do more for you all than years of posts on Bayesian rationality ever will? 😉

  • uncanny

    How come my comment on the Uncanny Valley post has disappeared?

  • http://timtyler.org/ Tim Tyler

    AGI-09 videos out – including some by Robin Hanson:

    http://vimeo.com/3981759 – Economics of A.I.

    http://vimeo.com/4007677 – Hard Takeoff Panel – Robin Hanson & Hugo de Garis

  • http://timtyler.org/ Tim Tyler