A.I. Old-Timers

Artificial Intelligence pioneer Roger Schank at the Edge:

When reporters interviewed me in the 70’s and 80’s about the possibilities for Artificial Intelligence I would always say that we would have machines that are as smart as we are within my lifetime. It seemed a safe answer since no one could ever tell me I was wrong. But I no longer believe that will happen. One reason is that I am a lot older and we are barely closer to creating smart machines. 

I have not soured on AI. I still believe that we can create very intelligent machines. But I no longer believe that those machines will be like us….

What AI can and should build are intelligent special purpose entities. (We can call them Specialized Intelligences or SI’s.)  Smart computers will indeed be created. But they will arrive in the form of SI’s, ones that make lousy companions but know every shipping accident that ever happened and why (the shipping industry’s SI) or as an expert on sales (a business world SI.) … So AI in the traditional sense, will not happen in my lifetime nor in my grandson’s lifetime. Perhaps a new kind of machine intelligence will one day evolve and be smarter than us, but we are a really long way from that.

This was close to my view after nine years of A.I. research, at least regarding the non-upload A.I. path Schank has in mind.  I recently met Rodney Brooks and Peter Norvig at Google Foo Camp, and Rodney told me the two of them tried without much success to politely explain this standard "old-timers" view at a recent Singularity summit.  Greg Egan recently expressed himself more harshly:


The overwhelming majority [of Transhumanists] might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers.

The June IEEE Spectrum is a special issue on the singularity, and it is largely skeptical.

My co-blogger Eliezer and I agree on many things, but here we seem to disagree. Eliezer focuses on AIs possibly changing their architecture more finely and easily than humans.  We humans can change our group organizations, can train new broad thought patterns, and could in principle take a knife to our brain cells.  But yes an AI with a well-chosen modular structure might do better. 

Nevertheless, the idea that someone will soon write software allowing a single computer to use architecture-changing ease to improve itself so fast that within a few months the fate of humanity depends on it feeling friendly enough … well that seems on its face rather unlikely.  So many other huge barriers to such growth loom.  Yes it is possible and yes someone should think some about it, and sure why not Eliezer.  But I fear way too many consider this the default future scenario.

Added:  To clarify, the standard A.I. old-timer view is roughly that A.I. mostly requires lots and lots of little innovations, and that we have a rough sense of how fast we can accumulate those innovations and of how many we need to get near human level general performance.  People who look for big innovations mostly just find all the same old ideas, which don’t add that much compared to lots of little innovations.

More added:  I seem to be a lot more interested in the meta issues here than most (as usual).  Eliezer seems to think that when the young disagree with the old, the young tend to be right, because "most of the Elders here are formidable old warriors with hopelessly obsolete arms and armor."  I’ll bet he doesn’t apply this to people younger than him; adding in other considerations, he sees his current age as near best.  And I’ll bet in twenty years his estimate of the optimal age will be twenty years higher.

  • mitchell porter

    When you started out in AI, did you have a different view?

    When I try to think soberly about this, I end up thinking “How hard can it be?” and attributing the “failure” of AI so far primarily to hardware limitations, and perhaps to theories fashioned on the assumption that yesterday’s hardware would be enough. Working with a modern server farm (with thousands of processors) at one’s disposal would be qualitatively different than just using a single 286, surely! And you can break down the process of software development itself into modular intellectual tasks running on different clusters in the server farm, so there’s your self-enhancement… and transistors are so much faster than neurons, how can it not become superhuman once it even gets within range of human intelligence… Surely a research program on the scale of that which produces a major operating system (tens of developers, 5-10 years) would be capable of doing it! The only theoretical counterargument which I find remotely plausible is that finding self-modifications which provably constitute a probable improvement becomes very difficult. The search spaces become rather large, and perhaps evaluating the value of a possible modification becomes very difficult.

  • steven

    Egan doesn’t object to artificial general intelligence as far as I know, he objects to the idea that AIs will ever be qualitatively smarter than humans. (Shorter Egan: “Yes but whatever AIs can do, humans can do the same thing if you give them Mathematica, an infinite supply of coffee, and 10^12 as much time”. I don’t care; a sufficiently large quantitative difference is a qualitative difference.)

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Timescales are never easy – but IMHO, it is indeed possible that the first AI will effectively take over the world. I.T. is an environment with dramatic first-mover advantages. It is often a winner-takes-all market – and AI seems likely to exhibit such effects in spades.

    If so, much may well depend on the identities and intentions of the designers – on whether we have the NSA, DARPA or Google building it.

    A company-born AI would face several challenges. It would have to avoid being dismembered by the Monopolies and Mergers Commission. Then it would have to take control of the government. Doable targets, perhaps, but challenges nonetheless. A governmental AI could skip over these steps.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Re: “A powerful artificial intelligence won’t spring from a sudden technological “big bang” — it’s already evolving symbiotically with us” – http://spectrum.ieee.org/jun08/6307

    This seems like a misleading way of putting it to me. What is happening now is surely a sudden technological “big bang”:

    “Technology is exploding in the same way that the atomic nuclei in a nuclear bomb explode: by exhibiting an unconstrained exponential growth process.”

    http://alife.co.uk/essays/technology_explosion/

  • http://transhumangoodness.blogspot.com/ Roko

    Robin Said: “the idea that someone will soon write software allowing a single computer to use architecture-changing ease to improve itself so fast that within a few months the fate of humanity depends on it feeling friendly enough … well that seems on its face rather unlikely. So many other huge barriers to such growth loom.”

    Would you be able to quantify this, i.e. make the statement precise and then place a (small) probability on it? What is your evidence for this skepticism?

  • poke

    The problem with classical AI was the total lack of interest in biology. It was like watching a landlocked tribe who’ve never seen water trying to build a boat based on crude speculation about what’s beyond the forest. This is a continuing problem with cognitive science and psychology (albeit more among theorists than practitioners these days). If AI is going to happen it’ll be in the form of simulations of human neurobiology (presumably this is your upload path). Adding to that efficiency, reduction to functional components, modularity and the ability to (non-crudely) self-modify is somewhere between extremely difficult and impossible. It’s certainly far beyond the realm of reasonable speculation.

  • http://hanson.gmu.edu Robin Hanson

    Mitchell and Roko, it did seem easier when I started out. As in many fields, the process of becoming an old-timer teaches you that some things can indeed be very hard.

    Tim, no I.T. project has yet come anywhere close to taking over the world.

    Steve, I’ll take your word for it.

  • http://profile.typekey.com/michaeljameswebster/ michael webster

    I’d be interested in your response to this:

    http://reverendbayes.wordpress.com/

  • bambi

    Somehow this fanciful technology speaks directly to the deepest fears and dreams of some people, and those otherwise intelligent people start gibbering in response. Suddenly many orders of magnitude get tossed aside (piffle, just a bunch of zeros!), and somehow sci-fi bedtime stories become the “default” position (which causes puzzled demands to disprove the wildest speculations in complete disregard of where the burden of proof lies).

    “How hard can it be? A little math, a little programming, and boom! Logic Bomb!” somehow seems like a rational conclusion when it is actually just a massive brain-short-circuit.

    Nobody, ever, has achieved a single impressive bit of progress on any aspect of “general” intelligence.

    Just one example of the derangement commonly displayed: supposedly, this imminent self-improving AI will in short order invent molecular nanotechnology while we’re not watching. But this invention requires vast ability to predict how molecular structures behave. Our hugest supercomputers are woefully inadequate to even begin such large simulations (assemblers or complete nanofactories), much less sort through many alternatives and all of the manufacturing steps required to actually produce the end product. Compare what folding@home accomplishes with the general program of designing and constructing molecular nanotechnology. Many many many orders of magnitude. I guess as usual the lunatic response is to invent more magical powers. The AI will simply completely revolutionize molecular simulation so our pocket calculators can simulate large systems in tiny memories. Hey, you can’t prove it’s impossible!

    Here’s another: one of the simplest preliminary steps for the commonly-viewed reductionist approach to building AI would involve formalizing basic mathematics and being able to generate and prove hypotheses. But look (I mean actually look, not skim) at work like Mizar, and think about how far we are from doing anything even mildly impressive on this most basic step. Never mind actually formalizing basic physics, a task nobody even has a clue how to do.

    The scariest thing is that (as evidenced in yesterday’s commentary), people start to think and even sometimes talk about getting their guns out to protect humanity from scary movie futures. Stuck in a dizzying effort to evaluate infinity times zero, how long until somebody calculates that the global risk of some AI researcher succeeding is too great?

    Now, just because no significant progress has been demonstrated yet, that doesn’t mean that there won’t be any. People are trying and presumably learning, and computers are getting more powerful which could conceivably help. But is it an urgent issue? Yudkowsky is apparently working on the issue but hasn’t published anything on the subject in many years and now seems more amused by theoretical physics so it doesn’t seem that urgent even to him.

  • steven

    Robin, I don’t have any more information on what Egan thinks than you do; it’s just that that was my interpretation of his later posts in the metamagician comment thread.

  • ad

    I.T. is an environment with dramatic first-mover advantages.

    Was Google the first search engine?

    I find it hard to believe that there will ever be one AI in the world. None, or thousands.

    And I am skeptical about the idea that there is a unitary thing called “intelligence”. There will be SI’s before there are AI’s, because they are more useful and simpler.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    All commenters: Do not attempt to duplicate, ad-hoc, the last several years of discussion in this field… for an introduction to my own viewpoint, see Artificial Intelligence as a positive and negative factor in global risk, the book chapter I did for Nick Bostrom.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Re: “no I.T. project has yet come anywhere close to taking over the world.”

    I would cite the development of the human brain as an example of a development in I.T. that has led to an ongoing takeover of the world. That’s only three times the size of its predecessors – AI will rather quickly have much more dramatic effects.

    Re: “Regarding advanced machine intelligence, my guess is that our best chance of achieving it within a century is to put aside the attempt to understand the mind, at least for now, and instead simply focus on copying the brain.” – http://spectrum.ieee.org/jun08/6274/2

    Not a chance, IMHO. This is Kurzweil-style thinking. Engineered Intelligences will come first by a loooooong way, as almost everyone else seems to agree.

    Copying the brain would not only be slow – and a case of setting the sights too low – it would also be dangerous, because of poor control over the goal system.

  • Caledonian

    Who can say what the result will be of a serious program to construct an intelligent machine? It’s never even been attempted.

  • Ian C.

    Thanks for the excellent links. The comment at the top of the Edge page about using Google to resolve ambiguity in natural language processing was also very clever, I thought.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, how exactly is one to systematically review the last several years of discussion of this field, before entering the discussion? Surely just reading a summary of your view is not sufficient. I know of no systematic review articles on these discussions.

  • Grant

    This is one thing I’ve always wondered about: Will AI be monolithic or Hayekian (SI)?

    Traditionally, knowledge in human societies increases with increased specialization, but humans have brains of more or less fixed computing power. We can’t grow another one if we want to learn a new trade. Our specialists frequently use different terminology, come from different cultural backgrounds, and generally can’t understand other specialists outside of their field. Would AI be able to do better? Why? If it could do better, would it be economically efficient for it to take that route vs. many SIs?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    @Hanson: Well, it’s not like we’re talking about something important here, just the fate of humanity, so you can’t expect much in the way of systematic review articles.

    But if you read one or two non-ad-hoc articles, you at least have an idea of what real analysis looks like, and that you aren’t allowed to just make everything up.

    Was your “Added” put there before or after I published my reply? Because it may be that the root of our disagreement reflects my own visceral experience of starting out expecting AI to be a huge amount of detailed drudgery, just like everyone said, but knowing that it had to be done anyway, maybe via Manhattan Project; and then seeing that there are deep insights after all (e.g. Bayes, Pearl), around the same time I realized that sloppy design would result in certain doom.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, my added was after your post today. I agree that a smart newcomer to the field comes across many big insights which seem far more powerful than just lots of little details, and meets and reads many folks who don’t seem to understand these insights. But if you meet and read the best old-timers, you find that they already understood similar big insights long ago, even if they expressed them in different ways. The field overall isn’t accumulating much in the way of big insights; what it accumulates are mostly lots of little insights, and of course more powerful hardware.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    How do old-timers address Kurzweil’s argument about how exponential growth in computing power will make AGI feasible for the first time in the mid-21st century? The flip side of that argument is that it might almost be a waste of time trying to pursue it before our hardware and software get to the point where it’s feasible. I’m not saying Kurzweil’s right, but old-timers making an argument that AGI won’t be available for multiple generations should address the Kurzweilian strain of argument rather than ignore it.

  • http://www.videogameworkout.com Glen Raphael

    Caledonian, what would you consider “a serious program to construct an intelligent machine”? I’d claim the task has been attempted many times, with some of the earliest attempts in the 1970s. Some are still going on today. The problem isn’t hardware speeds – basic algorithm theory says that if it can be done at all using a computer, you could probably build one today if you knew how to solve the *software* problem. It might be very very slow, but increasing the clock speed doesn’t solve the basic problem that we still don’t really know how to write the software that would produce even a very dumb level of general-purpose AI. Instead we have lots of special-purpose AIs, ones that can play chess or find the shortest path from point A to point B or push a block off a platform.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    The academic field is set up to produce little insights. What, you really think we’re out of big insights here? You don’t think there are big insights the size of conditional independence that might underlie relational probabilistic reasoning, or program synthesis, or reflectivity?

    Big insights traditionally take lots of time to produce. You’re reading a textbook, and there’s an equation (11) and then an obvious modification of the equation (12), and there’s a citation on (11) from 1972 and a citation on (12) from 1975. Your eyes just skip over it, because it looks normal, but what it represents is three years of work to go from (11) to (12).

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Another possible point of disagreement here: “Big insights” can be the end product of lots of individual contributions. But then that’s what the work done by researchers looks like. It doesn’t mean that the AI you can now build, the output of the research, is a special-purpose program with lots of little parts.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, yes big insights may result from many smaller efforts. I don’t see that this corner of academia rewards big insights much less than most other corners. The question isn’t whether there are more big insights, but whether lacking them is our main limitation on achieving A.I. We may need both a few more big insights and lots of little insights, in which case, even when we have all the needed big insights, progress may be limited by the rate at which we can accumulate small insights.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Robin, your concern about how one might begin to learn the state of consensus in a new field is one I have frequently expressed, on a much wider range of topics. Even identifying appropriate review articles is still a challenge for me.

    I get the impression from some things Eliezer has written, that his overriding concern is preventing the development of “unfriendly” AI that could dominate the world. I wonder if his apparent disagreement with you may reflect not so much actual differences in estimations of probability, as strategic calculation that his course of public advocacy is the best way to prevent evil AI. Indeed, given this hypothetical moral perspective, one would be essentially obligated to adopt such a policy. Based on some of Robin’s ideas, I would suggest that such a disconnect between internal beliefs and public positions should not be seen as wicked hypocrisy, but rather as a service to humanity.

    Eliezer’s and Robin’s mutual familiarity with the disagreement theorems would further suggest that their apparent disagreement is illusory, suggesting strategic behavior by at least one of them.

  • bambi

    Eliezer, a future blog entry about what big insight(s) you think are missing would be quite interesting.

    I had pretty much thought that AI (as opposed to computational neuroscience) has zero big insights beyond its “Aristotle in a bottle” roots, but I’ll take seriously your suggestion that Bayes and Pearl have provided some steps toward a foundation. I have Pearl’s book on order and will rethink my not-very-carefully-obtained view that Bayesianism is rarely useful because in actual reasoning about the world priors are almost always inaccurate to the point of uselessness, and defining what ‘A’ and ‘B’ are in P(A|B) is almost always hopeless.

    Despite my irritation expressed at what strikes me as unjustified apocalyptic hysteria, you clearly have a keen mind and I appreciate your entertaining writing.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    @Hal Finney: I don’t play those kinds of games. I am sometimes strategic about what I do not say or do not emphasize, but if I say it, I believe it.

    @Bambi: You want to start with “Probabilistic Reasoning in Intelligent Systems”, not “Causality”. I did this in the wrong order and it was unfun.

  • Shane

    You mis-spelled “Schank.”

  • http://transhumangoodness.blogspot.com/ Roko

    Roko, it did seem easier when I started out. As in many fields, the process of becoming an old-timer teaches you that some things can indeed be very hard.

    – but how? What precisely did being an AI old-timer teach you (or Schank)? I’m considering going into the field (I am literally finishing my final exams and wondering where to go for a PhD), and if you can give me some solid evidence as to why it is unreasonable to expect AGI research to succeed in our lifetimes, then I will consider binning the idea and going for an easier life. But I’m looking for evidence rather than sentiments.

  • http://hanson.gmu.edu Robin Hanson

    Shane, thanks, fixed it in this post and in Eliezer’s.

  • poke

    Going beyond my initial reaction; I think Schank is wrong too. My earlier critique that AI ignored biology extends beyond general intelligence to “small problems” in AI such as computer vision, voice recognition, text-to-speech, semantics and handwriting recognition and even problems in robotics (movement, navigation, etc). There isn’t a Crystalline Platonic Realm of AI problems. There’s no such thing as “the problem of vision” for example. The reason all these research areas have been disappointing is that they have no genuine object of research; they’re chasing a phantasm.

    The illusion that vision is a software engineering problem that can be solved through introspection and reflection probably stems from two common misconceptions. The first is the belief that introspection is infallible. The truth is that when you look at a photograph and conclude that recognizing the objects present is a fairly straightforward software engineering problem, your introspection is leading you astray. It looks like a well-defined problem precisely because you have no access to the workings of your visual system. In reality vision is an utterly arbitrary biological phenomenon.

    The second stems from the illusion of generality that evolution presents to us. We think vision is well-defined because so many animals have visual systems, just as we think there’s something called intelligence we can arrange on a continuum by reference to the different behavior of different animals and the illusion of evolutionary progress, but this apparent generality is an illusion of heredity. There’s just the specific biological mechanism we label “vision” and not a general problem awaiting a general engineering solution. (Pervasive gross misinterpretations of selection compound the issue.)

    IMHO the failure of AI was predictable and its existence as an area of scientific research borderline scandalous.

  • Joseph Knecht

    @Poke: your first point with regard to misconceptions is a straw man: *nobody* in their right mind would ever say that introspection is infallible, nor have I ever read of a non-crank who held such a position either.

  • Douglas Knight

    poke,
    are you saying that vision is hard because the output isn’t really in any clean format, but part of our representation of the world? and, moreover, there’s feedback from our knowledge of the world, telling us what to expect to see; so that we might as well be working on general intelligence?

  • http://hanson.gmu.edu Robin Hanson

    See the “More added” note in the post.

  • Cyan

    …rethink my not-very-carefully-obtained view that Bayesianism is rarely useful because in actual reasoning about the world priors are almost always inaccurate to the point of uselessness, and defining what ‘A’ and ‘B’ are in P(A|B) is almost always hopeless.

    I suggest “Bayesian Data Analysis, 2nd ed.” by Gelman et al. If you’re tackling Pearl on causality, then this is the right book for you on practical applications of the Bayesian approach. (If you want fundamental justification, I suggest reading the first two chapters of Jaynes’s “Probability Theory: The Logic of Science”; they cover the Cox Theorems that provide grounds for using the Bayesian approach.)

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I wish I had known what I knew now at age 15. I think I could have made better use of it then.

    I figure it takes around ten years to produce an FAI researcher. I figure I hit my expiration date at forty – that’s when my father suddenly turned old. So I’m writing a series of letters to my successors, who will probably be around 15-17 when they read them, and then they have ten years to follow the path, with more of a boost than I ever got.

    The possibility is never far from my mind, that whatever I do, and however far I or any other FAI researchers manage to get, it’s going to have to be transmitted to someone who learns the theory at 18, before it can be used natively.

    Maybe us Old Fogeys will still have a reservoir of experience that makes us strong and valuable. Maybe not.

    No, I don’t think the young are generically smarter than the old, or generically more trustworthy. But I don’t trust many people at all, old warriors or young. When I take a stand on a Singularity issue, I’m standing in the direct center of my expertise. Anyone who wants to argue with that can argue with my arguments; it would be silly for me to trust their authority. I wouldn’t argue with Sebastian Thrun about mobile robotics, or with Peter Norvig about search. If they want to argue with me about recursive self-enhancement, they’re welcome to present arguments.

    By default, I have to assume that their knowledge of my professional sphere is at the standard bright-informed-amateur level, because that’s what it usually is with AIfolk.

  • http://dl4.jottit.com/contact Richard Hollerith

    Heh, I was just going to mention the age of 40 as the point past which the brain is too old to wield the knowledge necessary to make the kind of predictions Schank is trying to make. So for example, while Schank can read E.T. Jaynes’s Probability Theory: the Logic of Science as soon as it is published just as Eliezer can, Schank is over 40 when it is published, so he cannot rotate and transpose the material in his head like Eliezer can.

  • http://dl4.jottit.com/contact Richard Hollerith

    And Robin: I’m 47 so it will not work to reply to me that after I turn 40 I will probably have some other theory of how scientific ability varies with age which favors people in their 40s.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Re: “Was Google the first search engine?”

    No, but look at Microsoft or Intel.

    Of course the first seed AI being the ancestor of the last AI is far from a certain outcome.

    E.g. maybe the builders of the first seed AI will cripple it with takeoff constraints – and so inadvertently allow a subsequent AI to take over before its air supply can get cut off, and its lunch can be eaten.

    Also, the rise to power of these things may take more than “a few months”.

  • Marc Geddes

    Steven Pinker mentions a putative ‘language of thought’ in his new book ‘The Stuff of Thought’.

    I sent Pinker an e-mail saying that it sounded like he was looking for a general purpose ‘Upper Ontology’:

    Upper Ontology

    Pinker’s comment:

    “Yes, I agree that what I am calling a language of thought is closely related to what computer scientists call an ontology.”

    I dropped the concept of a putative ‘Universal Parser’. Pinker’s comment:

    “I’m not sure whether a universal parser is feasible (in practice – in principle, I’d insist that it is). As I note in chapter 8 (and in the “Talking Heads” chapter in The Language Instinct), sentence interpretation in context requires considerable knowledge about the speaker’s intentions, which may require duplicating a good part of the speaker’s social and cultural knowledge base. That doesn’t seem to be that easy to implement, especially if it is meant to apply cross-culturally, to any language. But perhaps some day.”

    Upper Ontology. Universal Parser. Hmm. Sounds a possible new ‘big insight’ into AGI.

    An Upper Ontology for General Purpose Reality Modelling.

    Hee hee…

  • http://www.cawtech.freeserve.co.uk Alan Crowe

    Section five of Artificial Intelligence as a positive and negative factor in global risk talks about using theorem provers in the design of silicon chips. I recognised the software in question, ACL2, A Computational Logic for Applicative Common Lisp. I’m interested in it as part of my vision of the medium term future of computer programming languages. I’ve downloaded the software and tried to learn to drive it.

    Notice that the vision I sketched goes too far. Provers, such as ACL2, can show that algorithms compute the same function, but they do not prove results about space and time requirements. They cannot express the idea that one algorithm is faster, or that another uses less memory. (Well, actually they can: you code an instrumented interpreter for the algorithms and prove results about the interpreter, but that is my point, there is another level required.) So my vision is not the next step, but since it builds on stuff that ACL2 cannot do, it is two steps on from current research. Also my vision is well short of general AI.
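
    To make the instrumented-interpreter idea concrete, here is a rough sketch in Python rather than ACL2 (the two toy algorithms and the crude line-counting tracer are just an invented illustration, not how you would actually do it inside ACL2): a prover can show that the two functions agree on their output, while any claim about speed is a claim about the step counter, one level up.

        import sys

        def sum_loop(n):
            # O(n) algorithm: add up 0..n-1 one term at a time.
            total = 0
            for i in range(n):
                total += i
            return total

        def sum_formula(n):
            # O(1) algorithm computing the same function.
            return n * (n - 1) // 2

        def instrumented(algorithm, n):
            # Run `algorithm` under a tracer that counts executed lines --
            # a crude stand-in for interpreting the algorithm and counting steps.
            steps = 0
            def tracer(frame, event, arg):
                nonlocal steps
                if event == "line":
                    steps += 1
                return tracer
            sys.settrace(tracer)
            try:
                result = algorithm(n)
            finally:
                sys.settrace(None)
            return result, steps

        for algo in (sum_loop, sum_formula):
            result, steps = instrumented(algo, 1000)
            print(algo.__name__, result, steps)

    Observing the counter is easy; proving theorems about it is the extra level I am pointing at.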

    My opinion is that we are conducting AI research three or more conceptual levels below where the action is, and can therefore make no direct progress. We can only enlarge and deepen the computing culture, with the hope of moving up a level at some time in the future. Meanwhile you can download ACL2 from the University of Texas and get a feel for the state of the art. Then you can have an opinion too!

  • poke

    Joseph Knecht,

    The infallibility of introspection was a central belief in philosophy for hundreds of years. Most people nowadays don’t explicitly endorse the belief but their beliefs about how we should approach and understand the mind are clearly shaped by people who did.

  • Joseph Knecht

    Poke: you have to go back further than behaviorism to find a time when it was scientifically plausible to suppose that introspection is infallible, regardless of whether some philosophers of mind may have held the opinion more recently than that. Behaviorism itself was a reaction to the introspective methods of late 19th century psychology.

    I agree that there was extreme overoptimism in being able to understand cognition in the early days of AI, but they quickly realized things were not as simple as they seemed when they failed so miserably. And even in the early days, nobody would have accepted the much stronger belief you stated that “introspection is infallible,” which was my point.

    I think I agree with your sentiment, but expressing that as “introspection is infallible” is profoundly misleading.

  • bambi

    Cyan, thanks for the references, I am tracking those down as well.

    To clarify (not that anybody cares), when I wrote “defining what ‘A’ and ‘B’ are in P(A|B)”, what I mean is that I want to see how this way of looking at reasoning doesn’t fail for the same reasons Eliezer (accurately IMO) refers to GOFAI as “suggestively named lisp tokens”. Bayesian updating may be more sophisticated than pure deduction, but the reference issue is what I’m really keen on understanding.
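
    For concreteness, here is the sort of tidy textbook case where pinning down A and B is easy – a rough Python sketch with invented event names and numbers, purely for illustration: A = “the patient has the condition”, B = “the test comes back positive”. My complaint is that real-world reasoning rarely comes this cleanly packaged.

        # Toy Bayesian update; all numbers invented for illustration.
        p_a = 0.01               # prior P(A): base rate of the condition
        p_b_given_a = 0.95       # P(B|A): test sensitivity
        p_b_given_not_a = 0.05   # P(B|~A): false-positive rate

        # Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
        p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

        # Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
        p_a_given_b = p_b_given_a * p_a / p_b
        print(round(p_a_given_b, 3))   # about 0.161

    The update itself is trivial; the hard part, as I said, is saying what A and B refer to, and where the prior comes from, once you leave examples like this.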

  • http://shagbark.livejournal.com Phil Goetz

    How do old-timers address Kurzweil’s argument about how exponential growth in computing power will make AGI feasible for the first time in the mid-21st century?

    If we had a computer today that had infinite memory, and could give the result of any terminating computation in zero time, we would not know how to build an AGI with it. (Some people are of the opinion that some type of lookup-table or theorem-prover could succeed in this case, but I disagree. There is not enough data for a lookup table, and we wouldn’t know how to formalize the world for the theorem prover.)

  • Unknown

    Phil, that point actually supports Eliezer’s position that the problem of AGI is simply an issue of software.

    Of course, unfortunately for Eliezer, this also means that there is very little evidence regarding his proposed timeframe: Roger Schank and Daniel Dennett could easily turn out to be right.

  • Karen Patrick

    Looking forward to the day I can walk into my local Walmart and get the family AI. I am certainly up for the AI taking the kids to their activities, helping with homework, preparing the kids for exams, walking the dogs, changing the cat box, cleaning the fish tank, doing the housework, mowing the lawn, working on the yard, planning our meals, doing the shopping, repainting my house, folding the laundry… ahhh… the possibilities are endless!

    Karen