After spending a decade or two living inside a mind, you might think you knew a bit about how minds work, right? That’s what quite a few AGI wannabes (people who think they’ve got what it takes to program an Artificial General Intelligence) seem to have concluded. This, unfortunately, is wrong.

    Artificial Intelligence is fundamentally about reducing the mental to the non-mental.

    You might want to contemplate that sentence for a while. It’s important.

    Living inside a human mind doesn’t teach you the art of reductionism, because nearly all of the work is carried out beneath your sight, by the opaque black boxes of the brain. So far beneath your sight that there is no introspective sense that the black box is there—no internal sensory event marking that the work has been delegated.

    Did Aristotle realize that when he talked about the telos, the final cause of events, he was delegating predictive labor to his brain’s complicated planning mechanisms—asking, “What would this object do, if it could make plans?” I rather doubt it. Aristotle thought the brain was an organ for cooling the blood—which he did think was important: humans, thanks to their larger brains, were more calm and contemplative.

    So there’s an AI design for you! We just need to cool down the computer a lot, so it will be more calm and contemplative, and won’t rush headlong into doing stupid things like modern computers. That’s an example of fake reductionism. “Humans are more contemplative because their blood is cooler,” I mean. It doesn’t resolve the black box of the word contemplative. You can’t predict what a contemplative thing does using a complicated model with internal moving parts composed of merely material, merely causal elements—positive and negative voltages on a transistor being the canonical example of a merely material and causal element of a model. All you can do is imagine yourself being contemplative, to get an idea of what a contemplative agent does.

    Which is to say that you can only reason about “contemplative-ness” by empathic inference—using your own brain as a black box with the contemplativeness lever pulled, to predict the output of another black box.

    You can imagine another agent being contemplative, but again that’s an act of empathic inference—the way this imaginative act works is by adjusting your own brain to run in contemplativeness-mode, not by modeling the other brain neuron by neuron. Yes, that may be more efficient, but it doesn’t let you build a “contemplative” mind from scratch.

    You can say that “cold blood causes contemplativeness” and then you just have fake causality: You’ve drawn a little arrow from a box reading “cold blood” to a box reading “contemplativeness,” but you haven’t looked inside the box—you’re still generating your predictions using empathy.

    You can say that “lots of little neurons, which are all strictly electrical and chemical with no ontologically basic contemplativeness in them, combine into a complex network that emergently exhibits contemplativeness.” And that is still a fake reduction and you still haven’t looked inside the black box. You still can’t say what a “contemplative” thing will do, using a non-empathic model. You just took a box labeled “lotsa neurons,” and drew an arrow labeled “emergence” to a black box containing your remembered sensation of contemplativeness, which, when you imagine it, tells your brain to empathize with the box by contemplating.

    So what do real reductions look like?

    Like the relationship between the feeling of evidence-ness, of justification-ness, and E. T. Jaynes’s Probability Theory: The Logic of Science. You can go around in circles all day, saying how the nature of evidence is that it justifies some proposition, by meaning that it’s more likely to be true, but all of these just invoke your brain’s internal feelings of evidence-ness, justifies-ness, likeliness. That part is easy—the going around in circles part. The part where you go from there to Bayes’s Theorem is hard.

    And the fundamental mental ability that lets someone learn Artificial Intelligence is the ability to tell the difference. So that you know you aren’t done yet, nor even really started, when you say, “Evidence is when an observation justifies a belief.” But atoms are not evidential, justifying, meaningful, likely, propositional, or true; they are just atoms. Only things like P(H|E) = P(E|H)P(H) / P(E) count as substantial progress. (And that’s only the first step of the reduction: what are these E and H objects, if not mysterious black boxes? Where do your hypotheses come from? From your creativity? And what’s a hypothesis, when no atom is a hypothesis?)
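    To make this concrete, here is a minimal sketch in Python of what the reduced notion buys you: “E is evidence for H” cashed out as arithmetic on probabilities, with invented numbers, rather than as a felt sense of justifies-ness.

        def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
            """Return P(H|E) from P(H), P(E|H), and P(E|~H)."""
            p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
            return p_e_given_h * prior_h / p_e

        # E counts as evidence for H exactly when observing E raises P(H),
        # i.e. when P(E|H) > P(E|~H). Nothing here is "evidential" in itself;
        # it is just multiplication and division. The numbers are made up.
        prior = 0.01
        posterior = bayes_update(prior, p_e_given_h=0.8, p_e_given_not_h=0.1)
        print(posterior > prior)  # True: under these numbers, E supports H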

    Another excellent example of genuine reduction can be found in Judea Pearl’s Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference[1]. You could go around in circles all day talking about how a cause is something that makes something else happen, and until you understood the nature of conditional independence, you would be helpless to make an AI that reasons about causation. Because you wouldn’t understand what was happening when your brain mysteriously decided that if you learned your burglar alarm went off, but you then learned that a small earthquake took place, you would retract your initial conclusion that your house had been burglarized.
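    Here is a toy version of that alarm example, sketched in Python by brute-force enumeration over a three-variable network (burglary, earthquake, alarm). The probability numbers are invented; the qualitative effect is the point: learning of the earthquake “explains away” the burglary.

        from itertools import product

        P_B = 0.001                     # P(burglary)  -- illustrative numbers only
        P_E = 0.002                     # P(earthquake)
        P_A = {(True, True): 0.95,      # P(alarm | burglary, earthquake)
               (True, False): 0.94,
               (False, True): 0.29,
               (False, False): 0.001}

        def joint(b, e, a):
            p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
            return p * (P_A[(b, e)] if a else 1 - P_A[(b, e)])

        def prob_burglary(evidence):
            """P(burglary | evidence), where evidence fixes some of 'b', 'e', 'a'."""
            num = den = 0.0
            for b, e, a in product([True, False], repeat=3):
                world = {"b": b, "e": e, "a": a}
                if any(world[k] != v for k, v in evidence.items()):
                    continue
                den += joint(b, e, a)
                if b:
                    num += joint(b, e, a)
            return num / den

        print(prob_burglary({"a": True}))             # alarm alone: burglary plausible (~0.37)
        print(prob_burglary({"a": True, "e": True}))  # add earthquake: belief retracted (~0.003)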

    If you want an AI that plays chess, you can go around in circles indefinitely talking about how you want the AI to make good moves, which are moves that can be expected to win the game, which are moves that are prudent strategies for defeating the opponent, et cetera; and while you may then have some idea of which moves you want the AI to make, it’s all for naught until you come up with the notion of a mini-max search tree.
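    For contrast, here is a bare-bones sketch of a minimax search in Python. The game object and its methods (moves, result, is_terminal, score) are hypothetical placeholders for whatever board representation you plug in; the point is only that “good move” bottoms out in a mechanical procedure rather than in empathy.

        def minimax(game, state, depth, maximizing):
            """Return (value, move) for the best move found by exhaustive search to `depth`."""
            if depth == 0 or game.is_terminal(state):
                return game.score(state), None
            best_move = None
            best_value = float("-inf") if maximizing else float("inf")
            for move in game.moves(state):
                value, _ = minimax(game, game.result(state, move), depth - 1, not maximizing)
                if (maximizing and value > best_value) or (not maximizing and value < best_value):
                    best_value, best_move = value, move
            return best_value, best_move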

    But until you know about search trees, until you know about conditional independence, until you know about Bayes’s Theorem, then it may still seem to you that you have a perfectly good understanding of where good moves and nonmonotonic reasoning and evaluation of evidence come from. It may seem, for example, that they come from cooling the blood.

    And indeed I know many people who believe that intelligence is the product of commonsense knowledge or massive parallelism or creative destruction or intuitive rather than rational reasoning, or whatever. But all these are only dreams, which do not give you any way to say what intelligence is, or what an intelligence will do next, except by pointing at a human. And when the one goes to build their wondrous AI, they only build a system of detached levers, “knowledge” consisting of LISP tokens labeled apple and the like; or perhaps they build a “massively parallel neural net, just like the human brain.” And are shocked—shocked!—when nothing much happens.

    AI designs made of human parts are only dreams; they can exist in the imagination, but not translate into transistors. This applies specifically to “AI designs” that look like boxes with arrows between them and meaningful-sounding labels on the boxes. (For a truly epic example thereof, see any Mentifex Diagram.)

    Later I will say more upon this subject, but I can go ahead and tell you one of the guiding principles: If you meet someone who says that their AI will do XYZ just like humans, do not give them any venture capital. Say to them rather: “I’m sorry, I’ve never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies.

    So now you perceive, I hope, why, if you wanted to teach someone to do fundamental work on strong AI—bearing in mind that this is demonstrably a very difficult art, which is not learned by a supermajority of students who are just taught existing reductions such as search trees—then you might go on for some length about such matters as the fine art of reductionism, about playing rationalist’s Taboo to excise problematic words and replace them with their referents, about anthropomorphism, and, of course, about early stopping on mysterious answers to mysterious questions.


    [1] Pearl, Probabilistic Reasoning in Intelligent Systems.


    "Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies."

    Good analogy.

    Analogies are often helpful for communicating knowledge that you got by other means.

    Simply mimicking the human brain in an attempt to produce intelligence is akin to scavenging code off the web to write a program. Will you understand well how the program works? No. Will it work? If you can hack it, very possibly.

    It seems that we're only just beginning to learn how to hack nature. Personally, I'd say it's a much more likely way to AI than deliberate design. But that may be just because I don't think humans are collectively that bright.

    Written any code lately? How's Flare coming along?

    Eliezer, do you work on coding AI? What is the ideal project that intersects practical value and progress towards AGI? How constrained is the pursuit of AGI by a lack of hardware optimized for its general requirements? I'd love to hear more nuts and bolts stuff.

    JB, ditched Flare years ago.

    Aron, if I knew what code to write, I would be writing it right now. So I'm working on the "knowing" part. I don't think AGI is hardware-constrained at all - it would be a tremendous challenge just to properly use one billion operations per second, rather than throwing most of it away into inefficient algorithms.

    If you meet someone who says that their AI will do XYZ just like humans ... Say to them rather: "I'm sorry, I've never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example."

    This seems the wrong attitude toward someone who proposes to pursue AI via whole brain emulation. You might say that approach is too hard, or the time is not right, or that another approach will do better or earlier. But whole brain emulation hardly relies on vague analogies to human brains - it would be directly making use of their abilities.

    Aron, I don't think anyone really knows the general requirements for AGI, and therefore nobody knows what (if any) kind of specialized hardware is necessary. But if you're a hardware guy and you want something to work on, you could read Pearl's book (mentioned above) and find ways to implement some of the more computationally intensive inference algorithms in hardware. You might also want to look up the work by Geoff Hinton et al. on restricted Boltzmann machines and try to implement the associated algorithms in hardware.
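    For concreteness, here is a minimal sketch of the kind of algorithm involved: one CD-1 (contrastive divergence) update for a binary restricted Boltzmann machine, written with NumPy. The layer sizes, learning rate, and random data are purely illustrative, not a hardware recipe.

        import numpy as np

        rng = np.random.default_rng(0)
        n_visible, n_hidden, lr = 6, 4, 0.1        # illustrative sizes and learning rate
        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        b_v = np.zeros(n_visible)                  # visible biases
        b_h = np.zeros(n_hidden)                   # hidden biases

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def cd1_step(v0):
            """One CD-1 update from a batch of binary visible vectors v0."""
            global W, b_v, b_h
            p_h0 = sigmoid(v0 @ W + b_h)           # positive phase
            h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
            p_v1 = sigmoid(h0 @ W.T + b_v)         # reconstruction
            p_h1 = sigmoid(p_v1 @ W + b_h)         # negative phase
            W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
            b_v += lr * (v0 - p_v1).mean(axis=0)
            b_h += lr * (p_h0 - p_h1).mean(axis=0)

        data = (rng.random((32, n_visible)) < 0.5).astype(float)   # fake binary data
        for _ in range(100):
            cd1_step(data)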

    Eliezer, of course in order to construct AI we need to know what intelligence really is, what induction is, etc. But consider an analogy to economics. Economists understand the broad principles of the economy, but not the nuts and bolts details. The inability of the participants to fully comprehend the market system hardly inhibits its ability to function. A similar situation may hold for intelligence: we might be able to construct intelligent systems with only an understanding of the broad principles, but not the precise details, of thought.

    The following is a public service announcement for all Overcoming Bias readers who may be thinking of trying to construct a real AI.

    AI IS HARD. IT'S REALLY FRICKING HARD. IF YOU ARE NOT WILLING TO TRY TO DO THINGS THAT ARE REALLY FRICKING HARD THEN YOU SHOULD NOT BE WORKING ON AI. You know how hard it is to build a successful Internet startup? You know how hard it is to become a published author? You know how hard it is to make a billion dollars? Now compare the number of successful startups, successful authors, and billionaires, to the number of successful designs for a strong AI. IT'S REALLY FRICKING HARD. So if you want to even take a shot at it, accept that you're going to have to do things that DON'T SOUND EASY, like UNDERSTAND FRICKING INTELLIGENCE, and hold yourself to standards that are UNCOMFORTABLY high. You have got to LEVEL UP to take on this dragon.

    Thank you. This concludes the public service announcement.

    Robin, whole brain emulation might be physically possible, but I wouldn't advise putting venture capital into a project to build a flying machine by emulating a bird. Also there's the destroy-the-world issue if you don't know what you're doing.

    But I'm a level 12 halfling wizard! Isn't that enough?

    If you haven't seen a brain, "Nothing is easier than to familiarize one's self with the mammalian brain. Get a sheep's head, a small saw, chisel, scalpel and forceps..."

    In fact - you can just buy them at the butcher's shop - ready-prepared... often in pairs.

    Re: Simply mimicking the human brain in an attempt to produce intelligence is akin to scavenging code off the web to write a program. Will you understand well how the program works? No. Will it work? If you can hack it, very possibly.

    Except that this is undocumented spaghetti code which comes with no manual, is written in a language for which you have no interpreter, was built by a genetic algorithm, and is constructed so that it disintegrates.

    The prospective hacker needs to be more than brave, they need to have no idea that other approaches are possible.

    Re: I don't think anyone really knows the general requirements for AGI, and therefore nobody knows what (if any) kind of specialized hardware is necessary.

    One thing which we might need - and don't yet really have - is parallelism.

    True, there are FPGAs, but these are still a nightmare to use. Elsewhere parallelism is absurdly coarse-grained.

    We probably won't need anything very fancy on the hardware front - just more speed and less cost, to make the results performance- and cost-competitive with humans.

    Eliezer, if designing planes had turned out to be "really fricking hard" enough, requiring "uncomfortably high standards" that mere mortals shouldn't bother attempting, humans might well have flown first by emulating birds. Whole brain emulation should be doable within about a half century, so another approach to AI will succeed first only if it is not really, really fricking hard.

    Dan, I've implemented RBMs and assorted statistical machine learning algorithms in the context of the Netflix Prize. I've also recently adapted some of these to work on Nvidia cards via their CUDA platform. Performance improvements have been 20-100x and this is hardware that has only taken a few steps away from pure graphics specialization. Fine-grained parallelization, improved memory bandwidth, less chip logic devoted to branch prediction, user-controlled shared memory, etc. help.

    I'm seeing a lot of interesting applications in multimedia processing, many of which have statistical learning elements. One project at Siggraph allowed users to modify a single frame of video and have that modification automatically adapt across the entire video. Magic stuff. If we are heading towards hardware that is closer to what we'd expect as the proper substrate for AI, and we are finding commercial applications that promote this development, then I think we are building towards this fricking hard problem the only way possible: in small steps. It's not the conjugate gradient, but we'll get there.

    Aron: What did those performance improvements of 20-100x buy you in terms of reduced squared error on the Netflix Prize?

    @Robin

    But unless you use an actual human brain for your AI, you're still just creating a model that works in some way "like" a human brain. To know that it will work, you'll need to know which behaviors of the brain are important to your model and which are not (voltages? chemical transfers? tiny quantum events?). You'll also need to know what capabilities the initial brain model you construct will need vs. those it can learn along the way. I don't see how you get the answers to those questions without figuring out what intelligence really is unless generating your models is extraordinarily cheap.

    For the planes/birds analogy, it's the same as the idea that feathers are really not all that useful for flight as such. But without some understanding of aerodynamics, there's no reason not to waste a lot of time on them for your bird flight emulator, while possibly never getting your wing shape really right.

    Eliezer: AI IS HARD. ... You have got to LEVEL UP to take on this dragon.

    • How long have you personally spent working on the AGI problem? I heard that at some point about 10 years ago, you and Ben Goertzel thought you could wrap up the AI problem in a few years. I also heard that both Robin and Nick Bostrom have worked on AI and given up. Given this data, it seems that the problem is probably beyond anyone; though this doesn't mean that it won't get solved bit-by-bit.

    Roko has a point there.

    I like "AI IS HARD. IT'S REALLY FRICKING HARD." But that is an argument that could cut you in several ways. Anything that has never been done is really hard. Can you tell those degrees of really hard beforehand? 105 years ago, airplanes were really hard; today, most of us could get the fundamentals of designing one with a bit of effort. The problem of human flight has not changed, but its perceived difficulty has. Is AI that kind of problem, the one that is really hard until suddenly it is not, and everyone will have a half-dozen AIs around the house in fifty years? Is AI hard like time travel? Like unaided human flight? Like proving Fermat's Last Theorem?

    It seems like those CAPS will turn on you at some point in the discussion.

    Eliezer, I suspect that was rhetorical. However, top algorithms that avoid overtraining can benefit from adding model parameters (though in massively decreasing returns of scale). There are top-tier Monte Carlo algorithms that take weeks to converge, and if you gave them years and more parameters they'd do better (if only slightly). It may ultimately prove to be a non-zero advantage for those that have the algorithmic expertise and the hardware advantage, particularly in a contest where people are fighting for very small quantitative differences. I mentioned this for Dan's benefit and didn't intend to connect it directly to strong AI.

    I'm not imagining a scenario where someone in a lab is handed a computer that runs at 1 exaflop and this person throws a stacked RBM on there and then finally has a friend. However, I am encouraged by the steps that Nvidia and AMD have taken towards scientific computing and Intel (though behind) is simultaneously headed the same direction. Suddenly we may have a situation where for commodity prices, applications can be built that do phenomenally interesting things in video and audio processing (and others I'm unaware of). These applications aren't semantic powerhouses of abstraction, but they are undeniably more AI-like than what came before, utilizing statistical inferences and deep parallelization. Along the way we learn the basic nuts and bolts engineering basics of how to distribute work among different hardware architectures, code in parallel, develop reusable libraries and frameworks, etc.

    If we take for granted that strong AI is so fricking hard we can't get there in one step, we have to start looking at what steps we can take today that are productive. That's what I'd really love to see your brain examine: the logical path to take. If we find a killer application today along the lines above, then we'll have a lot more people talking about activation functions and log probabilities. In contrast, the progress of hardware from 2001-2006 was pretty disappointing (to me at least) outside of the graphics domain.

    AGI may be hard, but narrow AI isn't necessarily. How many OB readers care about real AI vs. just improving their rationality? It's not that straightforward to demonstrate to the common reader how these two are related.

    Realizing the points you make in this post about AI is just like lv 10 out of 200 or something levels. It's somewhat disappointing that you actually have to even bother talking about it, because this should have been realized by everyone back in 1956, or at least in 1970, after the first round of failures. (Marvin Minsky, why weren't you pushing that point back then?) But is it bad that I sort of like how most people are confused nowadays? Conflicting emotions on this one.

    Whole brain emulation -- hm, sounds like some single human being gets to be godlike first, then. Who do we pick for this? The Dalai Lama? Barack Obama? Is worrying about this a perennial topic of intellectual masturbation? Maybe.

    Eliezer, what destroy-the-world issues do you see resulting from whole brain emulation? I see risks that the world will be dominated by intelligences that I don't like, but nothing that resembles tiling the universe with smiley faces.

    Roko, Ben thought he could do it in a few years, and still thinks so now. I was not working with Ben on AI, then or now, and I didn't think I could do it in a few years, then or now. I made mistakes in my wild and reckless youth but that was not one of them.

    [Correction: Moshe Looks points out that in 1996, "Staring into the Singularity", I claimed that it ought to be possible to get to the Singularity by 2005, which I thought I would have a reasonable chance of doing given a hundred million dollars per year. This claim was for brute-forcing AI via Manhattan Project, before I had any concept of Friendly AI. And I do think that Ben Goertzel generally sounds a bit more optimistic and reassuring about his AI project getting to general intelligence in on the order of five years given decent funding. Nonetheless, the statement above is wrong. Apparently this statement was so out of character for my modern self that I simply have no memory of ever making it, an interesting but not surprising observation - there's a reason I talk about Eliezer_1996 like he was a different person. It should also be mentioned that I do assess a thought-worthy chance of AI showing up in five years, though probably not Friendly. But this doesn't reflect the problem being easy, it reflects me trying to widen my confidence intervals.]

    Zubon, the thought has tormented me for quite a while that if scientific progress continued at exactly the current rate, then it probably wouldn't be more than 100 years before Friendly AI was a six-month project for one grad student. But you see, those six months are not the hard part of the work. That's never the really hard part of the work. Scientific progress is the really fricking hard part of the work. But this is rarely appreciated, because most people don't work on that, and only apply existing techniques - that's their only referent for "hard" or "easy", and scientific progress isn't a thought that occurs to them, really. Which also goes for the majority of AGI wannabes - they think in terms of hard or easy techniques to apply, just like they think in terms of cheap or expensive hardware; the notion of hard or easy scientific problems-of-understanding to solve, does not appear anywhere on their gameboard. Scientific problems are either already solved, or clearly much too difficult for anyone to solve; so we'll have to deal with the problem using a technique we already understand, or an understandable technology that seems to be progressing, like whole brain emulation or parallel programming.

    These are not the important things, and they are not the gap that separates you from the imaginary grad student of 100 years hence. That gap is made out of mysteries, and you cross it by dissolving them.

    Peter, human brains are somewhat unstable even operating in ancestral parameters. Yes, you run into a different class of problems with uploading. And unlike FAI, there is a nonzero chance of full success even if you don't use exact math for everything. But there are still problems.

    Richard, the whole brain emulation approach starts with and then emulates a particular human brain.

    Michael, we have lots of experience picking humans to give power to.

    AI IS HARD. IT'S REALLY FRICKING HARD.
    So start with artificial stupidity. Stupidity is plentiful and ubiquitous - it follows that it should be easy for us to reproduce.

    As it happens, we've made far more progress making computer programs that can 'think' as well as insects than programs that can think like humans. So start with insects first, and work your way up from there.

    So start with artificial stupidity.

    [ INSERT MICROSOFT JOKE HERE ]

    Aron: If we take for granted that strong AI is so fricking hard we can't get there in one step, we have to start looking at what steps we can take today that are productive.

    Well we probably want to work on friendly goal systems rather than how to get AGI to work. Ideally, you want to know exactly what kind of motivational system your AGI should have before you (or anyone else) knows how to build an AGI.

    My personal estimate is veering towards "AGI won't come first", because a number of clever people like Robin Hanson and the guys at the Future of Humanity Institute think whole brain emulation will come first, and have good arguments for that conclusion. However, we should be ready for the contingency that AGI really gets going, which is why I think that Eliezer & SingInst are doing such a valuable job.

    So, I would modify Aron's request to: If we take for granted that Friendly AI is so hard we can't get there in one step, we have to start looking at what steps we can take today that are productive. What productive steps towards FAI can we take today?

    Disclaimer: perhaps the long-standing members of this blog understand the following question and may consider it impertinent. Sincerely, I am just confused (as I think anyone going to the Singularity site would be).

    When I visit this page describing the "team" at the Singularity Institute, it states that Ben Goertzel is the "Director of Research", and Eliezer Yudkowsky is the "Research Fellow". EY states (above); "I was not working with Ben on AI, then or now." What actually goes on at SIAI?

    Eliezer, if the US government announced a new Manhattan Project-grade attempt to be the first to build AGI, and put you in charge, would you be able to confidently say how such money should be spent in order to make genuine progress on such a goal?

    Ben does outside research projects like OpenCog, since he knows the field and has the connections, and is titled "Research Director". I bear responsibility for SIAI in-house research, and am titled "Research Fellow" because I helped found SIAI and I consider it nobler not to give myself grand titles like Grand Poobah.

    Silas, I would confidently say, "Oh hell no, the last thing we need right now is a Manhattan Project. Give me $5 million/year to spend on 10 promising researchers and 10 promising students, and maybe $5 million/year to spend on outside projects that might help, and then go away. If you're lucky we'll be ready to start coding in less than a decade."

    I think what goes on at SIAI is that Eliezer writes blog posts. ;)

    Re: a number of clever people like Robin Hanson and the guys at the Future of Humanity Institute think whole brain emulation will come first, and have good arguments for that conclusion.

    What? Where are these supposedly good arguments, then? Or do you mean the "crack of a future dawn" material?

    EY: Give me $5 million/year to spend on 10 promising researchers and 10 promising students, and maybe $5 million/year to spend on outside projects that might help, and then go away. If you're lucky we'll be ready to start coding in less than a decade.

    I am contacting the SIAI today to see whether they have some role I can play. If my math is correct, you need $100 million, and 20 selected individuals. If the money became available, do you have the individuals in mind? Would they do it?

    I'll be 72 in 10 years when the coding starts; how long will that take? Altruism be damned, remember my favorite quote: "I don't want to achieve immortality through my work. I want to achieve it through not dying." (W. Allen)

    Retired, are you signed up for cryonics?

    No, I don't have 20 people in mind. And I don't need that full amount, it's just the most I can presently imagine myself managing to use.

    EY: email me. I have a donor in mind.

    I will, but it looks from your blog like you're already talking to Michael Vassar. I broadcast to the world, Vassar handles personal networking.

    Tim: "What? Where are these supposedly good arguments, then? Or do you mean the crack of a future dawn material?"

    • Anders Sandberg argues that brain scanning techniques using a straightforward technology (slicing and electron microscopy) combined with Moore's law will allow us to do WBE on a fairly predictable timescale.

    Someone should write a "creating friendly uploads", but a first improvement over uploading then enhancing a single human would be uploading that human ten times and enhancing all ten copies in different ways so as to mitigate some possible insanity scenarios.

    Vassar handles personal networking? Dang, then I probably shouldn't have mouthed off at Robin right after he praised my work.

    Steven, I think both Toby Ord and separately Anna Salamon are working along those lines.

    Interesting.

    I guess that under the plausible (?) assumption that at least one enhancement strategy in a not too huge search space reliably produces friendly superintelligences, the problem reduces from creating to recognizing friendliness? Even so I'm not sure that helps.

    I would start by assuming away the "initially nice people" problem and ask if the "stability under enhancement" problem was solvable. If it was, then I'd add the "initially nice person" problem back in.

    If like me you don't expect the first upload to all by itself rapidly become all powerful, you don't need to worry as much about upload friendliness.

    [I] am titled "Research Fellow" because I helped found SIAI and I consider it nobler not to give myself grand titles like Grand Poobah.

    It seems to me that the titles "Director of Research" and "Executive Director" give the holders power over you, and it is not noble to give other people power over you in exchange for dubious compensation, and the fact that Ben's track record (and doctorate?) lends credibility to the organization strikes me as dubious compensation.

    Example: the holders of the titles might have the power to disrupt your scientific plans by bringing suit claiming that a technique or a work you created and need is the intellectual property of the SI.

    AI IS HARD. IT'S REALLY FRICKING HARD.

    Hundreds of blog posts and still no closer!

    Re: Anders Sandberg argues that brain scanning techniques using a straightforward technology (slicing and electron microscopy) combined with Moore's law will allow us to do WBE on a fairly predictable timescale.

    Well, that is not unreasonable - though it is not yet exactly crystal clear which brain features we would need to copy in order to produce something that would boot up. However, that is not a good argument for uploads coming first. Any such argument would necessarily compare upload and non-upload paths. Straightforward synthetic intelligence based on engineering principles seems likely to require much less demanding hardware, much less in the way of brain scanning technology - and much less in the way of understanding what's going on.

    The history of technology does not seem to favour the idea of AI via brain scanning to me. A car is not a synthetic horse. Calculators are not electronic abacuses. Solar panels are not technological trees. Deep Blue was not made of simulated neurons.

    It's not clear that we will ever bother with uploads - once we have AI. It will probably seem like a large and expensive engineering project with dubious benefits.

    @ Tim Tyler:

    I'm not sure how we can come to rational agreement on the relative likelihoods of WBE vs AGI being developed first. I am not emotionally committed to either view, but I'd very much like to get my hands on what factual information we have.

    I suspect that WBE should be considered the favorite at the moment, because human economic systems (the military, private companies, universities) like to work on projects where one can show incremental progress, and WBE/BCI has this property. There have recently been news stories about the US military engaging in brain simulation of a cat, and of them considering BCI technology to keep up with other military powers.

    I think it's likely that we will understand AGI well enough on the WBE track, even if AGI is not developed independently before that, and as a result this understanding will be implemented before WBE sorts out all the technical details and reaches its goal. So, even if it's hard to compare the independent development of these paths, the dependent scenario leads to the conclusion that AGI will likely come before WBE.

    "AI IS HARD."

    While it is apparent when something is flying, it is by no means clear what constitutes the "I" of "AI". The comparison with flight should be banned from all further AI discussions.

    I anticipate definition of "I" shortly after "I" is created. Perhaps, as is so often done in IT projects, managers will declare victory, force the system upon unwilling users and pass out T-shirts bearing: "AI PER MANDATUM" (AI by mandate).

    Or perhaps you have a definition of "I"?

    @Aron, wow, from your initial post I thought I was giving advice to an aspiring undergraduate, glad to realize I'm talking to an expert :-)

    Personally I continually bump up against performance limitations. This is often due to bad coding on my part and the overuse of Matlab for loops, but I still have the strong feeling that we need faster machines. In particular, I think full intelligence will require processing VAST amounts of raw unlabeled data (video, audio, etc.) and that will require fast machines. The application of statistical learning techniques to vast unlabeled data streams is about to open new doors. My take on this idea is spelled out better here.

    Anyone have any problems with defining intelligence as simply "mental ability"? People are intelligent in different ways, in accordance with their mental abilities, and IQ tests measure different aspects of intelligence by measuring different mental abilities.

    I define intelligence much more generally. I think that an entity is intelligent to the extent that it is able to engage in goal-directed activity, with respect to some environment. By this definition, fish and insects are intelligent. Humans, more so. "Environment" can be as general as you like. For example, it can include the temporal dimension. Or it might be digital. A machine that can detect a rolling ball, compute its path, and intercept it is intelligent.

    Aspects of human intelligence, such as language, and the ability to model novel environments, serve the end of goal-directed activity. I think the first-person view ('consciousness') is "real", but it is also subservient to the end of goal-directed activity. I think that as definitions go, one has got to start there, and build up and out. As Caledonian points out, this could also apply to construction plans.

    Andy Wood,
    Why the goal criterion? Every creature might be said to be engaging in goal-directed activity without actually having said goal. Also, what if the very goal of intercepting the ball is not intelligent?

    Admittedly, the "mental" aspect of "mental ability" might be difficult to apply to computers. Perhaps it would be an improvement to say intelligence is cognitive ability or facility. Mental abilities can take many forms and can be used in pursuit of many goals, but I think it is the abilities themselves which constitute intelligence. One who has better "mental abilities" will be better at pursuing their goals - whatever they might be - and indeed, better at determining which goals to pursue.

    If a creature engages in goal-directed activity, then I call it intelligent. If by "having said goal" you mean "consciously intends it", then I regard the faculties for consciously intending things as a more sophisticated means for aiming at goals. If intercepting the ball is characterized (not defined) as "not intelligent", that is true relative to some other goal that supersedes it.

    I'm basically asserting that the physical evolution of a system towards a goal, in the context of an environment, is what is meant when one distinguishes something that is "intelligent" from something (say, a bottle) that is not. Here, it is important to define "goal" and "environment" very broadly.

    Of course, people constantly use the word "intelligence" to mean something more complicated, and higher-level. So, someone might say that a human is definitely "intelligent", and maybe a chimp, but definitely not a fly. Well, I think that usage is a mistake, because this is a matter of degree. I'm saying that a fly has the "I" in "AI", just to a lesser degree than a human. One might argue that the fly doesn't make plans, or use tools, or any number of accessories to intelligence, but I see those faculties as upgrades that raise the degree of intelligence, rather than defining it.

    Before you start thinking about "minds" and "cognition", you've got to think about machinery in general. When machinery acquires self-direction (implying something toward which it is directed), a qualitative line is crossed. When machinery acquires faculties or techniques that improve self-direction, I think that is more appropriately considered quantitative.

    Andy,
    I get what you're saying, and I actually think most people would agree that a fly has a degree of intelligence, just not much. There is merit in your point about goals.

    Before you start thinking about "minds" and "cognition", you've got to think about machinery in general.

    I thought that's what I was doing. If you look at the "machinery" of intelligence, you find various cognitive faculties, AKA "mental abilities." The ability to do basic math is a cognitive faculty which is necessary for the pursuit of certain goals, and a factor in intelligence. The better one is at math, the better one is at pursuing certain goals, and the more intelligent one is in certain ways. Same for other faculties.

    How would you define self-direction? I'm not sure a fly has self-direction, though it can be said to have a modicum of intelligence. Flies act solely on instinct, no? If they're just responding automatically to their environment based on their evolved instincts, then in what sense do they have self-direction?

    Previous comment of mine contains an error. Apparently Eliezer_1996 did go on the record as saying that, given a hundred million dollars per year, he would have a "reasonable chance" of doing it in nine years. He was thinking of brute-forcing AI via Manhattan Project and heuristic soup, and wasn't thinking about Friendliness at all, but still.

    As my name has come up in this thread I thought I'd briefly chime in. I do believe it's reasonably likely that a human-level AGI could be created in a period of, let's say, 3-10 years, based on the OpenCogPrime design (see http://opencog.org/wiki/OpenCog_Prime). I don't claim any kind of certitude about this, it's just my best judgment at the moment.

    So far as I can recall, all projections I have ever made about the potential of my own work to lead to human-level (or greater) AGI have been couched in terms of what could be achieved if an adequate level of funding were provided for the work. A prior project of mine, Webmind, was well-funded for a brief period, but my Novamente project (http://novamente.net) never has been, and nor is OpenCogPrime ... yet.

    Whether others involved in OpenCogPrime work agree closely with my predictive estimates is really beside the point to me: some agree more closely than others. We are involved in doing technical research and engineering work according to a well-defined plan (aimed explicitly at AGI at the human level and beyond), and the important thing is knowing what needs to be done, not knowing exactly how long it will take. (If I found out my time estimate were off by a factor of 5, I'd still consider the work roughly equally worthwhile. If I found out it were off by a factor of 10, that would give me pause, and I would seriously consider devoting my efforts to developing some sort of brain scanning technology, or quantum computing hardware, or to developing some totally different sort of AGI design.)

    I do not have a mathematical proof that the OpenCogPrime design will work for human-level AGI at all, nor a rigorous calculation to support my time-estimate. I have discussed the relevant issues with many smart, knowledgeable people, but ultimately, as with any cutting-edge research project, there is a lot of uncertainty here.

    I really do not think that my subjective estimate about the viability of the OpenCogPrime AGI design is based on any kind of simple cognitive error. It could be a mistake, but it's not a naive or stupid mistake!

    In order to effectively verify or dispute my hypothesis that the OpenCogPrime design (or the Novamente Cognition Engine design: they're similar but not identical) is adequate for human-level AGI, with a reasonable level of certitude, Manhattan Project level funding would not be required. US $10M per year for a decade would be ample; and if things were done very carefully without too much bad luck, we might be able to move the project full-speed-ahead on as little as US $1.5 M per year, and achieve amazing results within as little as 3 years.

    Hell, we might be able to get to the end goal without ANY funding, based on the volunteer efforts of open-source AI developers, though this seems a particularly difficult path, and I think the best course will be to complement these much-valued volunteer efforts with funded effort.

    Anyway, a number of us are working actively on the OpenCogPrime project now (some funded by SIAI, some by Novamente LLC, some as volunteers) even without an overall "adequate" level of funding, and we're making real progress, though not as much as we'd like.

    Regarding my role with SIAI: as Eliezer stated in this thread, he and I have not been working closely together so far. I was invited into SIAI to, roughly speaking, develop a separate AGI research programme which complements Eliezer's but is still copacetic with SIAI's overall mission. So far the main thing I have done in this regard is to develop the open-source OpenCog (http://opencog.org) AGI software project of which OpenCogPrime is a subset.


    Is there some kind of "envisioning fallacy" that generalizes this?

    I have seen myself and others fall prey to a sense of power that is difficult to describe when discussing topics as diverse as physics simulation, Conway's Game of Life derivatives, and automatic equation derivation.

    And I have observed myself to once have this very same sense of power when I thought about some graphs with probabilistically weighted edges and how a walker on this graph would be able to interpret data and then make AI (it was a bit more complicated and smelled like an honest attempt, but there were definitely black boxes there).

    [This comment is no longer endorsed by its author]

    I think that the skill needed to avoid Fake Reductions is similar to the skill needed to program a computer (although at a much higher level). Students who are learning to program often give their variables meaningful names, which let humans understand what they are for, and assume that the computer will understand them as well. To get past this, they must learn that the whole algorithm needs to be put inside the program. An English explanation of an algorithm piggybacks on your internal understanding of it. When reading the English, you can use that understanding, but the computer doesn't have any access to it.

    In a nutshell, they need to go past labels and understand what structure the label is referring to.
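    A hypothetical Python illustration of that failure mode: the first function carries its meaning only in a label and hopes the reader's understanding fills the gap, while the second actually contains the structure the label refers to.

        def is_prime_by_label(n):
            looks_prime = True      # the name carries the meaning; the code does nothing
            return looks_prime

        def is_prime(n):
            if n < 2:
                return False
            for d in range(2, int(n ** 0.5) + 1):
                if n % d == 0:
                    return False
            return True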