Shared AI Wins

Almost every new technology comes at first in a dizzying variety of styles, and then converges to what later seems the "obvious" configuration.  It is actually quite an eye-opener to go back and see old might-have-beens, from steam-powered cars to pneumatic tube mail to Memex to Engelbart’s computer tools.  Techs that are only imagined, not implemented, take on the widest range of variations.  When actual implementations appear, people slowly figure out what works better, while network and other scale effects lock in popular approaches.  As standards congeal, competitors focus on smaller variations around accepted approaches.  Those who stick with odd standards tend to be marginalized.

Eliezer says that barriers to sharing standards are why AIs would "foom" locally, with one AI quickly growing from so small that no one notices it to so powerful that it takes over the world:

I also don’t think this [scenario] is allowed: … knowledge and even skills are widely traded in this economy of AI systems. In concert, these AIs, and their human owners, and the economy that surrounds them, undergo a collective FOOM of self-improvement.  No local agent is capable of doing all this work, only the collective system. …  [The reason is that] trading cognitive content around between diverse AIs is more difficult and less likely than it might sound.  Consider the field of AI as it works today.  Is there any standard database of cognitive content that you buy off the shelf and plug into your amazing new system, whether it be a chessplayer or a new data-mining algorithm? …


The diversity of cognitive architectures acts as a tremendous barrier to trading around cognitive content. … If two AIs both see an apple for the first time, and they both independently form concepts about that apple … their thoughts are effectively written in a different language. … The barrier this opposes to a true, cross-agent, literal "economy of mind", is so strong, that in the vast majority of AI applications you set out to write today, you will not bother to import any standardized preprocessed cognitive content.  It will be easier for your AI application to start with some standard examples – databases of that sort of thing do exist, in some fields anyway – and redo all the cognitive work of learning on its own. …  Looking over the diversity of architectures proposed at any AGI conference I’ve attended, it is very hard to imagine directly trading cognitive content between any two of them.

But of course "visionaries" take a wide range of incompatible approaches. Commercial software tries much harder to match standards and share sources.  The whole point of CYC was that AI researchers neglect compatibility and sharing because they are more interested in writing papers than making real systems.  The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy.  You couldn’t build an effective cell or ecosystem or developed economy or most any complex system that way either – such things require not just good structure but also lots of good content.  Loners who start over from scratch rarely beat established groups that share enough standards to exchange improvements and so slowly accumulate content.

CYC content may or may not jump-start a sharing AI community, but AI just won’t happen without a whole lot of content.  If ems appear first, perhaps shareable em contents could form a different basis for shared improvements.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy.

    How do you know that? By analogy with historical facts?

    How is raw data not “content”?

  • http://yudkowsky.net/ Eliezer Yudkowsky

    It’s generally a terrible analogy, but would you say that a human baby growing up is getting “raw data” fed into the right architecture, or that human babies are exposed to data preprocessed by their parents, or that human babies get standardized data?

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    (More to the point, there are of course Solomonoff’s universal induction and AIXI, so, theoretically, you can have a learning agent that knows nothing about environment and yet in the end learns to give the right answers. No a priori reason to suppose that there is no efficient algorithm that goes along the same lines.)
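    For readers who want the formal object behind this reference, one standard way to write Solomonoff's universal prior (assuming a universal monotone machine U, with ℓ(p) the length of program p in bits, and U(p) = x* meaning p's output begins with x) is:

        M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

    A predictor that weights computable hypotheses this way has predictions that converge to the true conditional probabilities in any computable environment, and AIXI couples such a predictor to expected-reward maximization. Whether any efficient algorithm can approximate this is exactly the open question raised above.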

  • http://jed.jive.com/ Jed Harris

    I’ve been interested in this discussion, but a bit frustrated because of structural problems with blog discussions. I’ll comment very briefly on that and then give a couple of comments on Robin’s and Eliezer’s positions respectively.

    The structural problem is that if you have comments that apply to large scale patterns in a blog, it’s very hard to find a place to put them (in the blog — of course you can put them on your own blog). So I’ve been watching this back and forth but haven’t seen a good “on topic” way to comment on the most basic issues. Now since Robin has pretty much raised them in this post I can do that.

    Regarding the possibility / likelihood of “singletons”: they do emerge, but I think only through network effects. The (almost entirely) universal DNA code, the (so far) universal internet protocols, etc. are all singletons in some important sense — once they “jelled” they made potential competitors (even ones that might have been better) impossible.

    However I can’t think of any examples where a single group grew to have universal power — that is, a single group became a (very large scale persistent) singleton. I also can’t think of any case where a group grew to control a majority of any large region or economy while access to crucial “secrets” that made that group powerful remained under the control of that group. This would be pretty much a requirement for the “single group becomes a singleton” scenario.

    Groups that have become very powerful generally have done so by exploiting network effects — e.g. Microsoft.

    If there are examples of uni-centric singletons, I’d really like to hear what they are. It would be very helpful to examine them in detail.

    Of course even if we can’t find any examples, this isn’t a logical proof that uni-centric singletons are impossible. But I think it does give us reasons to require a fairly strong argument in favor.

    So my working hypothesis is that far from the multi-centric AI “foom” scenario being “forbidden”, it is the only possible scenario for an AI “foom”. And furthermore, a necessary aspect of the “foom” is that we get (probably multiple layers of) lock-in to exchange “standards” for cognitive content / structure via network effects. (Incidentally, there probably is no viable distinction between cognitive structure and content.)

    I’ll post another comment on Robin’s “ems” scenario.

  • http://jed.jive.com/ Jed Harris

    Regarding Robin’s “ems” scenario:

    First, I think this is also a case where only a multi-centric process could be successful, again with multiple levels of lock-in to standards for exchanging measurement and simulation techniques. Of course this is exactly what’s happening now in neuroscience and I see no reason to think it will suddenly change at some point.

    Second, the question of whether ems or general AI comes first is essentially trying to determine an inequality between two unknowns. So we aren’t going to be able to come up with a very strong answer.

    I personally think that simulating brains directly, without a very very strong set of theories about how they “work effectively” that let us “cheat” a lot, will take enormously more computing power than performing the equivalent functions “directly” in code.

    I believe this because there are good indications from neuroscience that simulations would have to go down to a very very low level to be accurate. For example, it seems that nerve cell memory is partly retained by fine grained chemistry including methylation of nuclear DNA. This implies that without cheating we’d have to directly emulate a significant amount of cellular chemistry at the level of individual molecules. It could be even worse than this; protein folding depends on the relative energy of multiple quantum states of the molecule, and we don’t know that this doesn’t enter into brain computations somehow.

    Of course possibly we could find good ways to cheat. But any cheating we can figure out would also help us think about large scale highly parallel computational systems with useful behavior that aren’t direct copies of brains — in other words, it would help us build AI systems. In fact I expect brain research will contribute a lot to AI.

    Another problem with simulating brains is that they aren’t very modular, and there is a lot of messy coupling across scales (memory using nuclear methylation is an example). This makes partial replicable success much much harder. All our examples of successful developments of complex systems have depended very heavily on modularity, and particularly on clean layering.

    AI on the other hand is pretty clearly just a large family of engineering problems. We have prototype self-driving cars, prototype artificial pack horses, pretty good OCR, mediocre speech transcription, decent spam filtering, learning helicopter control from expert pilots, etc. There are no visible barriers to making each of these arbitrarily good, and the technology created that way will transfer to related domains. This work is gradually building libraries of concepts, tools and eventually standards and modular building blocks. As these applications get more successful they’ll pull in more investment and research will go faster. All we need for eventual success is to keep improving these libraries and adding higher layers.

    So I’m reasonably confident that AI will arrive before ems, but I have to admit to a lot of uncertainty. I’m much more confident that whichever does come first, it will be multi-centric, not uni-centric.

  • http://hanson.gmu.edu Robin Hanson

    Vladimir, by “content” I mean stuff that takes work to create, and that is much more valuable than random raw data, and that resides in and makes sense because of a wider framework.

    Eliezer, a human baby certainly gets raw data, and it has a good architecture too, but in addition I’d say it has lots of genetically encoded info about what sort of patterns in data to expect and attend to, i.e., what sort of abstractions to consider. In addition, when raising kids we focus their attention on relevant and useful patterns and abstractions. And of course we just tell them lots of stuff too.

    Jed, yes the first emulations may be very expensive, until we can see what cut-corners preserve function. And yes hand coded AI is “clearly just a large family of engineering problems”, but I don’t know why that makes it seem remotely doable anytime soon to you. I suspect you think we are further along now than I think.

  • http://jed.jive.com/ Jed Harris

    I’d also like to respond to a couple of points made by Eliezer in “Permitted Possibilities, & Locality”. In that post he explicitly resists the multi-centric AI “foom”.

    Regarding the human “economy of mind”: We’re very much embedded in a larger “economy of mind”. Individual humans can’t accomplish much without a community to train them and trade around ideas. So we can’t even get (what we think of as) human-like performance without a pretty thick web of human interaction.

    Absent further arguments I don’t currently see, I still think general AI could arise as an ability of a network of more partial intelligences.

    Eliezer’s point about the difficulty of trading cognitive content / structure raises much more interesting issues. Developing standards for externalizing and internalizing cognitive content / structure will certainly constrain development and impose some costs. But we can’t therefore rule it out, it is an engineering / economic tradeoff.

    This problem arises at all levels of system development and sometimes it is worth generating standards and sometimes it isn’t.

    Languages (both human and computer) are standards for externalizing content / structure.

    At this point lots of exchange occurs between different AI systems, but it is mostly mediated by human minds. The main exceptions are standard libraries for computing generally useful intermediate results. Other than those libraries, the languages used for sharing content / structure are still specialized human languages (for example the way researchers talk in papers for NIPS / COLT). But just like any other area of system development, they’ll tend to crystallize out into machine-processable languages as soon as they stabilize enough to make that cost-effective, or often even before.

    I don’t see Eliezer’s warrant for his estimates of the cost of sharing content / structure. He seems to think this cost would be proportionately higher than for human researchers. While it is true that humans all have the same biological cognitive architecture, it is also true that they can’t introspect well, and can’t directly externalize their current cognitive structure.

    Perhaps AI systems would simply write out some of their internal structure and other AIs would analyze it, or even just run it in a virtual machine (for safety, traceability, etc.). Humans read and run each others’ code all the time, and we are working in a diversity of languages and architectures. The process might be closer to bacteria swapping plasmids than our painful conversion of thought into language.
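    A minimal sketch of the kind of exchange imagined here, in which one system writes out a piece of learned structure in a neutral serialization and another loads and reuses it without redoing the learning; the toy learner, file format, and names are illustrative assumptions, not any real AI interchange standard:

        import json

        # "Agent A" learns a crude concept: which tokens predict a positive label.
        def learn_token_weights(examples):
            weights = {}
            for text, label in examples:
                for token in text.lower().split():
                    weights[token] = weights.get(token, 0) + (1 if label else -1)
            return weights

        # Export the learned structure as data any other system can parse.
        def export_content(weights, path):
            with open(path, "w") as f:
                json.dump(weights, f)

        # "Agent B" imports the content and applies it without relearning.
        def import_and_score(path, text):
            with open(path) as f:
                weights = json.load(f)
            return sum(weights.get(tok, 0) for tok in text.lower().split())

        if __name__ == "__main__":
            examples = [("ripe red apple", True), ("rotten brown core", False)]
            export_content(learn_token_weights(examples), "shared_content.json")
            print(import_and_score("shared_content.json", "a red apple"))  # prints 2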

    So while there might be barriers to AI emerging as an economy (or better, ecology) of mind, I think this would need to be demonstrated, and it hasn’t been.

    Finally, a general comment on how the development of AI will tend to unfold. Right now system development is a mixed ecology of humans and (increasingly virtual) machines. We’re transferring activities from our minds to the machines as fast as we can. We’ve made a lot of progress in that direction since we started in the 1940s but obviously have a long way to go.

    It seems that we’re calling “general AI” the point at which humans no longer play a necessary role in this ecology. From the perspective of the ecology it won’t be general at all, there will still be just as many hard problems, but they’ll be ones humans mostly can’t understand or help to solve.

    Attempting to restate Eliezer’s concern, he wants to make sure the resulting ecology still cares about the well being of humans, in some way very similar to the way we care about ourselves or each other (but presumably not the way the xenophobes care about the well being of “people not like them”, which to my mind is as much an expression of typical human values).

    I think this is a valid concern. Quite likely it will need to be addressed well before we get to general AI, if we want our virtual machines to help police the virtual and/or physical world, adjudicate disputes, manage resource allocation, etc.

    So I think this concern is important whether or not AI goes “foom” suddenly. I think working on it without the AI goes “foom” issue attached would help to engage a wider audience.

    In the context of this blog, perhaps we should note that this is an attempt to build a bias into the information ecology that it won’t want to overcome. Perhaps our goal should be to redesign bias, not eliminate it.

  • frelkins

    @Jed
    just a large family of engineering problems

    But this is exactly why the WBE paper seems nearly dispositive to me. Since reading it, I have searched for the last month in vain, talking to all kinds of people in all sorts of places, asking for a similar roadmap for direct, hand-coded AI.

    As an engineering project, we should expect to see a roadmap if the project is serious. Yet I cannot find one, nor can I find anyone who knows of one, has ever started one, can agree on how one could even be started (since the field is so fragmented and very few of the many theorists appear to agree even on the true nature of the problem), much less has thought to start one, can agree on who would be qualified to write a credible one, or would know how to find funding to start one.

    The Manhattan Project began with an Einstein Letter, then the report of the Briggs Committee, and then of course the MAUD Committee chose the technology to pursue. Compton’s S-1 project group apparently staked out some plan of what would have to be accomplished. Fermi worked in parallel on the reactor at the Met Lab – then the two streams came together into the Manhattan Project itself.

    Where is the comparable roadmap development for direct hand-coded AI? Please offer me credible URLs, thanks. Someone? Anyone?

  • http://jed.jive.com/ Jed Harris

    In response to Robin:

    yes the first emulations may be very expensive, until we can see what cut-corners preserve function.

    No doubt we’ll be inventing ways to cut the corners all the way along — we already are now. My point is that the ability to pick corners to cut, and to cut them reliably, implies a lot of detailed understanding of why the brain functions as well as it does. There’s no reason to think that understanding would apply only to simulating brains.

    And yes hand coded AI is “clearly just a large family of engineering problems”, but I don’t know why that makes it seem remotely doable anytime soon to you. I suspect you think we are further along now than I think.

    I don’t think AI is all that far along. We have some limited examples of success that may give us a way to judge the trend line.

    On the other hand brain scanning and emulation is also a “large family of engineering problems” so the question is which is larger / harder. Fully emulating biological systems seems hard. We can’t yet fully model even the neural network of Caenorhabditis elegans (302 neurons, with a completely known layout and genetic makeup).

    So I guess we’ll get to general AI first — though if we use “not quite general AI” to help us emulate brains we might get both at the same time. But I did and do acknowledge a lot of uncertainty about this.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Books and teachers can be accessed through the same “raw” data. It looks like your position is that recognizing as important and understanding preprocessed content requires complicated content-specific adaptations that themselves can’t be automatically/efficiently learned.

    It’s again a question of finding an efficient algorithm. It’s theoretically possible to learn without these content-specific adaptations (apart from some way of encoding utility), so we only need a truckload of content-specific mechanisms if it’s the most straightforward way to pass the capability threshold. I don’t see why it should be.

  • http://don.geddis.org/ Don Geddis

    @ frelkins: you ask where the roadmap is for hand-coded AI. Obviously, there is no such thing at this point, at least not with the kind of specificity the Manhattan Project had.

    That’s because AI is not “just” an engineering project. It isn’t “merely” the assembly of well-understood units, perhaps in some unique configuration, but with clearly predicted properties. Instead, there remains a lot of science to be done. Even with the computing power of all the world’s supercomputers combined; even allowing timescales of weeks or months to answer questions, instead of requiring real-time responses; even then, NOBODY (yet) knows how to code such a thing.

    At least with the Manhattan Project, it was fairly well understood before beginning, what kinds of energies are locked up inside of matter; the fact that fission (and fusion) liberates some of those energies, and the fact that a “chain reaction” should be possible, given sufficient critical mass. From there, it was “mere engineering” to deliver on the theoretical promise.

    AI is more like looking at a black box that does encryption. You know what the inputs are; you can see the outputs; you can look inside the box, and see that nothing magic is going on, it’s all just a collection of ordinary transistors.

    But you have no idea what the encryption algorithm is, you’ve never even imagined Public Key encryption, you don’t know about the difficulty of factoring products of large primes, etc.

    Is this “an engineering problem”? Someday. But not yet. Not yet.

  • mjgeddes

    CYC is way too narrow to be of any use, but the general idea was right.

    Sharing of content is the purpose of ontology of course, and any ‘language’ enabling a general framework for sharing of content would be the language of ontology, by definition.

    To my mind, getting a *universal parser* (capable of translation/sharing between any different high-level representations of valid concepts) is the very solution to the AGI problem. What do you think consciousness is? Consciousness is precisely the interface mechanism for switching between different kinds of high-level representations, which enables sharing of cognitive content!

    All you need is the ‘levers’ for parsing and switching between knowledge domains (ontology merging). Academia has already created all the different bits of underlying individual machinery for us (or it will do so); it’s just that the bits are not integrated.

    EY always did grossly underestimate the deviousness of real hackers. You see, we can already GET all the underlying machinery off other people; the levers are all we need.

  • Marcello

    Robin says: “The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn’t build an effective cell or ecosystem or developed economy or most any complex system that way either – such things require not just good structure but also lots of good content.”

    A long time ago there was no life on this planet. There was nothing there that could deserve the name “content” or “knowledge”… then stuff happened… and today we live in a world rich with knowledge/content.

    So there must be at least one process that creates knowledge/content from raw data alone. Further, these processes can’t work by magic; if they work, there ought to be a reason. If this reason isn’t incomprehensibly complicated, somebody who understood it could potentially make a machine that takes advantage of that reason to create knowledge/content efficiently. I’m not necessarily claiming that such a machine would be able to do things like invent advanced nanotech without first building lots of what you would call content, but I am saying that it must be possible for a machine to create “content” by itself, because if not, we would live in a universe devoid of “content”.

  • http://jed.jive.com/ Jed Harris

    re Don Geddis’ comment

    AI is not “just” an engineering project. It isn’t “merely” the assembly of well-understood units, perhaps in some unique configuration, but with clearly predicted properties.

    I’ll agree if that’s your definition of engineering. I was thinking more in the sense of exploratory development (typical with major new software).

    If you look at how Sebastian Thrun’s group developed self-driving cars, or how Andrew Ng’s group synthesized helicopter stunt controls from human examples, there don’t seem to be major conceptual breakthroughs, “just” a series of excellent new engineering ideas, well executed.

    The same kind of exploratory development has been driving us down the exponential improvement curve in digital hardware for forty years. It is far from a predictable combination of existing units, but I’m not sure it has generated any major new scientific understanding.

    There are certainly lots of experiments and dead ends. So learning is an essential part of the process. We may look back and see some elegant abstractions that make all this simpler, but first we have to build the systems they can simplify. This happened with control theory, and I’m sure many other areas.

    I guess you could reasonably doubt that we could get to human level intelligence by piling up this kind of exploratory development. However, note that this is a different claim and needs a different argument. I actually do believe that Thrun’s and Ng’s work (and that of many other similar projects) can be built up over decades into human equivalent AI and I’d be interested in responding to arguments that it can’t.

  • Larry Lard

    Robin > The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy

    Robin, Eliezer: this is your point of disagreement.

  • luzr

    “It’s generally a terrible analogy, but would you say that a human baby growing up is getting “raw data” fed into the right architecture, or that human babies are exposed to data preprocessed by their parents, or that human babies get standardized data?”

    I guess the problem there is that babies are known to be working… And it takes more than ten years before you can judge the quality of the result.

    What we face here is engineering a baby that works. We can suppose that this baby will grow much faster than real babies do – but most likely only one baby out of thousands (or millions) will be found working. Therefore, we need faster teachers than humans.

  • Tim Tyler

    You need content – but we have a whole internet of content, mostly available for anyone – though of course only Google has access to certain important resources – such as Google Books. More than content, you need actuators that affect a world, and feedback about which actions are effective. For Google the actuators are its search results – and the feedback it gets consists of who clicks on which link. For traders, the actuators are investments, and the stock price provides feedback.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Robin: Eliezer, a human baby certainly gets raw data, and it has a good architecture too, but in addition I’d say it has lots of genetically encoded info about what sort of patterns in data to expect and attend to, i.e., what sort of abstractions to consider. In addition, when raising kids we focus their attention on relevant and useful patterns and abstractions. And of course we just tell them lots of stuff too.

    This is much like my visualization of how an AI works, except that there’s substantially less “genetically encoded info” at the time you boot up the system – mostly consisting of priors that have to be encoded procedurally. This is work done by natural selection in the case of humans; so some of that is taken off your hands by programs that you write, and some of it is work you do at runtime over the course of the AI’s development, rather than trying to encode into the very first initial system. But you can’t exactly leave out Bayes’s Rule, or causal graphs, or modus ponens, from the first system.

    Jed: Developing standards for externalizing and internalizing cognitive content / structure will certainly constrain development and impose some costs. But we can’t therefore rule it out, it is an engineering / economic tradeoff.

    Just keep in mind that the Japanese Fifth Generation project, their mighty attempt to achieve serious Artificial Intelligence for the sake of national dominance, tried to standardize on logic programming.

    Jed: I guess you could reasonably doubt that we could get to human level intelligence by piling up this kind of exploratory development.

    You end up with very different AIs depending on whether you get there by piling up exploratory development or through insight. Both roads should be possible, since natural selection built humans without insight. But only AIs in the second class can be knowably Friendly, and I suspect that the proportion of worlds that survive the first type of AI development is tiny.

  • luzr

    Tim Tyler:

    “You need content – but we have a whole internet of content, mostly available for anyone – though of course only Google has access to certain important resources – such as Google Books. More than content, you need actuators that affect a world, and feedback about which actions are effective. For Google the actuators are its search results – and the feedback it gets consists of who clicks on which link. For traders, the actuators are investments, and the stock price provides feedback.”

    Correct. Indeed, mining internet content for knowledge is the obvious way to start.

    Anyway, I believe that existing knowledge bases like Cyc or OpenCog can provide good feedback on whether your “AI miner” is extracting the correct knowledge – and much faster than anything that involves human interaction.

    If you can develop an algorithm that, just by scanning arbitrary text on the internet, arrives at the same results as those hard-coded in the Cyc database, you are half-way there…
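    A toy sketch of the kind of test proposed here: a crude pattern extractor stands in for the hypothetical “AI miner” and a tiny hand-coded set of assertions stands in for a fragment of Cyc; the triple format and patterns are illustrative assumptions, not Cyc’s actual representation:

        import re

        # Hand-coded assertions, standing in for a fragment of a Cyc-like knowledge base.
        hand_coded = {("apple", "isa", "fruit"), ("paris", "capitalof", "france")}

        def mine_text(text):
            """Extract (subject, relation, object) triples from simple English patterns."""
            triples = set()
            for subj, obj in re.findall(r"(\w+) is an? (\w+)", text.lower()):
                triples.add((subj, "isa", obj))
            for subj, obj in re.findall(r"(\w+) is the capital of (\w+)", text.lower()):
                triples.add((subj, "capitalof", obj))
            return triples

        mined = mine_text("An apple is a fruit. Paris is the capital of France.")
        overlap = mined & hand_coded
        print(f"recovered {len(overlap)} of {len(hand_coded)} hand-coded assertions")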

  • luzr

    “through insight”

    “But only AIs in the second class can be knowably Friendly, and I suspect that the proportion of worlds that survive the first type of AI development is tiny.”

    I guess most of those who disagree with you would welcome some sort of less vague explanation of both premises or perhaps some proofs.

    First, as you correctly state, we know that it is possible to build AI without insight. OTOH, there is nothing to support the “insight” path.

    Second, it looks like humans are built without insight, but still are generally friendly to fellow humans. It looks like the key is that there are many other humans involved in the environment. It also appears that as intelligence grows, we generally tend to be MORE friendly to our fellow humans.

    So to sum it up, we know that a lot of minds created by a blind process without insight seem to be quite friendly to other minds.

    What you seem to propose is that the only possible path leading to friendly AI is SINGLE mind created WITH INSIGHT. That is the EXACT OPPOSITE of what we know to work.

    And the only argument to support your thesis is the recursion theory – despite the fact that many of us see human civilization already living in a tight recursion environment.

    You should not be surprised that some of us consider your theory somewhat ridiculous.

  • http://hanson.gmu.edu Robin Hanson

    All, the internet doesn’t really count as content to creatures that don’t know how to parse it and use it in reasoning. Mind content isn’t external scratchings you puzzle over; it is internal resources structured and integrated to be usable in reasoning.

    Jed, understanding low level brain processes enough to aid em corner-cutting need not help much with understanding high level architecture.

    Marcello, yes of course feeding raw data into the right architecture could eventually produce human level intelligence; I meant it is fantasy to think this could take a reasonable time, relative to the option of making use of the content human minds now hold, which is our precious heritage.

    Eliezer, yes well-chosen priors are the key “encoded info.” There may be a misunderstanding that when I say “info” people think I mean direct facts like “Paris is capital of France”, while I instead mean any content within your architecture that helps you focus attention well. Clearly human babies do leave out Bayes’ Rule and modus ponens, but yes we should put that in if we can cleanly do so. I’d just claim that doesn’t get you very far; you’ll need to find a way to inherit big chunks of the vast human content heritage.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Robin, “Bayes’s Rule” doesn’t mean a little declarative representation of Bayes’s Rule, it means updating in response to evidence that seems more likely in one case than another. Hence “encoded procedurally”.
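    A minimal sketch of what “encoded procedurally” could look like in code: nothing below states Bayes’s Rule declaratively; there is only a routine that shifts a belief toward whichever hypothesis made the observed evidence more likely. The coin example and the numbers are purely illustrative:

        def update(prior, likelihood_if_h1, likelihood_if_h2):
            """Return the posterior probability of hypothesis 1 after one piece of evidence."""
            joint_h1 = prior * likelihood_if_h1
            return joint_h1 / (joint_h1 + (1 - prior) * likelihood_if_h2)

        belief = 0.5  # prior that the coin is biased toward heads
        for outcome in ["H", "H", "T", "H"]:
            p_if_biased = 0.8 if outcome == "H" else 0.2
            p_if_fair = 0.5
            belief = update(belief, p_if_biased, p_if_fair)
        print(round(belief, 3))  # belief drifts above 0.5 after three heads and one tail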

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, yes babies clearly do approximately encode some implications of Bayes’ Rule, but also clearly fail to encode many other implications.

  • Tim Tyler

    the internet doesn’t really count as content to creatures that don’t know how to parse it and use it in reasoning.

    The idea is that organisms can learn. Like babies learn. You need some content to be a baby in the first place – but it doesn’t seem to be an enormous quantity.

  • http://cabalamat.wordpress.com/ Philip Hunt

    EY: would you say that a human baby growing up is getting “raw data” fed into the right architecture, or that human babies are exposed to data preprocessed by their parents, or that human babies get standardized data?

    None of the above.

    A young human is not just a passive recipient of data, but is interacting with it. It’s the interactions that are largely responsible for growth in the human’s intelligence.

    Experiments with kittens have demonstrated that interaction is important: http://books.google.com/books?id=etNd-4ppKv8C&pg=PA54&lpg=PA54&dq=kitten+visual+stimulation+experiment&source=web&ots=MiURgJJ4vB&sig=fuUVQewVRdKMScNQRXRISPyM_T8#PPA54,M1

  • Marcello

    Robin: Humans acquire information much faster than evolution. A smart human can acquire information faster than a dumb human. Humans themselves evolved intelligence recently, so I would guess that the design of the newer parts of the human brain is probably as bad as, say, the design of the human spine. Even if evolution had had more time, we’re still talking about the process which wired our retinas the wrong way.

    In short, there are processes which acquire knowledge at vastly different efficiencies and even the most efficient one we know of shows many flaws. So is it really fantasy that it might be possible to build something which acquires the information much faster?

  • http://jed.jive.com/ Jed Harris

    I was too glib in skipping over Don Geddis’s comment that “there remains a lot of science to be done”. We may disagree in that he seems to feel we can do the science first and then the engineering, while I think we have to be doing engineering right along. But on reflection he is right that we need science.

    When writing my earlier response I was thinking we hadn’t produced anything in the computing and AI domain comparable to the Heisenberg uncertainty principle, Newton’s laws, etc. And perhaps we haven’t. But we have produced some insights that rise well above “just engineering”.

    Notably most of these insights are quite directly traceable to engineers working on a large set of related problems for decades, and sometimes beating their heads against a wall that the insight finally made visible. Note that many of the insights are negative.

    Here’s a quick sampling because I don’t have time to elaborate. Maybe we can discuss later if people want.

    • Information as a measurable quantity.
    • Turing’s incomputability results
    • Complexity hierarchy, and intractability proofs for various flavors of reasoning and search
    • Search and optimization as basic elements of AI systems
    • Kolmogorov entropy / maximum entropy / minimum description length
    • Switch from logic to statistical learning as the conceptual language of AI
    • Use of population / evolutionary methods and analysis

    So I agree with Eliezer and Don that insight is required. I think if we had tried to just “muddle through” without these insights we’d be progressing very slowly, if at all.

    Conversely however I think that insight generally comes from accumulated engineering examples (successful and unsuccessful) that outline the issue to be understood, the way flour in the air of a garage can show what invisible animal is present (if any).

    So after reflection if we have any disagreement, it is about how to get to insight.

  • http://jed.jive.com/ Jed Harris

    In response to Robin:

    understanding low level brain processes enough to aid em corner-cutting need not help much with understanding high level architecture.

    Certainly this could be true given what we know now, but I’m pretty confident that it is unlikely, based on a fairly large number of examples of how people are trying and the tools they need.

    I guess to you it seems likely but I don’t know why.

    If we want to pursue this probably the only way to pin down where we diverge is to get into the specifics of how we judge where the probability mass is in this domain. I can’t do that right now but I’m willing if you want to later.
