
Wouldn't security risks also be a large barrier to the sharing of raw cognitive content? Verifying that raw cognitive content does not contain malicious tricks inserted by a smart adversary is not necessarily easy.

"The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy."

Doesn't the existence of the AIXI algorithm disagree? I don't doubt that to be a good doctor you need to know about human biology; I just don't see why you can't get that info out of raw medical scans and DNA sequences.

If you insist the AI needs some extra cognitive content, where does that come from? And why can't whatever device produces it be part of the AI?


In response to Robin:

"understanding low level brain processes enough to aid em corner-cutting need not help much with understanding high level architecture."

Certainly this could be true given what we know now, but I'm pretty confident that it is unlikely, based on a fairly large number of examples of how people are trying and the tools they need.

I guess it seems likely to you, but I don't know why.

If we want to pursue this, probably the only way to pin down where we diverge is to get into the specifics of how we judge where the probability mass is in this domain. I can't do that right now, but I'm willing to later if you want.


I was too glib in skipping over Don Geddis's comment that "there remains a lot of science to be done". We may disagree in that he seems to feel we can do the science first and then the engineering, while I think we have to be doing engineering right along. But on reflection he is right that we need science.

When writing my earlier response I was thinking we hadn't produced anything in the computing and AI domain comparable to the Heisenberg uncertainty principle, Newton's laws, etc. And perhaps we haven't. But we have produced some insights that rise well above "just engineering".

Notably, most of these insights are quite directly traceable to engineers working on a large set of related problems for decades, and sometimes beating their heads against a wall that the insight finally made visible. Note that many of the insights are negative.

Here's a quick sampling, because I don't have time to elaborate. Maybe we can discuss later if people want.

- Information as a measurable quantity
- Turing's uncomputability results
- The complexity hierarchy, and intractability proofs for various flavors of reasoning and search
- Search and optimization as basic elements of AI systems
- Kolmogorov entropy / maximum entropy / minimum description length
- The switch from logic to statistical learning as the conceptual language of AI
- Use of population / evolutionary methods and analysis

So I agree with Eliezer and Don that insight is required. I think if we had tried to just "muddle through" without these insights we'd be progressing very slowly, if at all.
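To make the first item concrete - information as a measurable quantity - here is a minimal Python sketch (my illustration, not part of the original list) computing Shannon entropy, the standard measure of information content; the example distributions are invented.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: the average information per outcome."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per flip.
print(shannon_entropy([0.5, 0.5]))              # 1.0
# A biased coin is more predictable, so each flip carries less information.
print(shannon_entropy([0.9, 0.1]))              # ~0.469
# Four equally likely outcomes need 2 bits to distinguish.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```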

Conversely, however, I think that insight generally comes from accumulated engineering examples (successful and unsuccessful) that outline the issue to be understood, the way flour in the air of a garage can show what invisible animal is present (if any).

So after reflection if we have any disagreement, it is about how to get to insight.


Robin: Humans acquire information much faster than evolution. A smart human can acquire information faster than a dumb human. Humans themselves evolved intelligence recently, so I would guess that the design of the new parts of the human brain is probably as bad as, say, the design of the human spine. Even if evolution had had more time, we're still talking about the process which wired our retinas the wrong way.

In short, there are processes which acquire knowledge at vastly different efficiencies, and even the most efficient one we know of shows many flaws. So is it really fantasy that it might be possible for something to build something which acquires the information much faster?


EY: would you say that a human baby growing up is getting "raw data" fed into the right architecture, or that human babies are exposed to data preprocessed by their parents, or that human babies get standardized data?

None of the above.

A young human is not just a passive recipient of data, but is interacting with it. It's the interactions that are largely responsible for the growth in the human's intelligence.

Experiments with kittens have demonstrated that interaction is important: http://books.google.com/boo...


"the internet doesn't really count as content to creatures that don't know how to parse it and use it in reasoning."

The idea is that organisms can learn. Like babies learn. You need some content to be a baby in the first place - but it doesn't seem to be an enormous quantity.


Eliezer, yes babies clearly do approximately encode some implications of Bayes' Rule, but also clearly fail to encode many other implications.


Robin, "Bayes's Rule" doesn't mean a little declarative representation of Bayes's Rule, it means updating in response to evidence that seems more likely in one case than another. Hence "encoded procedurally".


All, the internet doesn't really count as content to creatures that don't know how to parse it and use it in reasoning. Mind content isn't external scratchings you puzzle over; it is internal resources structured and integrated to be usable in reasoning.

Jed, understanding low level brain processes enough to aid em corner-cutting need not help much with understanding high level architecture.

Marcello, yes of course feeding raw data into the right architecture could eventually produce human level intelligence; I meant it is fantasy to think this could take a reasonable time, relative to the option of making use of the content human minds now hold, which is our precious heritage.

Eliezer, yes, well-chosen priors are the key "encoded info." There may be a misunderstanding: when I say "info", people think I mean direct facts like "Paris is the capital of France", while I instead mean any content within your architecture that helps you focus attention well. Clearly human babies do leave out Bayes' Rule and modus ponens, but yes, we should put that in if we can cleanly do so. I'd just claim that doesn't get you very far; you'll need to find a way to inherit big chunks of the vast human content heritage.


"through insight"

"But only AIs in the second class can be knowably Friendly, and I suspect that the proportion of worlds that survive the first type of AI development is tiny."

I guess most of those who disagree with you would welcome some sort of less vague explanation of both premises or perhaps some proofs.

First, as you correctly state, we know that it is possible to build AI without insight. OTOH, there is nothing to support the "insight" path.

Second, it looks like humans are built without insight, but are still generally friendly to fellow humans. It looks like the key is that there are many other humans involved in the environment. It also appears that as intelligence grows, we generally tend to be MORE friendly to our fellow humans.

So to sum it up, we know that a lot of minds created by a blind process without insight seem to be quite friendly to other minds.

What you seem to propose is that the only possible path leading to friendly AI is a SINGLE mind created WITH INSIGHT. That is the EXACT OPPOSITE of what we know to work.

And the only argument to support your thesis is the recursion theory - despite the fact that many of us see human civilization as already living in a tight recursion environment.

You should not be surprised that some of us consider your theory somewhat ridiculous.


Tim Tyler:

"You need content - but we have a whole internet of content, mostly available for anyone - though of course only Google has access to certain important resources - such as Google Books. More than content, you need actuators that affect a world, and feedback about which actions are effective. For Google the actuators are its search results - and the feedback it gets consists of who clicks on which link. For traders, the actuators are investments, and the stock price provides feedback."

Correct. Indeed, mining internet content for knowledge is the obvious way to start.

Anyway, I believe that existing knowledge bases like Cyc or OpenCog can provide good feedback on whether your "AI miner" gets the correct knowledge - and much faster than anything that involves human interaction.

If you can develop an algorithm that, just by scanning arbitrary text on the internet, gets the same results as those hardcoded in the Cyc database, you are halfway there...
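A rough sketch of the evaluation loop being proposed, with everything hypothetical: extract_facts stands in for whatever text-mining algorithm you develop, and REFERENCE_KB for a hand-coded knowledge base in the spirit of Cyc (this is not Cyc's real API or data). The comparison measures how much of the hardcoded knowledge the miner recovers on its own.

```python
# Hypothetical sketch: score a text-mining algorithm against a
# hand-coded knowledge base. Both the extractor and the reference
# facts are invented stand-ins.

REFERENCE_KB = {                      # tiny stand-in for a Cyc-like KB
    ("Paris", "capital-of", "France"),
    ("water", "is-a", "liquid"),
}

def extract_facts(text):
    """Placeholder for the miner under development: turn raw text
    into (subject, relation, object) triples. A real version would
    need parsing and disambiguation; this just pattern-matches."""
    facts = set()
    for sentence in text.split("."):
        words = sentence.split()
        if "capital" in words and "of" in words:
            # naive template: "<X> is the capital of <Y>"
            facts.add((words[0], "capital-of", words[-1]))
    return facts

mined = extract_facts("Paris is the capital of France.")
recall = len(mined & REFERENCE_KB) / len(REFERENCE_KB)
print(f"recovered {recall:.0%} of the reference facts")  # 50%
```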


Robin: Eliezer, a human baby certainly gets raw data, and it has a good architecture too, but in addition I'd say it has lots of genetically encoded info about what sort of patterns in data to expect and attend to, i.e., what sort of abstractions to consider. In addition, when raising kids we focus their attention on relevant and useful patterns and abstractions. And of course we just tell them lots of stuff too.

This is much like my visualization of how an AI works, except that there's substantially less "genetically encoded info" at the time you boot up the system - mostly consisting of priors that have to be encoded procedurally. This is work done by natural selection in the case of humans; so some of that is taken off your hands by programs that you write, and some of it is work you do at runtime over the course of the AI's development, rather than trying to encode into the very first initial system. But you can't exactly leave out Bayes's Rule, or causal graphs, or modus ponens, from the first system.
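To make "encoded procedurally" concrete for the modus ponens case as well, here is a toy forward-chaining sketch (my illustration, not Eliezer's design): the inference rule exists only as the behavior of the loop, and the facts and rules are invented for the example.

```python
def forward_chain(facts, rules):
    """Toy modus ponens, encoded procedurally: the rule
    'from A and A->B, conclude B' is just this loop's behavior,
    not a statement the system reasons about."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# Invented example: rules as (antecedent, consequent) pairs.
rules = [("rain", "wet-streets"), ("wet-streets", "slippery")]
print(forward_chain({"rain"}, rules))
# {'rain', 'wet-streets', 'slippery'}
```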

Jed: Developing standards for externalizing and internalizing cognitive content / structure will certainly constrain development and impose some costs. But we can't therefore rule it out; it is an engineering / economic tradeoff.

Just keep in mind that the Japanese Fifth Generation project, their mighty attempt to achieve serious Artificial Intelligence for the sake of national dominance, tried to standardize on logic programming.

Jed: I guess you could reasonably doubt that we could get to human level intelligence by piling up this kind of exploratory development.

You end up with very different AIs depending on whether you get there by piling up exploratory development or through insight. Both roads should be possible, since natural selection built humans without insight. But only AIs in the second class can be knowably Friendly, and I suspect that the proportion of worlds that survive the first type of AI development is tiny.


You need content - but we have a whole internet of content, mostly available for anyone - though of course only Google has access to certain important resources - such as Google Books. More than content, you need actuators that affect a world, and feedback about which actions are effective. For Google the actuators are its search results - and the feedback it gets consists of who clicks on which link. For traders, the actuators are investments, and the stock price provides feedback.


"It's generally a terrible analogy, but would you say that a human baby growing up is getting "raw data" fed into the right architecture, or that human babies are exposed to data preprocessed by their parents, or that human babies get standardized data?"

I guess the problem there is that babies are known to work... And it takes more than 10 years before you can judge the quality of the result.

What we face here is engineering a baby that works. We can suppose that this baby will grow much faster than real babies do - but most likely only one baby out of thousands (or millions) will be found to work. Therefore, we need a faster teacher than humans.


Robin > The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy

Robin, Eliezer: this is your point of disagreement.


re Don Geddis' comment

AI is not "just" an engineering project. It isn't "merely" the assembly of well-understood units, perhaps in some unique configuration, but with clearly predicted properties. I'll agree if that's your definition of engineering. I was thinking more in the sense of exploratory development (typical with major new software).

If you look at how Sebastian Thrun's group developed self-driving cars, or Andrew Ng's group did synthesis of helicopter stunt controls from human examples, there don't seem to be major conceptual breakthroughs, "just" a series of excellent new engineering ideas, well executed.

The same kind of exploratory development has been driving us down the exponential improvement curve in digital hardware for forty years. It is far from a predictable combination of existing units, but I'm not sure it has generated any major new scientific understanding.

There are certainly lots of experiments and dead ends. So learning is an essential part of the process. We may look back and see some elegant abstractions that make all this simpler, but first we have to build the systems they can simplify. This happened with control theory, and I'm sure with many other areas.

I guess you could reasonably doubt that we could get to human level intelligence by piling up this kind of exploratory development. However, note that this is a different claim and needs a different argument. I actually do believe that Thrun's and Ng's work (and that of many other similar projects) can be built up over decades into human-equivalent AI, and I'd be interested in responding to arguments that it can't.
