
There is a lot of overhead involved in training a machine learning (ML) system to understand a problem and solve it well. For most problems it isn't efficient, because our brains are already ML-like systems that know how to solve the given problem, and the knowledge of how to solve it is easy to express directly.

To take a simple example: imagine writing a program to calculate the area of geometric shapes. You could either train an ML system to understand the general concepts of shape and area, or you could just take a couple of minutes to enter some formulas. The reason we don't try to use ML to solve all of our design problems now is similar to the reason we would directly code this program.
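To make the hand-coding side concrete, here is roughly what "a couple of minutes entering formulas" buys you (a minimal sketch; the shapes and function name are just for illustration):

```python
import math

# Hand-coded solution: the knowledge is already cached in our heads,
# so we just transcribe the formulas directly.
def area(shape, **dims):
    if shape == "rectangle":
        return dims["width"] * dims["height"]
    if shape == "circle":
        return math.pi * dims["radius"] ** 2
    if shape == "triangle":
        return 0.5 * dims["base"] * dims["height"]
    raise ValueError(f"unknown shape: {shape}")

print(area("circle", radius=2.0))  # ~12.566
```

A dozen lines capture knowledge that an ML system would need a lot of training data, and a lot of compute, to approximate.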

Our brain is a kind of cache, representing some fraction of the intelligence embedded in all the data encountered by our ancestors and all the data we've seen in our lives. Similarly, any new ML system will be a sort of cached intelligence. We will eventually see AIs doing all design work (either ems or machine-learned ones), but that doesn't mean we'll train a new system for every problem, just as animals don't grow a new brain to solve every problem. Doing that would be super inefficient and difficult compared to the "use the existing cache" solution.

I agree our society is more powerful than evolution. If the competition is between our society hand-coding an AI vs. evolution creating another species at least as smart as humans, then it's no contest: I'll bet on the hand-coded AI.

Deep learning is not just a virtual clone of how evolution works. We can take advantage of our 'industrial' abilities to create better processes for turning data into intelligence. We can also run ML algorithms far faster than evolution.

The situation isn't our society vs. evolution, but our society using hand-coding techniques vs. our society using 'feed lots of data to an intelligently selected learning algorithm' techniques.

ML has basically taken over AI and demonstrated better results than hand-coding on a wide range of AI-related problems. If we want to look at history to extrapolate what techniques will lead to general AI, looking at these problems is a lot more relevant than looking at much simpler industrial problems. This is especially true given that we've only had enough computing power and knowledge to do ML somewhat well for the past ~10 years.


It isn't worth it to use this approach for every type of problem because there is lots of overhead involved. In some cases it will be worth it, in many cases it won't be.

Most computer systems people build are very simple in comparison to brains. If you're writing a program to calculate the area of a geometrical shape, it's overkill to try to train a neural net to understand the general concepts of shape and area, since you already understand them (thanks to evolution and to all the learning your brain has absorbed since you came into existence) and the knowledge is already represented very compactly in a way that you can transfer to a computer in a few minutes.

Think of a brain or an AI system as a sort of cache that represents some fraction of the intelligence embedded in lots of raw data. Once this 'intelligence cache' exists, using it for problems similar to those it's good at solving can be way more efficient than creating a new cache. When you start encountering sufficiently hard or different problems, a new cache may be warranted.

I think AI will eventually take over all design work (either ems, as Robin writes about, or machine-learned systems, as I think is more likely), but we won't create a new ML system for each design problem, just as animals don't generate new brains for each problem they face.

I agree, our society is way more powerful than evolution. I am not saying evolution is the most efficient way of creating intelligent systems. We can use some principles from evolution and combine them with our own techniques to create better systems. Deep learning isn't just virtual evolution. What they share is the creation of a very complex system by starting with a simple learning architecture, feeding it lots of raw data, and allowing it to adapt to capture the intelligence embedded in this data.

Maybe there's a smarter "industrial" way to create general AI, but I don't see it, and I think the path I described in my original post will arrive sooner than ems.


Nothing did, I just used the wrong word. Oops.


Not sure what made you think I'm big on an evolutionary approach relative to an industrial one.


Two problems I can see with this argument:

- It seems like it should apply to everything we might want to build, not just brains: most research should be moving toward setting up an initial simulated environment and letting a genetic algorithm find the best design for the problem we want to address. I'm sure you can point me to some particular cases where this kind of approach has been introduced and used successfully - I can think of some examples myself - but establishing this as the new dominant paradigm, rather than just another tool in our toolbox, will require more evidence than that. Maybe you think that we will in fact see this approach taking over all design work, but just not for a while yet?

- Citing Robin on the handful of discrete growth modes of human and prehuman life: it seems like an industrial society is vastly more powerful than evolution. And this seems to hold up to simple observation too, when you look at how damn much we've accomplished over the last few hundred years compared to what evolution took billions of years to produce. While I do think there are probably benefits to the 'evolutionary approach', i.e. progressing via many small advancements which must each be justified in their own right, I'm doubtful that using it to the exclusion of everything else is going to turn out to be the most effective approach to building systems, when historically the opposite seems to have been true.


A recursion eight levels deep: http://lesswrong.com/lw/2pv...


A key insight from the triumph of machine learning over "Good Old Fashioned AI" is that intelligence arises from data, not code (at least, for the kind of intelligence we can create now). In some sense the structure of the machine learning models that arise from training on lots of data is "code", but that complicated code falls out of a pretty general training method combined with data.

The complicated content in our brains is a result of a pretty dumb process (natural selection) involving lots and lots of data and some simple criteria for when one organism is better than another and how organisms can mutate.
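As a toy sketch of that kind of process (not a model of biology; the bit-string target, population size, and mutation rate are arbitrary choices for the example):

```python
import random

# A dumb process: random mutation plus a simple "better than" criterion,
# repeated over many generations, produces a result nobody hand-coded.
TARGET = [1] * 32

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[:10]                           # simple selection criterion
    population = [mutate(random.choice(best)) for _ in range(50)]

print(fitness(max(population, key=fitness)), "/ 32 bits correct")
```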

The main advantage of evolution over our machine learning techniques is that it has had billions of years over which to operate, combined with an "environment" (reality) which is very rich and detailed (hence dealing with it benefits from lots of content).

If we could quickly simulate detailed environments on computers, we could reproduce this process. Computers can simulate simple environments and generate lots of data from them very quickly; as environments get more complex, simulating them gets more expensive. So a key question is: what level of environment detail and structure offers the best tradeoff between ease and speed of simulation and being rich enough to yield capable intelligence? It could be far below the level of detail of reality.

For instance, it seems clear that if somehow humans had evolved in an environment where Newtonian physics was true and quantum / relativistic effects didn't exist, our resulting intuitions about physics would not be significantly different. So we can cut out a lot of computational effort without losing practical benefit by not calculating quantum effects in the environments we use to create AIs.

You've probably heard of DeepMind creating an AI that can play any of ~50 Atari games extremely well, using a single general learning algorithm. Here the environments are simple so lots of data can be generated quickly, but the resulting skills don't carry over into other more complex environments. What happens as the game environments get more complex and more similar to our world, or as we specifically construct "games" to reward more general reasoning skills?
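For a sense of the kind of loop involved, here is ordinary tabular Q-learning shrunk to a made-up one-dimensional corridor (this is not DeepMind's algorithm, just the generic pattern: a cheap environment, lots of generated experience, and one general update rule):

```python
import random
from collections import defaultdict

# The agent starts at cell 0 and gets a reward of 1 for reaching cell 5.
# Because the environment is this simple, experience is nearly free to generate.
GOAL, ACTIONS = 5, (-1, +1)
EPSILON, ALPHA, GAMMA = 0.2, 0.1, 0.9
q = defaultdict(float)  # q[(state, action)] -> estimated value

def greedy(state):
    # break ties randomly so early episodes still explore the whole corridor
    return max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))

for episode in range(1000):
    state, done = 0, False
    while not done:
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt = min(max(state + action, 0), GOAL)
        reward, done = (1.0, True) if nxt == GOAL else (0.0, False)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

print({s: greedy(s) for s in range(GOAL)})  # learned policy: move right everywhere
```

The open question above is what happens as the "corridor" is replaced by environments rich enough that the learned skills transfer.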

IMO, training AIs on data from rapid simulations of increasingly complex environments is a pretty plausible road to general AI which doesn't depend on humans needing to manually feed complex content to the AI (via writing code) or understand brain architecture. This approach works regardless of whether you or Yudkowsky are right about the importance of content vs. architecture.


There are cases where people have hydrocephalus that reduces their brain size by 75% and their IQs are 'normal'. http://tinyurl.com/zpm2ybu


But the genetic code has a different character than human code, which is written with, e.g., maintainability and simplicity in mind. This is well illustrated here: https://xkcd.com/1605/


Programs are ultimately written in binary, which has only two characters but, as Quine noted, is sufficient to express anything that can be written in English. However, in order to express all those things, one would need a certain number of those two characters. Are genes like characters or like sentences (which are used in lines of English pseudo-code)? I'd say the base pairs used to write genes are more analogous to characters. There are different conceptions of what a "gene" is; Dawkins' is an information-theoretic one, closer to a sentence or line of code.
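A throwaway sketch of that equivalence, with an arbitrary mapping chosen only to make the character counting concrete (nothing here reflects how DNA actually encodes anything):

```python
# Any English text can be spelled with just 0/1, or with a four-letter
# DNA-style alphabet at 2 bits per letter; the alphabet size doesn't limit
# what can be expressed, only how long the message gets.
BASES = "ACGT"

def to_bits(text):
    return "".join(format(b, "08b") for b in text.encode("utf-8"))

def to_bases(text):
    bits = to_bits(text)
    return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

msg = "area = pi * r**2"
print(len(to_bits(msg)), "binary characters")  # 8 per byte of text
print(len(to_bases(msg)), "four-letter bases")  # 4 per byte of text
```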


I don't know about you, but my English is not infinitely repeating. An infinite string of the same letter, word, phrase, etc. has no more information than a single instance, except for the number of repetitions, which, as I mention in my other comment, can itself encode any amount of information -- think "base 1". And if you don't know what that means, then you don't know the basics needed to talk sensibly about information, which appears to be the case for most of the people commenting.


The amount of information (which I am distinguishing from "output", because an infinite repeating output contains limited information) that can be generated is limited by the size of the code.

Isn't this contradicted by natural alphabetical languages like English, which can convey unlimited information with a finite code?


More cluelessness. First, repeated output has no more information than a single instance ... except for the number of repetitions, and with no bound on that number, there is no bound on the amount of extra information ... the number of repetitions could be a Gödel numbering of any of an infinite number of messages. So even on that score you're wrong. Second, the amount of information in the code only limits the amount of information in the output if the code is *self-contained*. But code has *inputs* that are independent of it. A small piece of code that simply copies its input to its output will produce as much information in the output as there is in the input ... duh. That's why it is so absurdly inept for ignorant dolts like Hanson to blather about "the number of genes" limiting the information in the brain. Brains *learn* (well, some do) ... they incorporate information from external sources. Sheesh.
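Two toy illustrations of those points, in case the "base 1" and copy-program arguments seem abstract (the encodings here are made up purely for the example):

```python
# 1. The *number* of repetitions can itself carry an arbitrary message:
#    interpret the message's bytes as one big integer and repeat a single
#    character that many times; the count alone recovers the message.
def encode_as_repetitions(message):
    return int.from_bytes(message.encode("utf-8"), "big")

def decode_from_repetitions(count, num_bytes):
    return count.to_bytes(num_bytes, "big").decode("utf-8")

n = encode_as_repetitions("hi")   # 26729
repeated_output = "A" * n         # 26729 copies of the same character...
print(decode_from_repetitions(len(repeated_output), 2))  # ...yet it encodes "hi"

# 2. A trivial program whose output carries as much information as its input,
#    however few lines the program itself has:
def copy(stream):
    return stream
```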

The notion that the number of genes that produce a brain somehow limits the amount of content, information, architecture, or anything else in the brain is so obviously and startlingly wrong that I can't fathom the sort of mistaken assumptions and beliefs, or the sort of thinking processes -- if any -- that could support it. It's like saying that the number of transistors in a CPU limits the size or complexity of the programs in the computer. Turing, Church, and Post are all spinning in their graves.


Well, I do think you're right that we'll have a brain emulation before we understand the human brain (although depending on where technological growth is at when that happens, the time between the first milestone and the second milestone might be very short). But I'd also say we know very little about the human brain at this point, which increases the likelihood that another avenue will produce human-like intelligence.


1st law of thermodynamics

Everything has to have at some point come from outside the person. A person cannot create or destroy anything because that would break thermodynamics.

People are by necessity a product of their initial starting point (their genes) and their environmental influences as they develop (the external stimuli); the brain then must be a product of those two things. If we specify the environmental influences as reinforcement learning algorithms occurring while the program is running (while the person is learning from their environment), then that leaves just the genes, which would correspond to the lines of code: the initial starting point that created the organism.

Now, it may be very difficult to solve the problem in that way. We may need to specify a much more elaborate system than genes use if we don't fully understand how those environmental influences produce the final product from the initial starting point, but a human-level intelligence program that has been perfectly optimized to take up as few lines of code as possible must be no larger than the human genome.
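A back-of-the-envelope version of that bound, using the commonly cited figure of roughly 3 billion base pairs for the human genome (an approximation, not an exact count):

```python
# Rough upper bound on the size of the "initial code" under this argument.
base_pairs = 3_000_000_000
bits_per_base = 2                        # four possible bases -> 2 bits each
genome_bytes = base_pairs * bits_per_base / 8
print(f"~{genome_bytes / 1e6:.0f} MB")   # ~750 MB, before any compression
```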


I find this reasoning pretty convincing, but I can still think of a lot of cases where the solution evolution produced is much more difficult for us to copy than a solution humans invented. Flapping wings, instead of fixed wing aircraft. Photosynthesis instead of solar panels. Fermentation or aerobic respiration and ATP instead of an internal combustion engine.

Maybe I'm cherry-picking my examples - I would think that I'm more likely to remember concise, powerful theories because they're so useful. For the problem at hand, I don't know whether to expect a concise, powerful theory of intelligence or not.
