
It's interesting that the challenge to beat the champion Arimaa player with a computer was also passed last December, though there doesn't seem to be any necessary relation, other than perhaps increasing computer power. What does make the Arimaa challenge different is that there are papers about what insights went into the winning programs (and many others). This paper describes the changes that went into the winning bot over the last year, and what proportion of the ~400 Elo points was gained from each (a couple worth ~80, a few worth 25-35, and many small ones).
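To make that arithmetic concrete, here's a toy tally (the bucket sizes are my illustrative guesses, not the paper's exact figures):

```python
# Hypothetical breakdown of a ~400 Elo gain: a couple of big
# architectural changes, a few medium tweaks, and many small ones.
big = [80, 80]           # a couple of changes worth ~80 Elo each
medium = [35, 30, 25]    # a few worth 25-35 Elo
small = [10] * 15        # many small improvements, ~10 Elo each

total = sum(big) + sum(medium) + sum(small)
print(f"Total gain: ~{total} Elo")  # ~400
```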


Right. No way can this approach generate a path to FOOM. I'd quibble, though, with the claim that it can't generate AGI. In fact, I'd make the case that it already has, in principle. AGI, IMHO, is the capability to take perceptual inputs of your surroundings and generate from just those inputs a model with valid predictive capability. That is distinct from self-awareness and consciousness, which are *different* from intelligence.


Uh, no. You presented the correct evidence of what has actually happened, and I agree that we are mere inches away from AGI, but you did *not* present any valid extrapolation as to why this particular implementation will FOOM. FOOM is where a piece of software examines its own code, optimizes it, and then reboots. Then it does the same thing again. This version of AGI is patently *not like that*. It clearly, as you say, has the necessary components of intelligence: it can reflect on external inputs and come up with a model with predictive ability. But I'm in no way convinced that the system itself understands, *or is even capable of understanding*, how it itself works, which is a fundamental and basic necessity for FOOM to take place. We might see a different version of Robin's em world, but with copies of neural-net AIs inhabiting cyberspace instead of human ems....
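To spell out the loop being ruled out, here's a toy sketch (every name is illustrative; nothing in a trained neural net corresponds to `inspect_own_source`, which is the point):

```python
# A caricature of the FOOM loop: a program that can inspect and rewrite
# its own "code" (here reduced to a single quality parameter), then
# relaunch itself as the improved version.

class ToyAgent:
    def __init__(self, quality: float):
        self.quality = quality  # stands in for "how good its code is"

    def inspect_own_source(self) -> float:
        return self.quality

    def optimize(self, source: float) -> float:
        # Each pass finds a 10% improvement until the gains run dry.
        return source * 1.1 if source < 10.0 else source

def foom(agent: ToyAgent) -> ToyAgent:
    while True:
        source = agent.inspect_own_source()  # step 1: self-inspection
        improved = agent.optimize(source)    # step 2: self-modification
        if improved == source:               # no further gains found
            return agent
        agent = ToyAgent(improved)           # step 3: "reboot" as new version

print(foom(ToyAgent(1.0)).quality)  # climbs, then plateaus just above 10
```

A trained network has no step 1: its competence lives in millions of opaque weights, not in code it can read back and improve.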


Especially when they trained it rather than coded it.


What I find interesting about DeepMind's approach is that it isn't algorithmic. It should therefore not be susceptible to FOOM, or at least not the "program reprograms the program to make a better program" version. It *might*, however, come up with something which we might more or less consider to be AGI. I suspect, though, that a future conversation with it might go like this:

Researcher: So tell me how you work, so that you can tell me how to improve your software.

AGI: No clue. I have no clue how I work. Do you know how your own brain works?

Researcher: Well, uh, no. But you're an AGI. You *must* know how you work?

AGI: Why?

In such a case, we would see only an increase in speed based on hardware speedups, and we're already close to quantum limits. Unless our putative AGI can invent entirely new engineering. Which I doubt.


This steady improvement is quite fast compared to how research normally goes. They've also said that they don't see a limit yet on how much AlphaGo can improve.

It seems this tournament was well-timed to be competitive, but slightly in AlphaGo's favor. If they'd waited a few months, the results might have been more lopsided.


Why not? It seems like it happens fairly often. (Google getting way ahead on search, for example, though the gap has since gotten narrower.)

Sometimes this is random (a non-obvious solution that someone stumbles across first). Sometimes there are increasing returns: making progress on an interesting problem attracts funding.

Some advances in technology are more easily replicated than others. This might depend on how secretive the people who created it are about their invention. I don't think we can say in advance which it will be.


But why would one team that "put it all together" be so much further ahead of other teams that do the same?


Yes, that has been my rough summary; it is about the relative power of a few key architectural insights vs. lots of detailed "content".


Yudkowsky seems to be enamoured of general problem solvers such as AIXI. That might be why he places a higher estimate than Hanson does on basement hackers coming up with superintelligence: he sees general intelligence as a matter of coming up with a general problem solver that doesn't have AIXI's uncomputability problem, and not as pasting together a lot of separate technologies.


Eliezer's FB post is an obvious attempt to promote his philosophical theories about AI, even though the facts give them no support. He was consequently embarrassed by AlphaGo's loss in the fourth game and asserted, "That doesn't mean AlphaGo is only slightly above Lee Sedol, though. It probably means it's 'superhuman with bugs.'" Nonsense. It means precisely that AlphaGo is currently somewhat better than Lee Sedol, but not a lot better. "Bugs" would mean the algorithm doesn't do what it was intended to do, and that's very unlikely. Rather, Lee Sedol simply played better in that game, got into a winning position, and then we saw that this kind of algorithm does not do very well from a losing position. But this doesn't support Eliezer's narrative, and so he is incapable of recognizing the facts.
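As a rough sanity check on "somewhat better but not a lot better": treating the five games as independent samples (a strong assumption) and applying the standard logistic Elo model gives a modest implied gap:

```python
import math

# Under the Elo model, win probability p implies a rating gap of
# 400 * log10(p / (1 - p)).
p = 4 / 5  # naive estimate from the 4-1 match score
gap = 400 * math.log10(p / (1 - p))
print(f"Implied gap: ~{gap:.0f} Elo")  # ~241: clearly stronger, not vastly so
```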


It seems like these scenarios aren't all that distinct.

Sure, general intelligence requires competence at a wide range of tasks. But many of these tasks are already being tackled using narrow AI. Once many of the solutions are widely available and understood, why is it unlikely that a single ambitious team might be the one to put it all together?

The history of science and technology shows lots of important inventions happening locally, so it seems pretty plausible (availability bias here) that someone will come up with the key ideas first and take most of the rewards.

But it's often the case that other teams will not be far behind, and since they're building from similar components, they are eventually able to replicate this success.

I'm not sure if that counts as local or not. There's plenty of sharing going on, but the key innovation may very well happen in one place first in a surprising way.


Something I don't understand about your perspective is equating power with economic productivity. An agent can be capable of taking over the world without being able to manufacture so much as a widget, or conversely have a high GDP without being able to win a fight against a small army.


I don't follow why you think this AI-making could be a steady-as-oil income stream. Let's say that AlphaGo didn't play go, but instead did something commercially valuable: It designed brilliant, attractive and efficient residential houses. Many people would want to buy AlphaGo (architects are expensive and slow), but how much should they pay? Not more than the cost of creating their own AlphaGo in-house. Given how few resources Google put in, that roll-your-own cost would not be very high. Thus AlphaGo's market price would need to be much lower than even that, to dissuade competitor AIs from moving in on their business.
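A toy version of that pricing logic (every dollar figure here is invented for illustration):

```python
# No single buyer pays more than the cost of building their own;
# to keep competitors from entering, the price must sit well below
# even that ceiling.
roll_your_own_cost = 5_000_000     # hypothetical in-house replication cost
buyer_ceiling = roll_your_own_cost
deterrence_discount = 0.2          # illustrative margin below the ceiling
sustainable_price = buyer_ceiling * deterrence_discount
print(f"Market price must sit well under ${sustainable_price:,.0f}")
```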

If the real AlphaGo is significant, it's because it demonstrates that superhuman AI can be cobbled together in a garage from open code plus some spit-and-shine. That's hardly the kind of product on which to base a cartel, no matter how valuable its functions are.


What impresses people is how little effort this retooling took, given the superhuman quality of the results. FOOM believers compulsively project trendlines over novel collections of data points. Doing that here is leading them to believe a "new" superhuman AI for a specific problem is gradually requiring less human labor to make. Down the trendline, they see a time when the human labor input necessary to make a new specific AI will be zero. This means that past that point, specific superhuman AIs are free from the perspective of human labor. The speed at which they construct themselves will depend on the CPU cycles that are allocated to the task.
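Made explicit, the extrapolation looks like this (the data points are invented; the point is the shape of the inference):

```python
import numpy as np

# Fit a line to (year, human-effort) points and solve for the zero
# crossing -- the projection FOOM believers are accused of making.
years = np.array([2010, 2012, 2014, 2016])
labor = np.array([50.0, 30.0, 18.0, 10.0])  # invented person-year estimates
slope, intercept = np.polyfit(years, labor, 1)
zero_year = -intercept / slope
print(f"Naive trendline hits zero human labor around {zero_year:.0f}")
```

Whether effort actually falls linearly to zero, rather than flattening out, is exactly what the trendline can't tell you.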

But this is not quite enough for AGI, much less FOOM. Bundling a bunch of specific task-daemons into a single system (one that can play go, drive cars, write articles about lacrosse games, etc., all at a superhuman level) does not make that thing an AGI. But it would still be a system that would have a big impact on the world. The following link overstates matters, but maybe there is a kernel of insight there.

http://globalguerrillas.typ...


Why haven't you published this in a more "high-status" format?
