How Different AGI Software?

My ex-co-blogger Eliezer Yudkowsky recently made a Facebook post saying that recent AI Go progress confirmed his predictions from our foom debate. He and I then discussed this there, and I thought I’d summarize my resulting point of view here.

Today an individual firm can often innovate well in one of its products via a small team that keeps its work secret and shares little with other competing teams. Such innovations can be lumpy in the sense that gain relative to effort varies over a wide range, and a single innovation can sometimes make a big difference to product value.

However, big lumps are rare; typically most value gained is via many small lumps rather than a few big ones. Most innovation comes from detailed practice, rather than targeted research, and abstract theory contributes only a small fraction. Innovations vary in their generality, and this contributes to the variation in innovation lumpiness. For example, a better washing machine can better wash many kinds of clothes.

If instead of looking at individual firms we look at nations as a whole, the picture changes because a nation is an aggregation of activities across a great many firm teams. While one firm can do well with a secret innovation team that doesn’t share, a big nation would hurt itself a lot by closing its borders to stop sharing with other nations. Single innovations make a much smaller difference to nations as a whole than they do to individual products. So nations grow much more steadily than do firms.

All of these patterns apply not just to products in general, but also to the subcategory of software. While some of our most general innovations may be in software, most software innovation is still made of many small lumps. Software that is broadly capable, such as a tool-filled operating system, is created by much larger teams, and particular innovations make less of a difference to its overall performance. Most software is created via tools that are shared with many other teams of software developers.

From an economic point of view, a near-human-level “artificial general intelligence” (AGI) would be a software system with near-human-level competence across almost the entire range of mental tasks that matter to an economy. This is a wide range, much more like the scope of abilities found in a nation than those found in a firm. In contrast, an AI Go program has a far more limited range of abilities, more like those found in typical software products. So even if the recent Go program was made by a small team and embodies lumpy performance gains, it is not obviously a significant outlier relative to the usual pattern in software.

It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience in both ordinary products in general and software in particular is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made via a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.

However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.

Presumably the basis for this claim is that some people think they see a different distribution among some subset of AI software, perhaps including machine learning software. I don’t see it yet, but the obvious way for them to convince skeptics like me is to create and analyze a formal dataset of software projects and innovations. Show us a significantly-deviating subset of AI programs with more economic scope, generality, and lumpiness in gains. Statistics from such an analysis could let us numerically estimate the chances of a single small team encompassing a big fraction of AGI software power and value.
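
As a rough sketch of the sort of comparison such a dataset could support (the file name, column names, and lumpiness measure below are hypothetical choices, not an existing dataset):

import pandas as pd

# Hypothetical dataset: one row per innovation, with the domain it came from
# (e.g. "AI" vs. "other software"), the size of the team that produced it, and
# the share of the product's value gain attributed to it.
df = pd.read_csv("innovations.csv")  # assumed columns: domain, team_size, value_share

def lumpiness(group):
    # Fraction of total value captured by the top 5% of innovations in a domain;
    # higher values mean gains come in a few big lumps rather than many small ones.
    cutoff = group["value_share"].quantile(0.95)
    top = group.loc[group["value_share"] >= cutoff, "value_share"].sum()
    return top / group["value_share"].sum()

print(df.groupby("domain").apply(lumpiness))        # is the AI subset lumpier?
print(df.groupby("domain")["team_size"].median())   # is it made by smaller teams?
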

That is, we might estimate the chances of local foom. Which I’ve said isn’t zero; I’ve instead just suggested that foom has gained too much attention relative to its importance.

  • Alyssa Vance

    “While one firm can do well with a secret innovation team that doesn’t share, a big nation would hurt itself a lot by closing its borders to stop sharing with other nations.”

    Can “sharing” be defined more precisely? Nations have secret innovation teams that develop closed technology all the time. The US is pretty open by historical standards, but eg. details of the Teller-Ulam design are largely still secret, sixty years after its invention.

    • http://overcomingbias.com RobinHanson

      You are talking about sharing on particular projects, near the scope of a typical firm research project. Even if those projects are done by government agencies, they are small projects relative to the nation as a whole. I’m talking about sharing at the scope of the nation as a whole, not at the scope of one or a few projects.

      • Alyssa Vance

        It seems very likely, though, that the number of people employed in producing AI (across the entire economy) will be closer to firm-scale than nation-scale. AI developers (and to a lesser extent programmers generally) have an exponential productivity distribution, like mathematicians or actors or basketball players, where the top few people have enormous productivity and the productivity of the median person is zero or negative. This will tend to concentrate production inside a small subpopulation, and suggests a Saudi Arabia-like economy where oil (= AI) is the main industry that produces gobs of cash with small numbers of employees, the non-oil work is done by migrant workers (= robots), and most citizens just don’t do very much economically.

      • http://overcomingbias.com RobinHanson

        I want to see concrete statistics to support this claim of a different distribution of productivity.

      • Alyssa Vance

        Sure. Here you go:

        http://programmers.stackexchange.com/questions/179616/a-good-programmer-can-be-as-10x-times-more-productive-than-a-mediocre-one

        http://www.construx.com/10x_Software_Development/Origins_of_10X_%E2%80%93_How_Valid_is_the_Underlying_Research_/

        I’m not aware of any studies on AI programmers specifically, as AI is still so tiny in dollar terms, and what size it does have is very recent. But if you think it would be less or equally skewed compared to developers in general, I suspect a lot of people would be willing to take that bet 🙂

      • Alyssa Vance

        Note that skewness for AI programmers vs. all programmers should adjust for selection bias. If, e.g., the AI industry became as large as the current software industry, and 90% of current programmers had zero or negative productivity in AI and exited the field, this increased skewness wouldn’t show up in a study comparing the best AI programmer to the worst.

      • Alyssa Vance

        One cheap way would be to look at profit-per-employee in the largest companies in the industry. At Google this is around $300,000, which is likely an understatement due to heavy investment in blue-sky research, while at GE it’s $30,000. Unfortunately, I’m not sure if there are enough mature AI companies to measure it for AI properly.

      • http://overcomingbias.com RobinHanson

        Showing data about only one area can’t show that this area is different from other areas. And I’m saying that one must claim a difference of AGI software from other software if one is to expect different results than we already see in software.

      • lump1

        I don’t follow why you think this AI-making could be a steady-as-oil income stream. Let’s say that AlphaGo didn’t play Go, but instead did something commercially valuable: it designed brilliant, attractive and efficient residential houses. Many people would want to buy AlphaGo (architects are expensive and slow), but how much should they pay? Not more than the cost of creating their own AlphaGo in-house. Given how few resources Google put in, that roll-your-own cost would not be very high. Thus AlphaGo’s market price would need to be much lower than even that, to dissuade competitor AIs from moving in on their business.

        If the real AlphaGo is significant, it’s because it demonstrates that superhuman AI can be cobbled together in a garage from open code plus some spit-and-shine. That’s hardly the kind of product on which to base a cartel, no matter how valuable its functions are.

    • IMASBA

      “Can “sharing” be defined more precisely? Nations have secret innovation teams that develop closed technology all the time. The US is pretty open by historical standards, but eg. details of the Teller-Ulam design are largely still secret, sixty years after its invention.”

      My thoughts exactly: for foom you don’t need a “small” team making the breakthrough. It could be a huge multinational firm or a Manhattan Project-like effort by a superpower nation (or an alliance of powerful nations). Like Robin I don’t think the probability of foom is very high since there will always be multiple huge multinational firms and multiple superpower nations and the economic and strategic benefits of AGI are too big for only one of them to be working on it. But yeah, the possibility of foom coming about by something that’s bigger than a small team should be included in the total probability of foom happening.

  • eyes_in_the_sky

    I understand DeepMind took largely the same approach to winning at Go and beating Atari games. Do you expect their insights to be lumpy enough to generalize to all board games? To all board games and computer games? To any real-world competitive endeavors that aren’t games?

    • http://overcomingbias.com RobinHanson

      Even between Go and Atari, they were related approaches, not the same approach.

      • lump1

        What impresses people is how little effort this retooling took, given the superhuman quality of the results. FOOM believers compulsively project trendlines over novel collections of data points. Doing that here is leading them to believe that a “new” superhuman AI for a specific problem is gradually requiring less human labor to make. Down the trendline, they see a time when the human labor input necessary to make a new specific AI will be zero. This means that past that point, specific superhuman AIs are free from the perspective of human labor. The speed at which they construct themselves will depend on the CPU cycles that are allocated to the task.

        But this is not quite enough for AGI, much less FOOM. Bundling a bunch of specific task-daemons into a single system (one that can play Go, drive cars, write articles about lacrosse games, etc., all at a superhuman level) does not make that thing an AGI. But it would still be a system that would have a big impact on the world. The following link overstates matters, but maybe there is a kernel of insight there.

        http://globalguerrillas.typepad.com/globalguerrillas/2016/03/game-on-the-end-of-the-old-economic-system-is-in-sight.html

      • Dan Browne

        Right. No way can this approach generate a path to FOOM. I’d quibble, though, with the claim that it can’t generate AGI. In fact I’d make the case that it already has in principle. AGI, IMHO, is the capability to take perceptual inputs of your surroundings and generate from just those inputs a model with valid predictive capability. As distinct from self-awareness and consciousness, which are *different* from intelligence.

  • turchin

    I think that trade goes on at the level of key insights, but not at the level of modules. If we look at nuclear history, the key insight was that nuclear weapons are possible. Some other insights were stolen by the Soviets, including actual pieces of plutonium and the very important idea that for a plutonium bomb only implosion will work. Key insights can be just several words, and they disseminate very quickly. Spying is also a form of trading. In the case of ML, the main idea may be that convolutional and recurrent neural nets are really strong. And they could be combined with other search methods.

  • zarzuelazen

    Robin, the few key insights needed for AGI are already out there for those with the wit to see them 😉
    Fully reflective reasoning is entirely captured in a hierarchy consisting of a mere 3 levels of abstraction. AGI will indeed just be a generalized version of the 3-level architecture already on display in AlphaGo.
    Level 1: Evaluation network – evaluates ‘position’ of the world
    Level 2: Policy network – selects ‘moves’ (actions) in the world
    Level 3: Planning – world model (simulation) of possibilities
    Levels 1 and 2 already used very general-purpose methods (deep learning) in AlphaGo, although these methods are very inefficient and data-intensive – deep learning will likely be superseded by Bayesian program learning (BPL), but the general principle is the same (statistical pattern recognition).
    Level 3 used Monte Carlo tree search (MCTS) in AlphaGo, which is *somewhat* general, although the Go world only needs a very simple world model. This method needs substantial generalization for AGI, but again, the general principle is clear (world models – simulations – that search the space of possible outcomes).
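
    As a rough sketch of how these three levels might fit together in code (the network classes and world-model interface below are hypothetical stand-ins, not anything DeepMind has published):

class ThreeLevelAgent:
    """Illustrative sketch of the 3-level architecture described above.
    value_net, policy_net, and world_model are hypothetical stand-ins:
      Level 1: value_net.evaluate(position)  -> estimated win probability
      Level 2: policy_net.propose(position)  -> list of (move, prior) pairs
      Level 3: planning via simulation with world_model.play(position, move)
    """

    def __init__(self, value_net, policy_net, world_model, rollout_depth=5):
        self.value_net = value_net
        self.policy_net = policy_net
        self.world_model = world_model
        self.rollout_depth = rollout_depth

    def choose_move(self, position):
        scores = {}
        for move, prior in self.policy_net.propose(position):   # Level 2: candidate actions
            pos = self.world_model.play(position, move)          # Level 3: simulate forward
            for _ in range(self.rollout_depth):
                best, _ = max(self.policy_net.propose(pos), key=lambda mp: mp[1])
                pos = self.world_model.play(pos, best)
            scores[move] = prior * self.value_net.evaluate(pos)  # Level 1: judge the outcome
        return max(scores, key=scores.get)
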
    I’m calling it for FOOM.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Why haven’t you published this in a more “high-status” format?

    • Dan Browne

      Uh, no. You presented the correct evidence of what has actually happened, and I agree that we are mere inches away from AGI, but you did *not* present any valid extrapolation as to why this particular implementation will FOOM. FOOM is where a piece of software examines its own code, optimizes it, and then reboots. Then it does the same thing again. This version of AGI is patently *not like that*. It clearly, as you say, has the necessary components of intelligence – i.e. it can self-reflect on external inputs and come up with a model with predictive ability – but I’m in no way convinced that the system itself understands, *or is even capable of understanding*, how it itself works. Which is a fundamental and basic necessity for FOOM to take place. We might see a different version of Robin’s EM world, but with copies of neural net AIs inhabiting cyberspace instead of human EMs….

      • zarzuelazen

        Read what I actually said. I was discussing the general architectural principles, not AlphaGo specifically.

        Obviously AlphaGo can’t FOOM because its modelling capability is limited to the world of Go, but see my definition of level 3 above:

        “Level 3: Planning – world model (simulation) of possibilities”

        For FOOM, just take out the model of the Go world and replace it with models of the worlds of software and mathematics, then get the program to search for the optimal version of itself inside the model.

  • Grognor

    Something I don’t understand about your perspective is equating power with economic productivity. An agent can be capable of taking over the world without being able to manufacture so much as a widget, or conversely have a high GDP without being able to win a fight against a small army.

  • Brian Slesinsky

    It seems like these scenarios aren’t all that distinct.

    Sure, general intelligence requires competence at a wide range of tasks. But many of these tasks are already being tackled using narrow AI. Once many of the solutions are widely available and understood, why is it unlikely that a single ambitious team might be the one to put it all together?

    The history of science and technology shows lots of important inventions happening locally, so it seems pretty plausible (availability bias here) that someone will come up with the key ideas first and take most of the rewards.

    But it’s often the case that other teams will not be far behind, and since they’re building from similar components, they are eventually able to replicate this success.

    I’m not sure if that counts as local or not. There’s plenty of sharing going on, but the key innovation may very well happen in one place first in a surprising way.

    • http://overcomingbias.com RobinHanson

      But why would one team that “put it all together” be so much further ahead of other teams that do the same?

      • Brian Slesinsky

        Why not? It seems like it happens fairly often. (Google getting way ahead on search, for example, though the gap has since gotten narrower.)

        Sometimes this is random (a non-obvious solution that someone stumbles across first). Sometimes there are increasing returns: making progress on an interesting problem attracts funding.

        Some advances in technology are more easily replicated than others. This might depend on how secretive the people who created it are about their invention. I don’t think we can say in advance which it will be.

      • Dan Browne

        Especially when they trained it rather than coded it.

  • zarzuelazen

    Lee Sedol wins the 4th game against AlphaGo!

  • https://entirelyuseless.wordpress.com/ entirelyuseless

    Eliezer’s FB post is an obvious attempt to promote his philosophical theories about AI, even though the facts give no support to them. He was consequently embarrassed by AlphaGo’s loss in the fourth round and asserted, “That doesn’t mean AlphaGo is only slightly above Lee Sedol, though. It probably means it’s ‘superhuman with bugs’.”
    Nonsense. It means precisely that AlphaGo is currently somewhat better than Lee Sedol, but not a lot better. “Bugs” would mean the algorithm doesn’t do what it was intended to do, and that’s very unlikely. Rather, Lee Sedol simply played better in that game, got into a winning position, and then we saw that this kind of algorithm does not do very well in a losing position.
    But this doesn’t support Eliezer’s narrative, and so he is incapable of recognizing the facts.

    • zarzuelazen

      AlphaGo hasn’t been improving as fast as some thought. Since last October, it’s been improving slowly and steadily, but it certainly didn’t whoosh off into a FOOM. Currently, as you say, it’s probably somewhat stronger than Sedol, but not vastly better (maybe it’s a matter of 100 or 200 Elo points better).

      Demis Hassabis had the data (the probabilities that AlphaGo gave for winning at each point in the game). What happened in the first 3 games is that the matches were relatively even in the early stages (opening) and AlphaGo was only starting to pull away in the latter stages (middle and end game).

      So it appears that the MCTS (Monte-Carlo Tree Search) has some trouble with the opening game. Sedol is obviously aware of this, because in the games you will notice he uses up all his time thinking hard about the opening moves, to try to get an advantage there.

      Also, MCTS makes ‘slack moves’ when it’s ahead, and sometimes makes ‘crazy’ moves when it’s behind (it tries to maximize the probability of winning rather than the margin of victory, and it misses surprising variations).
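
      A toy illustration of why that objective produces ‘slack’ moves (the numbers below are made up purely for illustration):

# Made-up numbers: each candidate move has an estimated win probability
# and an expected winning margin in points.
candidates = {
    "safe_territory_move": {"win_prob": 0.92, "margin": 3.5},
    "aggressive_invasion": {"win_prob": 0.88, "margin": 15.0},
}

by_win_prob = max(candidates, key=lambda m: candidates[m]["win_prob"])
by_margin   = max(candidates, key=lambda m: candidates[m]["margin"])

print(by_win_prob)  # safe_territory_move: looks "slack" to a human, but maximizes win probability
print(by_margin)    # aggressive_invasion: what a margin-maximizing player would choose
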

      We saw in the 4th game how Sedol was finally able to take advantage of these weaknesses. He used up lots of time thinking about the opening moves to execute a long-range plan, and then found a surprising variation missed by AlphaGo that put him in a winning position. He then closed out the end-game as AlphaGo made some poor moves.

      Here’s a fascinating account of Sedol’s strategy to beat the AlphaGo evaluation system in that 4th game:

      Lee Sedol defeats AlphaGo in masterful comeback – Game 4

      “The game plan they came up with appeared to be to try a type of ‘amashi strategy’, which is among the more extreme styles of play (but is still a valid approach to the game).”

      “AlphaGo seems to be able to manage risk more precisely than humans can, and is completely happy to accept losses as long as its probability of winning remains favorable.

      The Japanese have a name for this style of play, as it closely resembles the prevalent style of Japanese professionals over the previous few decades.

      They call it ‘souba’ Go, which means something like ‘market price’.”

      ….

      “As John Fairbairn has pithily put it, it’s like trying to win by arbitrage.”

      “Of course, stock traders don’t stand a chance of beating modern trading algorithms at their own game, so we shouldn’t expect Go players to do so either.”

      “What Lee and his friends had realized, was that they needed to completely upend the market.”

      • Brian Slesinsky

        This steady improvement is quite fast compared to how research normally goes. They’ve also said that they don’t see a limit yet on how much AlphaGo can improve.

        It seems this tournament was well-timed to be competitive, but slightly in AlphaGo’s favor. If they’d waited a few months, the results might have been more lopsided.

  • Peter David Jones

    Yudkowsky seems to be enamoured of general problem solvers such as AIXI.
    That might be why he places a higher estimate on basement hackers coming up with superintelligence than Hanson does: he sees general intelligence as a matter of coming up with a general problem solver that doesn’t have AIXI’s uncomputability problem, and not as pasting together a lot of separate technologies.

    • http://overcomingbias.com RobinHanson

      Yes that has been my rough summary; it is about the relative power of a few key architectural insights vs lots of detailed “content”.

      • zarzuelazen

        An AGI can generate all the detailed content it needs in a very short time (in human terms) via the huge volume of information already publicly available on the net, combined with *intelligent* (approximate) simulation (which doesn’t suffer from a combinatorial explosion).

        You can see this with AlphaGo. It started with a database of 30 million Go positions from human games, then it started playing millions of games against itself – Monte Carlo tree search (basically simulation of the ‘Go world’) generated all the detailed content it needed.
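
        A schematic version of that self-play loop (play_one_game and the .refit methods here are placeholders for illustration, not DeepMind’s actual API):

def self_play_training(policy_net, value_net, world_model, n_rounds=10):
    """Schematic only: the current networks generate their own training data
    by playing games against themselves, then are refit on that data."""
    for _ in range(n_rounds):
        # play_one_game() is assumed to return a list of (position, move, outcome) records
        games = [play_one_game(policy_net, value_net, world_model) for _ in range(1000)]
        move_data   = [(pos, move)    for g in games for (pos, move, _) in g]
        result_data = [(pos, outcome) for g in games for (pos, _, outcome) in g]
        policy_net = policy_net.refit(move_data)    # learn to predict its own chosen moves
        value_net  = value_net.refit(result_data)   # learn to predict its own game outcomes
    return policy_net, value_net
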

        In just a few months it went from strong-player level to world-champion level.

  • Dan Browne

    What I find interesting about DeepMind’s approach is that it isn’t algorithmic. It should therefore not be susceptible to FOOM. Or at least not the “program programs the program to make a better program” version. It *might*, however, come up with something which we might more or less consider to be AGI. I suspect, though, that a future conversation with it might go like this:
    Researcher: So tell me how you work so that you can tell me how to improve your software.
    AGI: No clue. I have no clue how I work. Do you know how your own brain works?
    Researcher: Well, uh, no. But you’re an AGI. You *must* know how you work?
    AGI: Why?

    In such a case, we would see only an increase in speed based on hardware speedups and we’re already close to quantum limits.
    Unless our putative AGI can invent entirely new engineering. Which I doubt.

  • Matthew Hammer

    It’s interesting that the challenge to beat the champion Arimaa player with a computer was also passed last December, though there doesn’t seem to be any necessary relation, other than perhaps increasing computer power.
    What does make the Arimaa challenge different is that there are papers about what insights went into the winning programs (and many others).
    This paper describes the changes that went into the winning bot over the last year, and what proportion of the ~400 Elo points was gained from each (a couple at 80, a few at 35-25, and many small).