AI As Software Grant

I’ve been part of grants before, and have had research support, but I’ve never had support for my futurist work, not even during the years I spent writing Age of Em. That now changes:

The Open Philanthropy Project awarded a grant of $264,525 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome. .. This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. (more)

Who is Open Philanthropy? From their summary:

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. .. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. .. The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

A key paragraph from my proposal:

Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.

Both they and I see value in such an analysis even if AI software ends up differing systematically from the software we’ve seen so far:

While we do not believe that the class of scenarios that Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute a significant amount of useful data collection and analysis that might be valuable to our thinking about AI more generally, as well as provide a model for other people to follow when performing similar analyses of other AI scenarios of interest.

My idea is to extract from our decades of experience with software a more detailed description of the basic economics of software production and use. To distinguish, as time allows, many different kinds of inputs to production, styles of production, parts of produced products, and types of uses. And then to sketch out different rough “production functions” appropriate to different cases. That is, to begin to translate basic software engineering insight into economics language.
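For illustration only (this is a sketch of the kind of formalism I have in mind, not a claim about the right functional form), one might start from a Cobb-Douglas-style production function, where software output Q comes from programmer labor L, hardware services K, and an accumulated stock T of reusable tools, with that stock fed by past output:

$$ Q = A\, L^{\alpha} K^{\beta} T^{\gamma}, \qquad \dot{T} = \delta\, Q - \mu\, T $$

Different kinds of software production and use would then correspond to different parameter values, e.g. how strongly output leans on the reusable-tool stock versus fresh programmer effort, and how quickly old tools depreciate.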

The simple assumption that software doesn’t fundamentally change in the future is the baseline scenario, to be fed into standard economic models to see what happens when such a more richly described software sector slowly grows to take over the economy. But a richer, more detailed description of software economics can also give people a vocabulary for describing their alternative hypotheses about how software will change. This analysis framework can then be adjusted to explore such alternative hypotheses.

So right from the start I’d like to offer this challenge:

Do you believe that the software that will let machines eventually do pretty much all jobs better than humans (or ems) will differ in foreseeable systematic ways from the software we have seen in the last seventy years of software practice? If so, please express your difference hypothesis as clearly as possible in terminology that would be understandable and familiar to software engineers and/or economists.

I will try to stretch the economic descriptions of software that I develop in the direction of encompassing the most common such hypotheses I find.

  • endril

    Well, congratulations.

  • Joshua Brulé

    (I’m a CS grad student, working more on the theoretical than the applied side of things, but I have worked in industry.)

    Past trends:

    Hardware was very expensive; Moore’s law was in full effect for serial processor speed. It usually made sense to trade off programmer time for better performance. It was also acceptable to write code that was inherently serial, and for subsystems to be tightly coupled with each other.

    The mythical “Real Programmer” takes this to a logical extreme, creating systems that are difficult for anyone else to understand or modify but offer very high performance on limited hardware.

    Software was often (usually?) monetized by selling a license for perpetual use for a fixed price.

    Current trends:

    Moore’s law is still partially in effect, in that you can still buy more FLOPS per dollar every year, but serial processor speeds are stalling; you get more processing power by buying more processors. Computing power is cheap, relative to the cost of hiring programmers.

    Inherently serial code and tightly coupled subsystems are discouraged more than before.

    Open source software has become much more common, and developing it is much easier because of the internet. Many (most?) tools used by programmers have to be released as open source software or fade into obscurity.

    Software is monetized in a few different ways:

    – Open source, but sell support services (the “RedHat model”)

    – As a service, access via internet; money made through either a recurring subscription fee, ads, or collecting/monetizing customer data (Google does both of the last two by targeting ads based on user behavior)

    Single, perpetual licenses still exist, although they seem less popular. I don’t have good data, but outside of things like videogames and very specialized software, I don’t think most new software is sold this way anymore.

    Speculating on the future:

    Demand for parallelizable code will increase.

    Demand for high-reliability software will increase. This will encourage use of more powerful formal methods/software verification tools and languages that support formal verification (e.g. sub-Turing complete programming languages and more sophisticated type systems)

    Related: We’ll see more domain-specific languages; the easiest way to ensure reliability for a particular type of task is to make entire classes of errors impossible to express.

    Informally, programming starts to look more like mathematics.
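    A toy sketch of the “entire classes of errors impossible to express” idea (hypothetical Python, purely to illustrate): a tiny expression language with no loops, recursion, or I/O, so every program in it terminates and many familiar bugs simply cannot be written:

    def evaluate(expr):
        # Evaluate a nested-tuple expression like ("+", 1, ("*", 2, 3)).
        # The language offers only numbers, +, *, min, and max: no loops,
        # recursion, or I/O, so every expression terminates and whole
        # classes of errors cannot even be stated.
        if isinstance(expr, (int, float)):
            return expr
        op, *args = expr
        vals = [evaluate(a) for a in args]
        if op == "+":
            return sum(vals)
        if op == "*":
            result = 1
            for v in vals:
                result *= v
            return result
        if op == "min":
            return min(vals)
        if op == "max":
            return max(vals)
        raise ValueError("unknown operator: %r" % (op,))

    print(evaluate(("+", 1, ("*", 2, 3))))  # prints 7

    Anything written in this toy language provably halts, so for this narrow class the halting-problem worries just don’t apply; that is the appeal of domain-specific, sub-Turing languages.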

    Even more speculative:

    If institutions are developed for supporting Tabarrok-style dominant assurance contracts, we’ll start seeing software development supported this way.

    • Charlie Davies

      Open-source is highly visible, but hard to monetize.

      Agree that the Microsoft-style single perpetual licence for closed-source software seems to be a fragile and fading business model.

      A *lot* of AI is and will remain closed-source and be embedded in a hardware device (car or other robot) or sold as a service. This is a strong way to pay for private software development.

      Currently the trend is for AI to be neural-net based, where behavior is trained over huge data-sets semi-randomly, rather than explicitly specified. Neural nets are *difficult* to test adequately.

      I expect to see a lot of software being tested as a black box, by its behavior rather than by inspection. It will gradually get harder and harder to reason about (by humans at least).
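      A minimal sketch of the kind of black-box, behavior-only testing I mean (hypothetical Python, with a made-up classify() standing in for an opaque trained model):

      import random

      def classify(pixels):
          # Stand-in for an opaque trained model we can only call, not inspect.
          return "cat" if sum(pixels) > 8000 else "dog"

      # Purely behavioral checks: outputs must be valid labels, and brightening
      # a "cat" image (which only raises the pixel sum) must not flip the label.
      for _ in range(1000):
          pixels = [random.randint(0, 255) for _ in range(64)]
          label = classify(pixels)
          assert label in ("cat", "dog")
          if label == "cat":
              brighter = [min(255, p + 10) for p in pixels]
              assert classify(brighter) == "cat"

      With a real neural net there is no simple rule inside to read off, so property checks over sampled inputs like these are about the only handle we get.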

  • http://www.sanger.dk Pepper

    You should consider interviewing/working with someone who is more up to date with current software engineering practices. Can’t remember where but I remember reading one of your software related posts that made assumptions that seemed at least a decade out of date to me. At the very least make a list of key assumptions re: software development economics and post them to your blog so commenters can try to invalidate them.

  • Chris Hibbert

    Congratulations, Robin. That’s great news. I’m glad to hear that you’ll have time to focus on the AI discussion, and bring an economist’s perspective. There’s clearly a sizable faction that disagrees with your (our) viewpoint on slow, widespread, mostly public development, but they’ll have to sharpen their arguments once you carefully describe the assumptions and observations that lead to these views.

  • Ralph

    I’m amazed they’re stupid enough to give you money and pretend it’s charity. Good for you though; enjoy!

  • lump1

    If the board of this endowment asked me who in the world is the best choice for conducting a study like this, I would immediately think of you. I suspect this kind of research has a very low probability of really hitting the mark – too many moving pieces and blackswannish surprises – but you will certainly improve the quality of the discourse, and that’s more bang for the buck than most social science research grants. Congratulations!

  • arch1

    Re: your challenge-

    I don’t know a lot about this but guess Yes, if only because of what I’ve seen concerning neural nets recently, including yesterday’s ACM webinar by Jeff Dean: “Large-Scale Deep Learning with TensorFlow for Building Intelligent Systems”. It seems that Google at least is getting much traction with the most recent generation of NN-based systems.

    Their approach is to massively scale up model size and training database size, and to still get quick turnaround on experiments (minutes/hours) by leveraging a distributed software architecture, lots of servers, and purpose-built ASICs. Dean lists a diverse set of apps which use this work. A graph of the number of directories (basically, projects) containing TensorNet models within Google has been rocketing up in the last year or so.

    Colorful aside which I may have misunderstood – I *think* he said that TensorFlow was able to “recognize” a python interpreter into existence, by being trained on a large number of input/output pairs.

    • arch1

      oops, TensorNet -> TensorFlow

  • davidmanheim

    Some potential avenues that differ slightly but meaningfully from your base case, as I understand it. (These are not all mutually exclusive, and many are borderline inside your model description, but they are economic / software-design related suggestions.)

    1) Software systems, being built as amalgams of (imperfectly tested / designed) lower-level programs, slow down as a function of capability; due to the fundamental (halting problem) limits of software verification, the current trajectory slows, asymptotically approaching an ability above, but not too far above, human capabilities. (Progress near / beyond human capabilities slows.)

    2) Software complexity continues to grow rapidly, but as the rate of errors increases, predictability of errors declines significantly, making the use of these systems economically / legally viable only in some areas. (We already see this happening to some extent.)

    3) Software development in the realm of learning begins to depend to a greater extent on the ability of software to train itself. (See the recent “Learning to learn by gradient descent by gradient descent”: https://arxiv.org/abs/1606.04474 )

    4a) The way in which machine learning and similar techniques are trained is never generalized, and depends to a greater and greater extent on training data. The cost of training, and the accuracy of an algorithm for a specific task type, is a function of time spent generating and manually classifying data. (Assume the accuracy is, say, ~log(training set size), and the cost of extending training sets is linear.) If software is needed per task, this changes the economics of using AIs in different areas; see the toy numeric sketch after this list.

    4b) Generalized AI with human abilities is possible with generalized training data, the cost of which is a large multiple of the cost to, say, train a human from age 0-18. As with humans, this training cannot easily be replicated across diverse abilities without ability-specific training, but building the training requires much more effort.
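    A toy numeric sketch of the 4a assumption (made-up constants, just to show how the economics scale): accuracy grows only logarithmically while labeling cost grows linearly, so the marginal accuracy bought per dollar keeps falling, and each new task domain needs its own data investment.

    import math

    # Illustrative constants only; not estimates of real training costs.
    a = 0.1              # accuracy gain per unit of log(training set size)
    cost_per_item = 2.0  # dollars to generate and hand-label one example

    for n in [1_000, 10_000, 100_000, 1_000_000]:
        accuracy = a * math.log(n)   # toy curve; real accuracy would saturate
        cost = cost_per_item * n
        print(f"N={n:>9,}  accuracy~{accuracy:.2f}  cost=${cost:>12,.0f}")

    Going from 10^3 to 10^6 examples multiplies cost by 1000 but only doubles the (toy) accuracy figure, which is why per-task data requirements would dominate the economics.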

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario…

    Does this delay your homo hypocritus book?

    • http://overcomingbias.com RobinHanson

      No, that is already written and is under review.

  • http://www.sim-ai.org/blog Sergey Kurdakov

    On challenge.

    First, I agree with the proposition that software is a form of weak AI, which augments human capabilities.

    Second, the deep learning approaches (mentioned in comments) are not fundamentally different from existing practices, though software utilizing deep learning (and other machine learning algorithms) might outperform humans in many narrow fields. But still, as with current software, the master who makes the overall decisions will be human. So here we can expect more effects from software, but not outcomes much different from those we have seen until now.

    The big difference will come if a ‘synthetically’ thinking machine can be built (emulating the human brain); such a machine will be capable of making quite different software, in the sense that it will ‘fix’ human errors during development much faster than humans can, and that will lead to fundamentally different outcomes. Such synthetic brain emulation might happen 10 years from now, or maybe 100 years from now.

  • Marc Geddes

    This is just some reassurance for readers worried about unfriendly super-intelligence.

    I’m very confident that Bostrom’s ‘Orthogonality Thesis’ is false. Confidence level > 95% (less than 5% chance I’m wrong).

    In general terms: I think the ‘paper-clip monster’ is based on the same class of fallacy as other famous extreme suggestions of philosophy such as: Pascal’s wager/mugging, two-boxing, Sleeping Beauty / doomsday anthropic reasoning, the simulation argument, etc.

    In philosophy, the way to make a reputation seems to be to come up with some ridiculous conclusion such as the examples given above (preferably accompanied by a spectacular ‘thought experiment’), and then spend the next 40 years writing papers seriously arguing for and against, until finally, the philosophy community can declare that yes, the conclusion really *was* ridiculous after all.

    Basically, the fallacy is taking *one* particular model or aspect of something, mistaking the model for the whole reality, then pushing the model to a logical extreme for which it was never designed or intended (an ‘edge case’).

    In the case of the ‘paper-clip monster’, the flawed model is ‘Bayesian decision-theory’.

    In reality, I think there are *three* modes of reasoning/decision-making needed for AGI, and the relationship between these 3 modes is that they are *complementary* (they are on an *equal* footing).

    The 3 systems necessary for AGI are: an evaluation system, a decision-making system and a planning system. Let’s call these (evaluation, policy and planning).

    My conjecture is that these 3 systems can’t be independent in the way Bostrom thinks. My conjecture is that once you have any *2* specified, the 3rd one is, in effect, automatically constrained to have only 1 valid solution. You can choose which pairing you want to specify. So you can have any of:

    (evaluation, policy) = planning

    (evaluation, planning) = policy

    (policy, planning) = evaluation

    Once you specify the 2 systems in the brackets (the left-hand side of the equation), my conjecture is that the remaining system (the right-hand side of the equation) is automatically constrained to have only 1 solution that will work.

    If true, this falsifies the Orthogonality Thesis.

  • J Storrs Hall

    It occurs to me that as software incrementally moves from its current state to full AI, it will move economic categories from capital to labor. Currently, software is production machinery, to be used instead of a calculator, a wind tunnel, a printing press, and so forth. A full AI would be like an em, and thus labor. You’ll need to come up with a theory where there is a mixture, a shift, or a separate category.
