14 Comments

It occurs to me that as software incrementally moves from its current state to full AI, it will shift economic categories, from capital to labor. Currently, software is production machinery, used in place of a calculator, a wind tunnel, a printing press, and so forth. A full AI would be like an em, and thus labor. You'll need a theory that allows for a mixture, a shift, or a separate category.

On the challenge:

First, I agree with the proposition that software is a form of weak AI, which augments human capabilities.

Second, the deep learning approaches mentioned in the comments are not fundamentally different from existing practices, though software using deep learning (and other machine learning algorithms) may outperform humans in many narrow fields. Still, as with current software, the master who makes the overall decisions will be human. So we can expect larger effects from software, but not outcomes much different from those we have seen until now.

The big difference will come if a 'synthetically' thinking machine can be built (emulating the human brain). Such a machine will be capable of making quite different software, in the sense that it will 'fix' all human errors during development much faster than humans can, and that will lead to fundamentally different outcomes. Synthetic brain emulation might happen 10 years from now, or maybe 100.

No, that is already written and is under review.

Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario...

Does this delay your homo hypocritus book?

Some potential avenues that differ slightly but meaningfully from your base case, as I understand it. (These are not all mutually exclusive, and many are arguably within your model description, but they are suggestions related to economics and software design.)

1) Software systems, being built as amalgams of imperfectly tested and imperfectly designed lower-level programs, improve more slowly as capability grows; due to the fundamental (halting-problem) limits of software verification, the current trajectory slows, asymptotically approaching an ability above, but not too far above, human capabilities. (Progress near or beyond human capabilities slows.)

2) Software complexity continues to grow rapidly, but as the rate of errors increases, predictability of errors declines significantly, making the use of these systems economically / legally viable only in some areas. (We already see this happening to some extent.)

3) Software development in the realm of learning begins to depend to a greater extent on the ability of software to train itself (see the recent "Learning to learn by gradient descent by gradient descent": https://arxiv.org/abs/1606.... ).

4a) The way in which machine learning and similar techniques are trained is never generalized, and comes to depend to a greater and greater extent on training data. The cost of training, and the accuracy of an algorithm for a specific task type, is a function of the time spent generating and manually classifying data. (Assume accuracy is, say, ~log(training set size), while the cost of extending training sets is linear; see the toy calculation after this list.) If separate software is needed per task, this changes the economics of using AIs in different areas.

4b) Generalized AI with human abilities is possible with generalized training data, the cost of which is proportional to a large multiple of the cost to, say, train a human from age 0 to 18. This training cannot easily be replicated for diverse abilities without specific training, much like humans, but it requires much more effort to build.
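
Here is that toy calculation; every constant is my own illustrative assumption, not a number from the comment above. Logarithmic accuracy gains against linear labeling costs imply steeply diminishing returns on training data for each task type.

```python
import math

# Toy model of 4a: accuracy ~ log(training set size), labeling cost linear.
# The scale and per-example cost below are arbitrary assumptions.
def accuracy(n_examples, scale=0.08):
    return min(1.0, scale * math.log(n_examples + 1))

def labeling_cost(n_examples, cost_per_example=0.05):
    return n_examples * cost_per_example

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} examples: accuracy ~{accuracy(n):.2f}, "
          f"cost ~${labeling_cost(n):,.0f}")
```

Each tenfold increase in labeling cost buys a shrinking accuracy increment, which is what would make per-task training data the binding economic constraint in this scenario.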

Open-source is highly visible, but hard to monetize.

Agreed that the Microsoft-style single perpetual licence for closed-source software seems to be a fragile and fading business model.

A *lot* of AI is and will remain closed-source and be embedded in a hardware device (car or other robot) or sold as a service. This is a strong way to pay for private software development.

Currently the trend is for AI to be neural-net based, with behavior trained semi-randomly over huge data sets rather than explicitly specified. Neural nets are *difficult* to test adequately.

I expect to see a lot of software being tested as a black box, by its behavior rather than by inspection. It will gradually get harder and harder to reason about (for humans, at least).
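
For concreteness, here is a minimal sketch of what such behavioral, black-box testing might look like; `model` and its `predict` method are hypothetical stand-ins for illustration, not any real library's API.

```python
def test_brightness_invariance(model, image):
    # Metamorphic, black-box check: we never inspect the network's weights,
    # only assert a property of its input/output behavior. A small, uniform
    # brightness change shouldn't flip the predicted label.
    brighter = [min(pixel + 5, 255) for pixel in image]
    assert model.predict(image) == model.predict(brighter)
```

Property-based checks like this one still work on systems whose internals no human can usefully read, which is exactly the regime this comment predicts.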

oops, TensorNet -> TensorFlow

Re: your challenge-

I don't know a lot about this, but I'd guess yes, if only because of what I've seen concerning neural nets recently, including yesterday's ACM webinar by Jeff Dean, "Large-Scale Deep Learning with TensorFlow for Building Intelligent Systems". It seems that Google, at least, is getting much traction with the most recent generation of NN-based systems.

Their approach is to massively scale up model size and training database size while still getting quick turnaround on experiments (minutes to hours) by leveraging a distributed software architecture, lots of servers, and purpose-built ASICs. Dean lists a diverse set of applications that use this work. A graph of the number of directories (basically, projects) containing TensorNet models within Google has been rocketing up over the last year or so.
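
As a rough illustration of the programming model behind that architecture (my sketch under the 1.x-era TensorFlow API, not Dean's code): you define a dataflow graph once, then a session executes it, and the same graph can be placed across CPUs, GPUs, or purpose-built chips.

```python
import tensorflow as tf  # 1.x-era API assumed for this sketch

# Build the graph once: a single linear unit y = x*W + b.
x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
W = tf.Variable(tf.zeros([1, 1]), name="W")
b = tf.Variable(tf.zeros([1]), name="b")
y = tf.matmul(x, W) + b

# Execute the graph in a session; separating definition from execution is
# what lets the runtime distribute the same computation over many devices.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0], [2.0]]}))
```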

A colorful aside, which I may have misunderstood: I *think* he said that TensorFlow was able to "recognize" a Python interpreter into existence, by being trained on a large number of input/output pairs.

If the board of this endowment asked me who in the world is the best choice for conducting a study like this, I would immediately think of you. I suspect this kind of research has a very low probability of really hitting the mark - too many moving pieces and blackswannish surprises - but you will certainly improve the quality of the discourse, and that's more bang for the buck than most social science research grants. Congratulations!

I'm amazed they're stupid enough to give you money and pretend it's charity. Good for you though; enjoy!

Congratulations, Robin. That's great news. I'm glad to hear that you'll have time to focus on the AI discussion and bring an economist's perspective. There's clearly a sizable faction that disagrees with your (our) viewpoint on slow, widespread, mostly public development, but they'll have to sharpen their arguments once you carefully describe the assumptions and observations that lead to these views.

You should consider interviewing or working with someone who is more up to date with current software engineering practices. I can't remember where, but I recall one of your software-related posts making assumptions that seemed at least a decade out of date to me. At the very least, make a list of key assumptions about software development economics and post them to your blog so commenters can try to invalidate them.

(I'm a CS grad student, working more on the theoretical than the applied side of things, but I have worked in industry.)

Past trends:

Hardware was very expensive, and Moore's law was in full effect for serial processor speed. It usually made sense to trade off programmer time for better performance. It was also acceptable to write code that was inherently serial, and for subsystems to be tightly coupled with each other.

The mythical "Real Programmer" takes this to a logical extreme, creating systems that are difficult for anyone else to understand or modify but offer very high performance on limited hardware.

Software was often (usually?) monetized by selling a license for perpetual use at a fixed price.

Current trend:

Moore's law is still partially in effect, in that you can still buy more FLOPS per dollar every year, but serial processor speeds are stalling; you get more processing power by buying more processors. Computing power is cheap relative to the cost of hiring programmers.

Inherently serial code and tightly coupled subsystems are discouraged more than before.
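
A minimal sketch of what that shift looks like in practice (standard-library Python, illustrative only): instead of waiting for a faster serial processor, spread independent work across cores.

```python
from multiprocessing import Pool

def per_item_work(x):
    return x * x  # stand-in for some expensive, independent computation

if __name__ == "__main__":
    # One worker per CPU core by default; throughput now comes from adding
    # processors rather than from a faster single processor.
    with Pool() as pool:
        print(pool.map(per_item_work, range(8)))
```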

Open source software has become much more common, and developing it is much easier because of the internet. Many (most?) tools used by programmers have to be released as open source software or fade into obscurity.

Software is monetized in a few different ways:

- Open source, but sell support services (the "RedHat model")

- As a service, accessed via the internet; money is made through a recurring subscription fee, ads, or collecting and monetizing customer data (Google does both of the last two by targeting ads based on user behavior)

Single, perpetual licenses still exist, although they seem less popular. I don't have good data, but outside of things like video games and very specialized software, I don't think most new software is sold this way anymore.

Speculating on the future:

Demand for parallelizable code will increase.

Demand for high-reliability software will increase. This will encourage the use of more powerful formal methods and software verification tools, and of languages that support formal verification (e.g. sub-Turing-complete programming languages and more sophisticated type systems).

Related: We'll see more domain-specific languages; the easiest way to ensure reliability for a particular type of task is to make entire classes of errors impossible to express.
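
A toy example of that idea, under my own assumptions about what such a DSL might look like: an expression language with no loops and no general recursion, so non-termination is one class of error that simply cannot be written.

```python
from dataclasses import dataclass
from typing import Union

# A tiny, sub-Turing-complete expression language: programs are finite trees
# of Num and Add nodes, so evaluation terminates by construction.
@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Num, Add]

def evaluate(expr: Expr) -> float:
    # Structural recursion on a finite tree: guaranteed to halt.
    if isinstance(expr, Num):
        return expr.value
    return evaluate(expr.left) + evaluate(expr.right)

print(evaluate(Add(Num(1.0), Add(Num(2.0), Num(3.0)))))  # 6.0
```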

Informally, programming starts to look more like mathematics.

Even more speculative:

If institutions are developed for supporting Tabarrok-style dominant assurance contracts, we'll start seeing software development supported this way.
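
To make the mechanism concrete (all numbers are my own illustrative assumptions): in a Tabarrok-style dominant assurance contract, the entrepreneur promises each pledger a bonus if the funding goal fails, so contributing weakly dominates abstaining.

```python
THRESHOLD = 100_000   # funding goal for the software project (assumed)
PLEDGE = 100          # contribution per pledger (assumed)
FAILURE_BONUS = 5     # paid to each pledger if the goal fails (assumed)

def pledger_payoff(total_pledged, value_of_software):
    if total_pledged >= THRESHOLD:
        return value_of_software - PLEDGE  # goal met: pay, get the software
    return FAILURE_BONUS                   # goal failed: refund plus a bonus

# Whether or not the goal is met, a pledger who values the software above
# the pledge price comes out ahead, so pledging is a dominant strategy.
print(pledger_payoff(150_000, 150.0))  # 50.0
print(pledger_payoff(40_000, 150.0))   # 5
```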

Well, congratulations.
