
No Third AI Way

A few days ago in the Post:

Bryan Johnson .. wants to .. find a way to supercharge the human brain so that we can keep up with the machines. .. His science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain. .. Top neuroscientists who are building the chip .. hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks. .. In an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern.

In a MeaningOfLife.tv video discussion between James Hughes and me, just posted today, Hughes said:

One of the reasons why I’m skeptical about the [em] scenario that you’ve outlined, is that I see a scenario where brains extending themselves through AI and computing tools basically slaved to the core personal identity of meat brains is a more likely scenario than one where we happily acknowledge the rights and autonomy of virtual persons. .. We need to have the kind of AI in our brain which is not just humans 1.0 that get shuffled off to the farm while the actual virtual workers do all the work, as you have imagined.

Many hope for a “third way” alternative to both ems and more standard AI software taking all the jobs. They hope that instead “we” can keep our jobs via new chips “in” or closely integrated with our brain. This seems to me mostly a false hope.

Yes, of course, if we have strong enough global political coordination we could stake out a set of officially human jobs and forbid machines from doing them, no matter how much better machines might be at them. But if we don’t have such strong coordination, then the key question is whether there is an important set of jobs or tasks where ordinary human brains are more productive than artificial hardware. Having that hardware be located in server racks in distant data centers, versus in chips implanted in human brains, seems mostly irrelevant to this.

If artificial hardware can be similarly effective at such tasks, then it can have enormous economic advantages relative to human brains. Even today, the quantity of artificial hardware can be increased very rapidly in factories. And eventually, artificial hardware can be run at much faster speeds, while using much less energy. Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive. It is very hard to see humans outcompeting artificial hardware at such tasks unless the artificial hardware is just very bad at them. That is in fact the case today, but it would not at all be the case with ems, nor with other AI with similar general mental abilities.


No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked, I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key claim is that even if ems take many centuries, they will still come before we achieve human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms, including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one big way in which ems make full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer in most subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
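For concreteness, here is that arithmetic as a tiny Python sketch; every number in it is one of the rough round figures assumed above, not a measurement:

uai_years = 300            # assumed ~3 centuries of UAI progress remaining
econ_doubling_years = 15   # rough current world-economy doubling time

total_doublings = uai_years / econ_doubling_years   # 20 doublings
remaining = total_doublings / 2     # if ems arrive halfway, 10 remain

em_doubling_months = 1              # assumed monthly em-economy doubling
print(remaining * em_doubling_months, "months of em-era growth to full UAI")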

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But as it can be very hard to learn how to code just from inspecting spaghetti object code from other coders, I’m skeptical that this effect could speed up progress by anything like a factor of two, which is what it would take for two (logarithmic) steps on the UAI ladder of progress to be jumped in the time that single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to have grown a whole lot before it is even possible to devote that level of resources to UAI research. So there can be a whole em era before that point.
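A back-of-envelope sketch may help here; it just restates the quote’s own numbers, and then the hardware cost they ignore:

n_ems, quality, speedup = 10**6, 100, 10**6   # the quote's assumed numbers
n_human_researchers = 10**3

claimed_gain = n_ems * quality * speedup / n_human_researchers
print(f"claimed speedup factor: {claimed_gain:.0e}")   # 1e+11, 'a hundred billion'

# But a million-fold speedup costs roughly a million times as much to run,
# so a million mega-ems consume resources like a trillion human-speed ems:
print(f"human-speed em equivalents: {n_ems * speedup:.0e}")   # 1e+12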

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and nations that invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes, studies vary, so there could be a modest, but not a huge, effect.)
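To make that concrete, here is a toy version of such a square-root model; the functional form is the one mentioned above, with made-up numbers rather than fitted data:

import math

def progress_rate(researchers, baseline=1000):
    # Toy model: progress rate scales as sqrt(researchers / baseline).
    return math.sqrt(researchers / baseline)

# Doubling researchers buys only ~41% faster progress; 100x buys only 10x.
for n in (1000, 2000, 4000, 100_000):
    print(f"{n:>7} researchers -> {progress_rate(n):.2f}x baseline rate")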

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see a modest gain in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many doublings of the em era after ems and before human level UAI, resulting in many centuries of subjective cultural change for typical ems. That gives an em era long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.


AI As Software Grant

While I’ve been part of grants before, and had research support, I’ve never had support for my futurist work, including the years I spent writing Age of Em. That now changes:

The Open Philanthropy Project awarded a grant of $264,525 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome. .. This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. (more)

Who is Open Philanthropy? From their summary:

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. .. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. .. The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

A key paragraph from my proposal:

Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.

Both they and I see value in such an analysis even if AI software ends up differing systematically from the software we’ve seen so far:

While we do not believe that the class of scenarios that Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute a significant amount of useful data collection and analysis that might be valuable to our thinking about AI more generally, as well as provide a model for other people to follow when performing similar analyses of other AI scenarios of interest.

My idea is to extract from our decades of experience with software a more detailed description of the basic economics of software production and use. To distinguish, as time allows, many different kinds of inputs to production, styles of production, parts of produced products, and types of uses. And then to sketch out different rough “production functions” appropriate to different cases. That is, to begin to translate basic software engineering insight into economics language.

The simple assumption that software doesn’t fundamentally change in the future is the baseline scenario, to be fed into standard economic models to see what happens when such a more richly described software sector slowly grows to take over the economy. But a richer more detailed description of software economics can also give people a vocabulary for describing their alternative hypotheses about how software will change. And then this analysis framework can be adjusted to explore such alternative hypotheses.
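As one minimal sketch of the kind of vocabulary I have in mind, consider a standard CES production function combining human labor with software tools. Every parameter value below is an illustrative placeholder, not a claim about actual software economics:

def ces_output(labor, software, a=0.5, rho=-1.0):
    # CES: Y = (a*L^rho + (1-a)*S^rho)^(1/rho).
    # rho near 1: software substitutes for labor; rho << 0: complements,
    # so scarce labor remains a bottleneck no matter how good the tools.
    return (a * labor**rho + (1 - a) * software**rho) ** (1 / rho)

labor = 100.0
for year in range(0, 50, 10):
    software = 1.25 ** year   # assumed 25%/year growth in software tools
    print(f"year {year:>2}: output {ces_output(labor, software):7.1f}")

Alternative hypotheses about future AI software can then be phrased as claims about such parameters, for example that rho will drift toward substitution as software covers more kinds of tasks.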

So right from the start I’d like to offer this challenge:

Do you believe that the software that will let machines eventually do pretty much all jobs better than humans (or ems) will differ in foreseeable systematic ways from the software we have seen in the last seventy years of software practice? If so, please express your difference hypothesis as clearly as possible in terminology that would be understandable and familiar to software engineers and/or economists.

I will try to stretch the economic descriptions of software that I develop in the direction of encompassing the most common such hypotheses I find.


Rating Ems vs AIs

Everyone without exception believes his own native customs, and the religion he was brought up in, to be the best (Herodotus 440BC).

I’ve given about sixty talks so far on the subject of my book The Age of Em. A common response is to compare my scenario to one where, instead of ems, it is non-emulation-based software that first replaces humans on most all jobs. While some want to argue about which tech may come first, most prefer to evaluate which tech they want to come first.

Most who compare ems to non-em AI seem to prefer the latter. Some say they are concerned because they see ems as having a lower quality of life than we do today (more on that below). But honestly I mostly hear about humans losing status. Even though meat humans and ems can both be seen as our descendants, people identify more with meat as “us” and see ems as “them.” So they lament meat no longer being the top dog in-charge center-of-attention.

The two scenarios have many similarities. In both scenarios, meat humans must all retire, and robots take over managing the complex details of this new world, which humans are too slow, distant, and stupid to manage. The world economy can grow very fast, letting meat get collectively very rich, and whether any meat soon starves depends mostly on how well meat humans insure and share among themselves. But it is hard to offer much assurance of long run stability, as the world can plausibly change so fast.

Ems, however, seem more threatening to status than other kinds of sophisticated capable machinery. You can more vividly imagine ems clearly winning the traditional contests whereby humans compete for status, and then afterward acting superior, such as by laughing at meat humans. In contrast, other machines can be so alien that we may not be tempted to make status comparisons with them.

If, in contrast, your complaint about the em world is that ems have a lower quality of life, then you have to either care about something more like an average quality of life, or you have to argue that the em quality of life is below some sort of “zero”, i.e., the minimum required for a life to be worth living (or having existed). And this seems to me a hard case to make.

Oh I can see you thinking that em lives aren’t as good as yours; pretty much all cultures find ways to see their culture as superior. But unless you argue that em lives are much worse than the typical human life in history, then either you must say the typical human life was not worth living, or you must accept em lives as worth living. And if you claim that the main human lives that have been worth living are those in your culture, I’ll shake my head at your incredible cultural arrogance.

(Yes, some, like Nick Bostrom in Superintelligence, focus on which scenario reduces existential risk. But even he at one point says “On balance, it looks like the risk of an AI transition would be reduced if whole brain emulation comes before AI,” and in the end he can’t seem to rank these choices.)


The Labor-From-Factories Explosion

As I’ve discussed before, including in my book, the history of humanity so far can be roughly summarized as a sequence of three exponential growth modes: foragers with culture started a few million years ago, farming started about ten thousand years ago, and industry started a few hundred years ago. Doubling times got progressively shorter: a quarter million years, then a millennium, and now fifteen years. Each time, the transition lasted less than a previous doubling time, and roughly similar numbers of humans have lived during each era.

Before humans, animal brains grew exponentially, but even more slowly, doubling about every thirty million years, starting about a half billion years ago. And before that, genomes seem to have doubled exponentially about every half billion years, starting about ten billion years ago.

What if the number of doublings in the current mode, and in the mode that follows it, are comparable to the number of doublings in the last few modes? What if the sharpness of the next transition is comparable to the sharpness of the last few transitions, and the factor by which the doubling time changes next time is comparable to the last few factors? Given these assumptions, the next transition will happen sometime in roughly the next century. Within a period of five years, the economy will be doubling every month or faster. And that new mode will only last a year or so before something else changes.
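Spelled out with the text’s round numbers (all of them assumptions), the extrapolation looks like this:

modes = {"foragers": 250_000, "farming": 1_000, "industry": 15}  # years/doubling

times = list(modes.values())
ratios = [t0 / t1 for t0, t1 in zip(times, times[1:])]
print("speedup factors between modes:", ratios)   # [250.0, ~66.7]

# If the next jump is a comparable factor, say ~100x, the next mode
# would double roughly every:
print(15 / 100 * 12, "months")   # ~1.8 months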

To summarize, usually in history we see relatively steady exponential growth. But five times so far, steady growth has been disturbed by a rapid transition to a much faster rate of growth. It isn’t crazy to think that this might happen again.

Plausibly, new faster exponential modes appear when a feedback loop that was previously limited and blocked becomes unlocked and strong. And so one way to think about what might cause the next faster mode after ours is to look for plausible feedback loops. However, if there are thousands of possible factors that matter for growth and progress, then there are literally millions of possible feedback loops.

For example, denser cities should innovate more, and more innovation can find better ways to make buildings taller, and thus increase city density. More and better tutorial videos make it easier to learn varied skills, and some of those skills help to make more and better tutorial videos. We can go all day making up stories like these.

But as we have only ever seen maybe five of these transitions in all of history, powerful feedback loops whose unlocking causes a huge growth rate jump must be extremely rare. The vast majority of feedback loops do not create such a huge jump when unlocked. So just because you can imagine a currently locked feedback loop does not make unlocking it likely to cause the next great change.

Many people lately have fixated on one particular possible feedback loop: an “intelligence explosion.” The more intelligent a creature is, the more it is able to change creatures like itself to become more intelligent. But if you mean something more specific than “mental goodness” by “intelligence”, then this remains only one of thousands of possibilities. So you need strong additional arguments to see this feedback loop as more likely than all the others. And the mere fact that you can imagine this feedback being positive is not remotely enough.

It turns out that we already know of an upcoming transition of a magnitude similar to the previous transitions, scheduled to arrive roughly when prior trends led us to expect a new transition. This explosion is due to labor-from-factories.

Today we can grow physical capital very fast in factories, usually doubling capital on a scale ranging from a few weeks to a few months, but we grow human workers much more slowly. Since capital isn’t useful without more workers, we are forced to grow today mainly via innovation. But if in the future we find a way to make substitutes for almost all human workers in factories, the economy can grow much faster. This is called an AK model, and standard growth theory says it is plausible that this could let the economy double every month or so.
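Here is a minimal AK-style sketch; the parameter values are placeholders chosen only to show how monthly doubling can fall out of such a model:

import math

# AK model: Y = A*K, and capital accumulates as dK/dt = s*A*K - d*K,
# so K (and output) grows exponentially at rate s*A - d.
A = 1.0      # assumed output per unit of capital per month
s = 0.75     # assumed fraction of output reinvested
d = 0.05     # assumed monthly depreciation rate

growth = s * A - d
print(f"doubling time: {math.log(2) / growth:.2f} months")   # ~1 month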

So if it is plausible that artificial intelligence as capable as humans will appear in the next century or so, then we already know what will cause the next great jump to a faster growth mode. Unless of course some other rare powerful feedback loop is unlocked before then. But if an intelligence explosion isn’t possible until you have machines at least as smart as humans, then that scenario won’t happen until after labor-from-factories. And even then it is far from obvious that this feedback can cause one of the few rare big growth rate jumps.


How Different AGI Software?

My ex-co-blogger Eliezer Yudkowsky recently made a Facebook post saying that recent AI Go progress confirmed his predictions from our foom debate. He and I then discussed this there, and I thought I’d summarize my resulting point of view here.

Today an individual firm can often innovate well in one of its products via a small team that keeps its work secret and shares little with other competing teams. Such innovations can be lumpy in the sense that gain relative to effort varies over a wide range, and a single innovation can sometimes make a big difference to product value.

However, big lumps are rare; typically most value gained is via many small lumps rather than a few big ones. Most innovation comes from detailed practice, rather than targeted research, and abstract theory contributes only a small fraction. Innovations vary in their generality, and this contributes to the variation in innovation lumpiness. For example, a better washing machine can better wash many kinds of clothes.

If instead of looking at individual firms we look at nations as a whole, the picture changes, because a nation is an aggregation of activities across a great many firm teams. While one firm can do well with a secret innovation team that doesn’t share, a big nation would hurt itself a lot by closing its borders to stop sharing with other nations. Single innovations make a much smaller difference to nations as a whole than they do to individual products. So nations grow much more steadily than do firms.

All of these patterns apply not just to products in general, but also to the subcategory of software. While some of our most general innovations may be in software, most software innovation is still made of many small lumps. Software that is broadly capable, such as a tool-filled operating system, is created by much larger teams, and particular innovations make less of a difference to its overall performance. Most software is created via tools that are shared with many other teams of software developers.

From an economic point of view, a near-human-level “artificial general intelligence” (AGI) would be a software system with near-human level competence across almost the entire range of mental tasks that matter to an economy. This is a wide range, much more like the scope of abilities found in a nation than those found in a firm. In contrast, an AI Go program has a far more limited range of abilities, more like those found in typical software products. So even if the recent Go program was made by a small team and embodies lumpy performance gains, it is not obviously a significant outlier relative to the usual pattern in software.

It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience in both ordinary products in general and software in particular is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made via a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.

However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.

Presumably the basis for this claim is that some people think they see a different distribution among some subset of AI software, perhaps including machine learning software. I don’t see it yet, but the obvious way for them to convince skeptics like me is to create and analyze a formal dataset of software projects and innovations. Show us a significantly-deviating subset of AI programs with more economic scope, generality, and lumpiness in gains. Statistics from such an analysis could let us numerically estimate the chances of a single small team encompassing a big fraction of AGI software power and value.
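As a sketch of the sort of statistic such a dataset could yield, here is a toy calculation of how much of total value the single biggest innovation “lump” captures; the lognormal below stands in for real measured project data:

import random

random.seed(0)
lumps = [random.lognormvariate(0, 2) for _ in range(10_000)]  # assumed sizes

top_share = max(lumps) / sum(lumps)
print(f"largest single lump: {top_share:.2%} of total value")

A foom-friendly subset of AI projects would show a much heavier tail than this, with a few lumps carrying most of the value.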

That is, we might estimate the chances of local foom. Which I’ve said isn’t zero; I’ve instead just suggested that foom has gained too much attention relative to its importance.


Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, MIT professor of both Aeronautics and Astronautics and of the History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. ..

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft, and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage more engagement with and response to contrary arguments, but unless these are global institutions others may prefer to free-ride and not contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.


Ford’s Rise of Robots

In the April issue of Reason magazine I review Martin Ford’s new book Rise of the Robots:

Basically, Ford sees a robotic catastrophe coming soon because he sees disturbing signs of the times: inequality, job loss, and so many impressive demos. It’s as if he can feel it in his bones: Dark things are coming! We know robots will eventually take most jobs, so this must be now. … [But] In the end, it seems that Martin Ford’s main issue really is that he dislikes the increase in inequality and wants more taxes to fund a basic income guarantee. All that stuff about robots is a distraction. (more)

I’ll admit Ford is hardly alone, and he ably summarizes what are quite common views. Even so, I’m skeptical.


AI Boom Bet Offers

A month ago I mentioned that lots of folks are now saying “this time is different” – we’ll soon see a big increase in jobs lost to automation, even though we’ve heard such warnings every few decades for centuries. Recently Elon Musk joined in:

The risk of something seriously dangerous happening is in the five year timeframe … 10 years at most.

If new software will soon let computers take over many more jobs, that should greatly increase the demand for such software. And it should greatly increase the demand for computer hardware, which is a strong complement to software. So we should see a big increase in the quantity of computer hardware purchased. The US BEA has been tracking the fraction of the US economy devoted to computer and electronics hardware. That fraction was 2.3% in 1997, 1.7% in 2003, 1.58% in 2008, and 1.56% in 2012. I offer to bet that this number won’t rise above 5% by 2025. And I’ll give 20-1 odds! So far, I have no takers.

The US BLS tracks the US labor share of income, which has fallen from 64% to 58% in the last decade, a clear deviation from prior trends. I don’t think this fall is mainly due to automation, and I think it may continue to fall for those other reasons. Even so, I think this figure rather unlikely to fall below 40% by 2025. So I bet Chris Hallquist at 12-1 odds against this (my $1200 to his $100).
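For reference, here is the simple bookkeeping behind those offers: someone giving m-to-1 odds against an event breaks even if the event has probability 1/(m+1):

bets = [("hardware fraction > 5% by 2025", 20),
        ("labor share < 40% by 2025", 12)]
for claim, m in bets:
    print(f"{claim}: breakeven at {1 / (m + 1):.1%}")   # ~4.8% and ~7.7%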

Yes it would be better to bet on software demand directly, and on world stats, not just US stats. But these stats seem hard to find.

Added 3p: US CS/Eng college majors were: 6.5% in ’70, 9.7% in ’80, 9.6% in ’90, 9.4% in ’00, 7.9% in ’10. I’ll give 8-1 odds against > 15% by 2025. US CS majors were: 2.4K in ’70, 15K in ’80, 25K in ’90, 44K in ’00, 59K in ’03, 43K in ’10 (out of 1716K total grads). I’ll give 10-1 against > 200K by 2025.

Added 9Dec: On twitter @harryh accepted my 20-1 bet for $50. And Sam beats my offer: 


This Time Isn’t Different

~1983 I read two articles that inspired me to change my career. One was by Ted Nelson on hypertext publishing, and the other by Doug Lenat on artificial intelligence. So I quit my U. of Chicago physics Ph.D. program and headed to Silicon Valley, for a job doing AI at Lockheed, and a hobby doing hypertext with Nelson’s Xanadu group.

A few years later, ~1986, I penned the following parable on AI research:

COMPLETE FICTION by Robin Hanson

Once upon a time, in a kingdom nothing like our own, gold was very scarce, forcing jewelers to try and sell little tiny gold rings and bracelets. Then one day a PROSPECTOR came into the capital sporting a large gold nugget he had found in a hill to the west. As the word went out that there was “gold in them thar hills”, the king decided to take an active management role. He appointed a “gold task force” which one year later told the king “you must spend lots of money to find gold, lest your enemies get richer than you.”

So a “gold center” was formed, staffed with many spiffy looking Ph.D types who had recently published papers on gold (remarkably similar to their earlier papers on silver). Experienced prospectors had been interviewed, but they smelled and did not have a good grasp of gold theory.

The center bought a large number of state of the art bulldozers and took them to a large field they had found that was both easy to drive on and freeway accessible. After a week of sore rumps, getting dirty, and not finding anything, they decided they could best help the gold cause by researching better tools.

So they set up some demo sand hills in clear view of the king’s castle and stuffed them with nicely polished gold bars. Then they split into various research projects, such as “bigger diggers”, for handling gold boulders if they found any, and “timber-gold alloys”, for making houses from the stuff when gold eventually became plentiful.

After a while the town barons complained loud enough and also got some gold research money. The lion’s share was allocated to the most politically powerful barons, who assigned it to looking for gold in places where it would be very convenient to find it, such as in rich jewelers’ backyards. A few bulldozers, bought from smiling bulldozer salespeople wearing “Gold is the Future” buttons, were time shared across the land. Searchers who, in their allotted three days per month of bulldozer time, could just not find anything in the backyards of “gold committed” jewelers were admonished to search harder next month.

The smart money understood that bulldozers were the best digging tool, even though they were expensive and hard to use. Some backward prospector types, however, persisted in panning for gold in secluded streams. Though they did have some success, gold theorists knew that this was due to dumb luck and the incorporation of advanced bulldozer research ideas in later pan designs.

After many years of little success, the king got fed up and cut off all gold funding. The center people quickly unearthed their papers which had said so all along. The end.

P.S. There really was gold in them thar hills. Still is.

As you can see, I had become disillusioned on academic research, but still suffered youthful over-optimism on near-term A.I. prospects.

I’ve since learned that we’ve seen “booms” like the one I was caught up in then every few decades for centuries. In each boom many loudly declare high expectations and concern regarding rapid near-term progress in automation. “The machines are finally going to soon put everyone out of work!” Which of course they don’t. We’ve instead seen a pretty slow & steady rate of humans displaced by machines on jobs.

Today we are in another such boom. For example, David Brooks recently parroted Kevin Kelly saying this time is different because now we have cheaper hardware, better algorithms, and more data. But those facts were also true in most of the previous booms; nothing has fundamentally changed! In truth, we remain a very long way from being able to automate all jobs, and we should expect the slow steady rate of job displacement to long continue.

One way to understand this is in terms of the distribution, over human jobs, of how good machines need to be to displace humans. If this parameter is distributed somewhat evenly over many orders of magnitude, then continued steady exponential progress in machine abilities should continue to translate into only slow incremental displacement of human jobs. Yes, machines are vastly better than they were before, but they must get vastly better still to displace most human workers.
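A quick simulation shows how this plays out; the threshold distribution and progress rate below are purely illustrative assumptions:

import random

random.seed(0)
# Assume each job's displacement threshold is log-uniform across
# 12 orders of magnitude of machine ability.
thresholds = [10 ** random.uniform(0, 12) for _ in range(100_000)]

ability = 1.0
for decade in range(1, 7):
    ability *= 100   # assumed 100x machine-ability gain per decade
    done = sum(t <= ability for t in thresholds) / len(thresholds)
    print(f"decade {decade}: {done:.0%} of jobs displaced")

Exponential progress then shows up as slow steady displacement, not a sudden wave.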
