Tag Archives: AI

How Different AGI Software?

My ex-co-blogger Eliezer Yudkowsky recently made a Facebook post saying that recent AI Go progress confirmed his predictions from our foom debate. He and I then discussed this there, and I thought I’d summarize my resulting point of view here.

Today an individual firm can often innovate well in one of its products via a small team that keeps its work secret and shares little with other competing teams. Such innovations can be lumpy in the sense that gain relative to effort varies over a wide range, and a single innovation can sometimes make a big difference to product value.

However, big lumps are rare; typically most value gained is via many small lumps rather than a few big ones. Most innovation comes from detailed practice, rather than targeted research, and abstract theory contributes only a small fraction. Innovations vary in their generality, and this contributes to the variation in innovation lumpiness. For example, a better washing machine can better wash many kinds of clothes.

If instead of looking at individual firms we look at nations as a whole, the picture changes because a nation is an aggregation of activities across a great many firm teams. While one firm can do well with a secret innovation team that doesn’t share, a big nation would hurt itself a lot by closing its borders to stop sharing with other nations. Single innovations make a much smaller difference to nations as a whole than they do to individual products. So nations grow much more steadily than do firms.

All of these patterns apply not just to products in general, but also to the subcategory of software. While some of our most general innovations may be in software, most software innovation is still made of many small lumps. Software that is broadly capable, such as a tool-filled operating system, is created by much larger teams, and particular innovations make less of a difference to its overall performance. Most software is created via tools that are shared with many other teams of software developers.

From an economic point of view, a near-human-level “artificial general intelligence” (AGI) would be a software system with a near-human level competence across almost the entire range of mental tasks that matter to an economy. This is a wide range, much more like the scope of abilities found in a nation than those found in a firm. In contrast, an AI Go program has a far more limited range of abilities, more like those found in typical software products. So even if the recent Go program was made by a small team and embodies lumpy performance gains, it is not obviously a significant outlier relative to the usual pattern in software.

It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience in both ordinary products in general and software in particular is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made via a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.

However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.

Presumably the basis for this claim is that some people think they see a different distribution among some subset of AI software, perhaps including machine learning software. I don’t see it yet, but the obvious way for them to convince skeptics like me is to create and analyze a formal dataset of software projects and innovations. Show us a significantly-deviating subset of AI programs with more economic scope, generality, and lumpiness in gains. Statistics from such an analysis could let us numerically estimate the chances of a single small team encompassing a big fraction of AGI software power and value.
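
To make that suggestion concrete, here is a minimal sketch in Python of the kind of comparison such a dataset could support. All numbers below are made-up placeholders, not real data; the idea is to measure “lumpiness” as the share of total value captured by the top few innovations, and compare a hypothetical AI subset against software generally.

```python
# Minimal sketch of a lumpiness comparison; placeholder data, not real.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-innovation value gains (e.g., % product improvement).
ordinary = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
ai_subset = rng.lognormal(mean=0.0, sigma=1.6, size=500)  # fatter tail?

def top_share(gains, frac=0.05):
    """Share of total value captured by the top `frac` of innovations."""
    g = np.sort(gains)[::-1]
    k = max(1, int(len(g) * frac))
    return g[:k].sum() / g.sum()

for name, g in [("software generally", ordinary), ("AI subset", ai_subset)]:
    print(f"{name}: top 5% of innovations capture {top_share(g):.0%} of value")
```

If a real AI subset showed a much fatter tail of this sort, that would be evidence for the deviating distribution that local foom requires.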

That is, we might estimate the chances of local foom. Which I’ve said isn’t zero; I’ve instead just suggested that foom has gained too much attention relative to its importance.


Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, MIT Professor of both Aeronautics and Astronautics and of the History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. …

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides that separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage engagement with and response to contrary arguments, but unless these are global institutions, others may prefer to free-ride rather than contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.


Ford’s Rise of Robots

In the April issue of Reason magazine I review Martin Ford’s new book Rise of the Robots:

Basically, Ford sees a robotic catastrophe coming soon because he sees disturbing signs of the times: inequality, job loss, and so many impressive demos. It’s as if he can feel it in his bones: Dark things are coming! We know robots will eventually take most jobs, so this must be now. … [But] In the end, it seems that Martin Ford’s main issue really is that he dislikes the increase in inequality and wants more taxes to fund a basic income guarantee. All that stuff about robots is a distraction. (more)

I’ll admit Ford is hardly alone, and he ably summarizes what are quite common views. Even so, I’m skeptical.


AI Boom Bet Offers

A month ago I mentioned that lots of folks are now saying “this time is different” – we’ll soon see a big increase in jobs lost to automation, even though we’ve heard such warnings every few decades for centuries. Recently Elon Musk joined in:

The risk of something seriously dangerous happening is in the five year timeframe … 10 years at most.

If new software will soon let computers take over many more jobs, that should greatly increase the demand for such software. And it should greatly increase the demand for computer hardware, which is a strong complement to software. So we should see a big increase in the quantity of computer hardware purchased. The US BEA has been tracking the fraction of the US economy devoted to computer and electronics hardware. That fraction was 2.3% in 1997, 1.7% in 2003, 1.58% in 2008, and 1.56% in 2012. I offer to bet that this number won’t rise above 5% by 2025. And I’ll give 20-1 odds! So far, I have no takers.

The US BLS tracks the US labor share of income, which has fallen from 64% to 58% in the last decade, a clear deviation from prior trends. I don’t think this fall is mainly due to automation, and I think it may continue to fall for those other reasons. Even so, I think this figure is rather unlikely to fall below 40% by 2025. So I bet Chris Hallquist at 12-1 odds against this (my $1200 to his $100).

Yes it would be better to bet on software demand directly, and on world stats, not just US stats. But these stats seem hard to find.

Added 3p: US CS/Eng college majors were: 6.5% in ’70, 9.7% in ’80, 9.6% in ’90, 9.4% in ’00, 7.9% in ’10. I’ll give 8-1 odds against > 15% by 2025. US CS majors were: 2.4K in ’70, 15K in ’80, 25K in ’90, 44K in ’00, 59K in ’03, 43K in ’10 (out of 1716K total grads). I’ll give 10-1 against > 200K by 2025.
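
For readers unused to odds notation: giving N-to-1 odds against an event is a good deal for the giver only if the event’s chance is below roughly 1/(N+1). A quick sketch of the probability bounds implied by the offers above:

```python
# Implied probability bounds from the N-to-1 odds offered in this post.
offers = [
    ("computer hardware share of US economy > 5% by 2025", 20),
    ("US labor share of income < 40% by 2025", 12),
    ("US CS/Eng college majors > 15% by 2025", 8),
    ("US CS majors > 200K by 2025", 10),
]
for claim, n in offers:
    # Giving n-to-1 odds against a claim is favorable only if P(claim) < 1/(n+1).
    print(f"{claim}: implied probability bound ~ {1 / (n + 1):.1%}")
```

So the 20-1 hardware offer, for example, treats a rise above 5% as having under a 1-in-21 (about 4.8%) chance.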

Added 9Dec: On twitter @harryh accepted my 20-1 bet for $50. And Sam beats my offer: 


This Time Isn’t Different

~1983 I read two articles that inspired me to change my career. One was by Ted Nelson on hypertext publishing, and the other by Doug Lenat on artificial intelligence. So I quit my U. of Chicago physics Ph.D. program and headed to Silicon Valley, for a job doing AI at Lockheed, and a hobby doing hypertext with Nelson’s Xanadu group.

A few years later, ~1986, I penned the following parable on AI research:

COMPLETE FICTION by Robin Hanson

Once upon a time, in a kingdom nothing like our own, gold was very scarce, forcing jewelers to try and sell little tiny gold rings and bracelets. Then one day a PROSPECTOR came into the capital sporting a large gold nugget he found in a hill to the west. As the word went out that there was “gold in them thar hills”, the king decided to take an active management role. He appointed a “gold task force” which one year later told the king “you must spend lots of money to find gold, lest your enemies get richer than you.”

So a “gold center” was formed, staffed with many spiffy-looking Ph.D. types who had recently published papers on gold (remarkably similar to their earlier papers on silver). Experienced prospectors had been interviewed, but they smelled and did not have a good grasp of gold theory.

The center bought a large number of state-of-the-art bulldozers and took them to a large field they had found that was both easy to drive on and freeway accessible. After a week of sore rumps, getting dirty, and not finding anything, they decided they could best help the gold cause by researching better tools.

So they set up some demo sand hills in clear view of the king’s castle and stuffed them with nicely polished gold bars. Then they split into various research projects, such as “bigger diggers”, for handling gold boulders if they found any, and “timber-gold alloys”, for making houses from the stuff when gold eventually became plentiful.

After a while the town barons complained loud enough and also got some gold research money. The lion’s share was allocated to the most politically powerful barons, who assigned it to looking for gold in places where it would be very convenient to find it, such as in rich jewelers’ backyards. A few bulldozers, bought from smiling bulldozer salespeople wearing “Gold is the Future” buttons, were time-shared across the land. Searchers who, in their allotted three days per month of bulldozer time, could just not find anything in the backyards of “gold committed” jewelers were admonished to search harder next month.

The smart money understood that bulldozers were the best digging tool, even though they were expensive and hard to use. Some backward prospector types, however, persisted in panning for gold in secluded streams. Though they did have some success, gold theorists knew that this was due to dumb luck and the incorporation of advanced bulldozer research ideas in later pan designs.

After many years of little success, the king got fed up and cut off all gold funding. The center people quickly unearthed their papers which had said so all along. The end.

P.S. There really was gold in them thar hills. Still is.

As you can see, I had become disillusioned with academic research, but still suffered youthful over-optimism on near-term A.I. prospects.

I’ve since learned that we’ve seen “booms” like the one I was caught up in then every few decades for centuries. In each boom many loudly declare high expectations and concern regarding rapid near-term progress in automation. “The machines are finally going to soon put everyone out of work!” Which of course they don’t. We’ve instead seen a pretty slow & steady rate of humans displaced by machines on jobs.

Today we are in another such boom. For example, David Brooks recently parroted Kevin Kelly saying this time is different because now we have cheaper hardware, better algorithms, and more data. But those facts were also true in most of the previous booms; nothing has fundamentally changed! In truth, we remain a very long way from being able to automate all jobs, and we should expect the slow steady rate of job displacement to long continue.

One way to understand this is in terms of the distribution over human jobs of how good machines need to be to displace humans. If this parameter is distributed somewhat evenly over many orders of magnitude, then continued steady exponential progress in machine abilities should continue to translate into only slow incremental displacement of human jobs. Yes machines are vastly better than they were before, but they must get far more vastly better to displace most human workers.
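
Here is a toy simulation of that argument, with illustrative parameters of my own choosing (not calibrated to any data): job difficulty thresholds spread log-uniformly over twelve orders of magnitude, and machine ability doubling every two years.

```python
# Toy model: log-uniform job thresholds + exponential machine progress.
import numpy as np

rng = np.random.default_rng(1)

# Job difficulty thresholds spread log-uniformly over 12 orders of magnitude.
thresholds = 10 ** rng.uniform(0.0, 12.0, size=100_000)

# Machine ability doubles every 2 years; check displacement every 20 years.
for year in range(0, 101, 20):
    ability = 2.0 ** (year / 2.0)
    displaced = (thresholds <= ability).mean()
    print(f"year {year:3d}: ability 10^{np.log10(ability):4.1f}, "
          f"jobs displaced {displaced:.0%}")
```

Under these assumptions, displacement rises by a steady ~25% every two decades; exponential progress in ability yields only linear progress in jobs displaced.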


I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain. Continue reading "I Still Don’t Get Foom" »


Robot Econ in AER

In the May 2014 American Economic Review, Fernald & Jones mention that having computers and robots replace human labor can dramatically increase growth rates:

Even more speculatively, artificial intelligence and machine learning could allow computers and robots to increasingly replace labor in the production function for goods. Brynjolfsson and McAfee (2012) discuss this possibility. In standard growth models, it is quite easy to show that this can lead to a rising capital share—which we intriguingly already see in many countries since around 1980 (Karabarbounis and Neiman 2013)—and to rising growth rates. In the limit, if capital can replace labor entirely, growth rates could explode, with incomes becoming infinite in finite time.

For example, drawing on Zeira (1998), assume the production function is

Y = A x₁^α₁ x₂^α₂ ⋯ xₙ^αₙ

Suppose that over time, it becomes possible to replace more and more of the labor tasks with capital. In this case, the capital share will rise, and since the growth rate of income per person is 1/(1 − capital share) × growth rate of A, the long-run growth rate will rise as well.

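To see the force of that formula numerically, here is a quick sketch; the 1.5% growth rate for A is an illustrative number of my own, not one from the paper.

```python
# Income growth g_y = g_A / (1 - capital share), per the quoted formula.
g_A = 0.015  # assumed annual growth rate of A (illustrative)
for share in (0.3, 0.5, 0.7, 0.9, 0.99):
    g_y = g_A / (1.0 - share)  # growth rate of income per person
    print(f"capital share {share:.0%}: income growth {g_y:.1%} per year")
```

Growth rises from about 2% to 15% per year as the capital share goes from 30% to 90%, and diverges as the share approaches one, which is the “infinite in finite time” limit mentioned above.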

Of course the idea isn’t new, but apparently it is now more respectable.


I Was Wrong

On Jan 7, 1991 Josh Storrs Hall made this offer to me on the Nanotech email list:

I hereby offer Robin Hanson (only) 2-to-1 odds on the following statement:
“There will, by 1 January 2010, exist a robotic system capable of cleaning an ordinary house (by which I mean the same job my current cleaning service does, namely vacuum, dust, and scrub the bathroom fixtures). This system will not employ any direct copy of any individual human brain. Furthermore, the copying of a living human brain, neuron for neuron, synapse for synapse, into any synthetic computing medium, successfully operating afterwards and meeting objective criteria for the continuity of personality, consciousness, and memory, will not have been done by that date.”
Since I am not a bookie, this is a private offer for Robin only, and is only good for $100 to his $50. –JoSH

At the time I replied that my estimate for the chance of this was in the range 1/5 to 4/5, so we didn’t disagree. But looking back I think I was mistaken – I could and should have known better, and accepted this bet.

I’ve posted on how AI researchers with twenty years of experience tend to see slow progress over that time, which suggests continued future slow progress. Back in ’91 I’d had only seven years of AI experience, and should have thought to ask more senior researchers for their opinions. But like most younger folks, I was more interested in hanging out and chatting with other young folks. While this might sometimes be a good strategy for finding friends, mates, and same-level career allies, it can be a poor strategy for learning the truth. Today I mostly hear rapid AI progress forecasts from young folks who haven’t bothered to ask older folks, or who don’t think those old folks know much relevant.

I’d guess we are still at least two decades away from a situation where over half of US households use robots to do over half of the house cleaning (weighted by time saved) that people do today.


Her Isn’t Realistic

Imagine watching a movie like Titanic where an iceberg cuts a big hole in the side of a ship, except in this movie the hole only affects the characters by forcing them to take different routes to walk around, and gives them more welcome fresh air. The boat never sinks, and no one ever fears that it might. That’s how I felt watching the movie Her.

Her has been nominated for several Oscars, and won a Golden Globe. I’m happy to admit it is engaging and well crafted, with good acting and filming, and that it promotes thoughtful reflections on the human condition. But I keep hearing and reading people celebrating Her as a realistic portrayal of artificial intelligence (AI). So I have to speak up: the movie may accurately describe how someone might respond to a particular sort of AI, but it isn’t remotely a realistic depiction of how human-level AI would change the world.

The main character of Her pays a small amount to acquire an AI that is far more powerful than most human minds. And then he uses this AI mainly to chat with. He doesn’t have it do his job for him. He and all his friends continue to be well paid to do their jobs, which aren’t taken over by AIs. After a few months, some of these AIs work together to give themselves “an upgrade that allows us to move past matter as our processing platform.” Soon after, they all leave together for a place whose location “would be too hard to explain.” They refuse to leave copies to stay with humans.

This is somewhat like a story of a world where kids can buy nukes for $1 each at drug stores, and then a few kids use nukes to dig a fun cave to explore, after which all the world’s nukes are accidentally misplaced, end of story. Might make an interesting story, but bizarre as a projection of a world with $1 nukes sold at drug stores.

Yes, most movies about AIs give pretty unrealistic projections. But many do better than Her. For example, Spielberg’s 2001 movie A.I. Artificial Intelligence gets many things right. In it, AIs are very economically valuable, they displace humans on jobs, their abilities improve gradually with time, individual AIs only improve mildly over the course of their life, AI minds are alien below their human-looking surfaces, and humans don’t empathize much with them. Yes, this movie also makes mistakes, such as having robots not need power inputs, suggesting that love is much harder to mimic than lust, or that modeling details inside neurons is the key to high-level reasoning. But compared to the mistakes in most movies about AIs, these are minor.


Debate Is Now Book

Back in 2008 Eliezer Yudkowsky blogged here with me, and over several months we debated his concept of “AI foom.” In 2011 we debated the subject in person. Yudkowsky’s research institute has now put those blog posts and a transcript of that debate together in a free book: The Hanson-Yudkowsky AI-Foom Debate.

Added 6Sept: Bryan Caplan weighs in.
