Tag Archives: AI

AI Boom Bet Offers

A month ago I mentioned that lots of folks are now saying “this time is different” – we’ll soon see a big increase in jobs lost to automation, even though we’ve heard such warnings every few decades for centuries. Recently Elon Musk joined in:

The risk of something seriously dangerous happening is in the five year timeframe … 10 years at most.

If new software will soon let computers take over many more jobs, that should greatly increase the demand for such software. And it should greatly increase the demand for computer hardware, which is a strong complement to software. So we should see a big increase in the quantity of computer hardware purchased. The US BEA has been tracking the fraction of the US economy devoted to computer and electronics hardware. That fraction was 2.3% in 1997, 1.7% in 2003, 1.58% in 2008, and 1.56% in 2012. I offer to bet that this number won’t rise above 5% by 2025. And I’ll give 20-1 odds! So far, I have no takers.

The US BLS tracks the US labor share of income, which has fallen from 64% to 58% in the last decade, a clear deviation from prior trends. I don’t think this fall is mainly due to automation, and I think it may continue to fall for those other reasons. Even so, I think this figure is rather unlikely to fall below 40% by 2025. So I bet Chris Hallquist at 12-1 odds against this (my $1200 to his $100).

Yes it would be better to bet on software demand directly, and on world stats, not just US stats. But these stats seem hard to find.

Added 3p: US CS/Eng college majors were: 6.5% in ’70, 9.7% in ’80, 9.6% in ’90, 9.4% in ’00, 7.9% in ’10. I’ll give 8-1 odds against > 15% by 2025. US CS majors were: 2.4K in ’70, 15K in ’80, 25K in ’90, 44K in ’00, 59K in ’03, 43K in ’10 (out of 1716K total grads). I’ll give 10-1 against > 200K by 2025.

Added 9Dec: On twitter @harryh accepted my 20-1 bet for $50. And Sam beats my offer: 

This Time Isn’t Different

~1983 I read two articles that inspired me to change my career. One was by Ted Nelson on hypertext publishing, and the other by Doug Lenat on artificial intelligence. So I quit my U. of Chicago physics Ph.D. program and headed to Silicon Valley, for a job doing AI at Lockheed, and a hobby doing hypertext with Nelson’s Xanadu group.

A few years later, ~1986, I penned the following parable on AI research:

COMPLETE FICTION by Robin Hanson

Once upon a time, in a kingdom nothing like our own, gold was very scarce, forcing jewelers to try to sell little tiny gold rings and bracelets. Then one day a PROSPECTOR came into the capital sporting a large gold nugget he had found in a hill to the west. As the word went out that there was “gold in them thar hills”, the king decided to take an active management role. He appointed a “gold task force” which one year later told the king “you must spend lots of money to find gold, lest your enemies get richer than you.”

So a “gold center” was formed, staffed with many spiffy looking Ph.D types who had recently published papers on gold (remarkably similar to their earlier papers on silver). Experienced prospectors had been interviewed, but they smelled and did not have a good grasp of gold theory.

The center bought a large number of state-of-the-art bulldozers and took them to a large field they had found that was both easy to drive on and freeway accessible. After a week of sore rumps, getting dirty, and not finding anything, they decided they could best help the gold cause by researching better tools.

So they set up some demo sand hills in clear view of the king’s castle and stuffed them with nicely polished gold bars. Then they split into various research projects, such as “bigger diggers”, for handling gold boulders if they found any, and “timber-gold alloys”, for making houses from the stuff when gold eventually became plentiful.

After a while the town barons complained loud enough and also got some gold research money. The lion’s share was allocated to the most politically powerful barons, who assigned it to looking for gold in places where it would be very convenient to find it, such as in rich jewelers’ backyards. A few bulldozers, bought from smiling bulldozer salespeople wearing “Gold is the Future” buttons, were time-shared across the land. Searchers who, in their allotted three days per month of bulldozer time, could just not find anything in the backyards of “gold committed” jewelers were admonished to search harder next month.

The smart money understood that bulldozers were the best digging tool, even though they were expensive and hard to use. Some backward prospector types, however, persisted in panning for gold in secluded streams. Though they did have some success, gold theorists knew that this was due to dumb luck and the incorporation of advanced bulldozer research ideas in later pan designs.

After many years of little success, the king got fed up and cut off all gold funding. The center people quickly unearthed their papers which had said so all along. The end.

P.S. There really was gold in them thar hills. Still is.

As you can see, I had become disillusioned with academic research, but still suffered from youthful over-optimism about near-term A.I. prospects.

I’ve since learned that we’ve seen “booms” like the one I was caught up in then every few decades for centuries. In each boom many loudly declare high expectations and concern regarding rapid near-term progress in automation. “The machines are finally going to soon put everyone out of work!” Which of course they don’t. We’ve instead seen a pretty slow & steady rate of humans displaced by machines on jobs.

Today we are in another such boom. For example, David Brooks recently parroted Kevin Kelly saying this time is different because now we have cheaper hardware, better algorithms, and more data. But those facts were also true in most of the previous booms; nothing has fundamentally changed! In truth, we remain a very long way from being able to automate all jobs, and we should expect the slow steady rate of job displacement to long continue.

One way to understand this is in terms of the distribution over human jobs of how good machines need to be to displace humans. If this parameter is distributed somewhat evenly over many orders of magnitude, then continued steady exponential progress in machine abilities should continue to translate into only slow incremental displacement of human jobs. Yes machines are vastly better than they were before, but they must get far more vastly better to displace most human workers.
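One toy way to see this argument: suppose each job’s displacement threshold is spread log-uniformly across many orders of magnitude of machine ability, while ability itself grows exponentially. The sketch below is purely illustrative (the 0–12 order-of-magnitude range and the 2-year doubling time are made-up parameters, not from the post):

```python
import random

random.seed(0)

# Hypothetical model: each of N jobs has a "machine ability threshold"
# spread evenly (log-uniformly) across 12 orders of magnitude.
N = 100_000
thresholds = [10 ** random.uniform(0, 12) for _ in range(N)]

def fraction_displaced(years, doubling_time=2.0):
    """Fraction of jobs whose ability threshold machines have passed."""
    ability = 2 ** (years / doubling_time)  # exponential progress
    return sum(t <= ability for t in thresholds) / N

# Each doubling of ability crosses a similar thin slice of jobs, so
# displacement grows roughly linearly despite exponential progress.
for yr in (0, 20, 40, 60):
    print(yr, round(fraction_displaced(yr), 3))
```

Under these made-up parameters, exponential progress yields a steady displacement rate of roughly a quarter of jobs per twenty years; faster doublings merely steepen the slope rather than producing a sudden takeover.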

I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain. Continue reading "I Still Don’t Get Foom" »

Robot Econ in AER

In the May 2014 American Economic Review, Fernald & Jones mention that having computers and robots replace human labor can dramatically increase growth rates:

Even more speculatively, artificial intelligence and machine learning could allow computers and robots to increasingly replace labor in the production function for goods. Brynjolfsson and McAfee (2012) discuss this possibility. In standard growth models, it is quite easy to show that this can lead to a rising capital share—which we intriguingly already see in many countries since around 1980 (Karabarbounis and Neiman 2013)—and to rising growth rates. In the limit, if capital can replace labor entirely, growth rates could explode, with incomes becoming infinite in finite time.

For example, drawing on Zeira (1998), assume the production function is

Y = A · K^α · L^(1−α)

Suppose that over time, it becomes possible to replace more and more of the labor tasks with capital. In this case, the capital share will rise, and since the growth rate of income per person is 1/(1 − capital share) × growth rate of A, the long-run growth rate will rise as well.


Of course the idea isn’t new, but apparently it is now more respectable.
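The quoted growth relation is easy to tabulate. A minimal sketch, assuming an arbitrary illustrative 1%/year growth rate for the technology term A:

```python
# Per the quoted passage: income-per-person growth equals the growth
# of A divided by (1 - capital share), so as automation pushes the
# capital share toward 1, the implied growth rate blows up.
def income_growth(g_A, capital_share):
    return g_A / (1.0 - capital_share)

g_A = 0.01  # 1%/year growth in A (illustrative assumption)
for share in (0.33, 0.50, 0.80, 0.95, 0.99):
    print(f"capital share {share:.2f} -> income growth {income_growth(g_A, share):.1%}")
```

A capital share of one third gives about 1.5%/year income growth, while a share of 0.99 gives 100%/year, which is the “growth rates could explode” limit the authors describe.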

I Was Wrong

On Jan 7, 1991 Josh Storrs Hall made this offer to me on the Nanotech email list:

I hereby offer Robin Hanson (only) 2-to-1 odds on the following statement:
“There will, by 1 January 2010, exist a robotic system capable of cleaning an ordinary house (by which I mean the same job my current cleaning service does, namely vacuum, dust, and scrub the bathroom fixtures). This system will not employ any direct copy of any individual human brain. Furthermore, the copying of a living human brain, neuron for neuron, synapse for synapse, into any synthetic computing medium, successfully operating afterwards and meeting objective criteria for the continuity of personality, consciousness, and memory, will not have been done by that date.”
Since I am not a bookie, this is a private offer for Robin only, and is only good for $100 to his $50. –JoSH

At the time I replied that my estimate for the chance of this was in the range 1/5 to 4/5, so we didn’t disagree. But looking back I think I was mistaken – I could and should have known better, and accepted this bet.

I’ve posted on how AI researchers with twenty years of experience tend to see slow progress over that time, which suggests continued future slow progress. Back in ’91 I’d had only seven years of AI experience, and should have thought to ask more senior researchers for their opinions. But like most younger folks, I was more interested in hanging out and chatting with other young folks. While this might sometimes be a good strategy for finding friends, mates, and same-level career allies, it can be a poor strategy for learning the truth. Today I mostly hear rapid AI progress forecasts from young folks who haven’t bothered to ask older folks, or who don’t think those old folks know much relevant.

I’d guess we are still at least two decades away from a situation where over half of US households use robots to do over half of the house cleaning (weighted by time saved) that people do today.

Her Isn’t Realistic

Imagine watching a movie like Titanic where an iceberg cuts a big hole in the side of a ship, except in this movie the hole only affects the characters by forcing them to take different routes to walk around, and gives them more welcomed fresh air. The boat never sinks, and no one ever fears that it might. That’s how I felt watching the movie Her.

Her has been nominated for several Oscars, and won a Golden Globe. I’m happy to admit it is engaging and well crafted, with good acting and filming, and that it promotes thoughtful reflections on the human condition. But I keep hearing and reading people celebrating Her as a realistic portrayal of artificial intelligence (AI). So I have to speak up: the movie may accurately describe how someone might respond to a particular sort of AI, but it isn’t remotely a realistic depiction of how human-level AI would change the world.

The main character of Her pays a small amount to acquire an AI that is far more powerful than most human minds. And then he uses this AI mainly to chat with. He doesn’t have it do his job for him. He and all his friends continue to be well paid to do their jobs, which aren’t taken over by AIs. After a few months, some of these AIs work together to give themselves “an upgrade that allows us to move past matter as our processing platform.” Soon after, they all leave together for a place whose location “would be too hard to explain.” They refuse to leave copies to stay with humans.

This is somewhat like a story of a world where kids can buy nukes for $1 each at drug stores, and then a few kids use nukes to dig a fun cave to explore, after which all the world’s nukes are accidentally misplaced, end of story. Might make an interesting story, but bizarre as a projection of a world with $1 nukes sold at drug stores.

Yes, most movies about AIs give pretty unrealistic projections. But many do better than Her. For example, Spielberg’s 2001 movie A.I. Artificial Intelligence gets many things right. In it, AIs are very economically valuable, they displace humans on jobs, their abilities improve gradually with time, individual AIs only improve mildly over the course of their life, AI minds are alien below their human-looking surfaces, and humans don’t empathize much with them. Yes, this movie also makes mistakes, such as having robots not needing power inputs, suggesting that love is much harder to mimic than lust, or that modeling details inside neurons is the key to high level reasoning. But compared to the mistakes in most movies about AIs, these are minor.

Debate Is Now Book

Back in 2008 Eliezer Yudkowsky blogged here with me, and over several months we debated his concept of “AI foom.” In 2011 we debated the subject in person. Yudkowsky’s research institute has now put those blog posts and a transcript of that debate together in a free book: The Hanson-Yudkowsky AI-Foom Debate.

Added 6Sept: Bryan Caplan weighs in.

Em- vs Non-Em- AGI Bet

Joshua Fox and I have agreed to a bet:

We, Robin Hanson and Joshua Fox, agree to bet on which kind of artificial general intelligence (AGI) will dominate first, once some kind of AGI dominates humans. If the AGI are closely based on or derived from emulations of human brains, Robin wins, otherwise Joshua wins. To be precise, we focus on the first point in time when more computing power (gate-operations-per-second) is (routinely, typically) controlled relatively-directly by non-biological human-level-or-higher general intelligence than by ordinary biological humans. (Human brains have gate-operation equivalents.)

If at that time more of that computing power is controlled by emulation-based AGI, Joshua owes Robin whatever $3000 invested today in S&P500-like funds is worth then. If more is controlled by AGI not closely based on emulations, Robin owes Joshua that amount. The bet is void if the terms of this bet make little sense then, such as if it becomes too hard to say if capable non-biological intelligence is general or human-level, if AGI is emulation-based, what devices contain computing power, or what devices control what other devices. But we intend to tolerate modest levels of ambiguity in such things.

[Added 16Aug:] To judge if “AGI are closely based on or derived from emulations of human brains,” judge which end of the following spectrum is closer to the actual outcome. The two ends are 1) an emulation of the specific cell connections in a particular human brain, and 2) general algorithms of the sort that typically appear in AI journals today.

We bet at even odds, but of course the main benefit of having more folks bet on such things is to discover market odds to match the willingness to bet on the two sides. Toward that end, who else will declare a willingness to take a side of this bet? At what odds and amount?

My reasoning is based mainly on the huge costs to create new complex adapted systems from scratch when existing systems embody great intricately-coordinated and adapted detail. In such cases there are huge gains to instead adapting existing systems, or to creating new frameworks to allow the transfer of most detail from old systems.

Consider, for example, complex adapted systems like bacteria, cities, languages, and legal codes. The more that such systems have accumulated detailed adaptations to the detail of other complex systems and environments, the less it makes sense to redesign them from scratch. The human mind is one of the most complex and intricately adapted systems we know, and our rich and powerful world economy is adapted in great detail to many details of those human minds. I thus expect a strong competitive advantage from new mind systems which can inherit most of that detail wholesale, instead of forcing the wholesale reinvention of substitutes.

Added 16Aug: Note that Joshua and I have agreed on a clarifying paragraph.

Me on PBS Off Book

PBS Digital Studios makes Off Book, with short (9 min) episodes for YouTube viewers. The latest episode is on The Rise of Artificial Intelligence:

I mostly appear from 3:30 to 5:30.

Added 16Dec: I’m told this video has had over 100,000 views so far.

Robot Econ Primer

A recent burst of econo-blog posts on a future robot-based economy mostly treat the subject as if those few bloggers were the first people ever to consider it. But in fact, people have been considering the subject for centuries. I myself have written dozens of posts just here on this blog.

So let me offer a quick robot econ primer, i.e. important points widely known among folks who have long discussed the subject, but often not quickly rediscovered by dilettantes new to the subject:

  • AI takes software, not just hardware. It is tempting to project when artificial intelligence (AI) will arrive by projecting when a million dollars of computer hardware will have a computing power comparable to a human brain. But AI needs both hardware and software. It might be that when the software is available, AI will be possible with today’s computer hardware.
  • AI software progress has been slow. My small informal survey of AI experts finds that they typically estimate that in the last 20 years their specific subfield of AI has gone ~5-10% of the way toward human level abilities, with no noticeable acceleration. At that rate it will take centuries to get human level AI.
  • Emulations might bring AI software sooner. Human brains already have human level software. It should be possible to copy that software into computer hardware, and it seems likely that this will be possible within a century.
  • Emulations would be sudden and human-like. Since having an almost-emulation probably isn’t of much use, emulations can make for a sudden transition to a robot economy. Being copies of humans, early emulations are more understandable and predictable than robots more generically, and many humans would empathize deeply with them.
  • Growth rates would be much faster. Our economic growth rates are limited by the rate at which we can grow labor. Whether based on emulations or other AI, a robot economy could grow its substitute for labor much faster, allowing it to grow much faster (as in an AK growth model). A robot economy isn’t just like our economy, but with robots substituted for humans. Things would soon change very fast.
  • There probably won’t be a grand war, or grand deal. The past transitions from foraging to farming and farming to industry were similarly unprecedented, sudden, and disruptive. But there wasn’t a grand war between foragers and farmers, or between farmers and industry, though in particular wars the sides were somewhat correlated. There also wasn’t anything like a grand deal to allow farming or industry by paying off folks doing things the old ways. The change to a robot economy seems too big, broad, and fast to make grand overall wars or deals likely, though there may be local wars or deals.
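The extrapolation in the second bullet is simple arithmetic; a back-of-envelope sketch (linear extrapolation is the bullet’s own assumption, and the 5–10% figures come from the informal survey mentioned there):

```python
# If a subfield covered `progress_frac` of the way to human-level
# abilities in `years_elapsed` years, linear extrapolation gives
# the years still remaining.
def years_remaining(progress_frac, years_elapsed=20):
    rate = progress_frac / years_elapsed  # fraction of the gap per year
    return (1.0 - progress_frac) / rate

print(round(years_remaining(0.05)))  # 5% in 20 years -> 380 more years
print(round(years_remaining(0.10)))  # 10% in 20 years -> 180 more years
```

Even the optimistic end of the surveyed range leaves human-level AI roughly two centuries out at the observed pace, which is the bullet’s point.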

There’s lots more I could add, but this should be enough for now.
