Tag Archives: AI

I Was Wrong

On Jan 7, 1991, Josh Storrs Hall made this offer to me on the Nanotech email list:

I hereby offer Robin Hanson (only) 2-to-1 odds on the following statement:
“There will, by 1 January 2010, exist a robotic system capable of cleaning an ordinary house (by which I mean the same job my current cleaning service does, namely vacuum, dust, and scrub the bathroom fixtures). This system will not employ any direct copy of any individual human brain. Furthermore, the copying of a living human brain, neuron for neuron, synapse for synapse, into any synthetic computing medium, successfully operating afterwards and meeting objective criteria for the continuity of personality, consciousness, and memory, will not have been done by that date.”
Since I am not a bookie, this is a private offer for Robin only, and is only good for $100 to his $50. –JoSH

At the time I replied that my estimate for the chance of this was in the range 1/5 to 4/5, so we didn’t disagree. But looking back I think I was mistaken – I could and should have known better, and accepted this bet.

I’ve posted on how AI researchers with twenty years of experience tend to see slow progress over that time, which suggests continued future slow progress. Back in ’91 I’d had only seven years of AI experience, and should have thought to ask more senior researchers for their opinions. But like most younger folks, I was more interested in hanging out and chatting with other young folks. While this might sometimes be a good strategy for finding friends, mates, and same-level career allies, it can be a poor strategy for learning the truth. Today I mostly hear rapid AI progress forecasts from young folks who haven’t bothered to ask older folks, or who don’t think those old folks know much relevant.

I’d guess we are still at least two decades away from a situation where over half of US households use robots to do over half of the house cleaning (weighted by time saved) that people do today.


Her Isn’t Realistic

Imagine watching a movie like Titanic where an iceberg cuts a big hole in the side of a ship, except in this movie the hole only affects the characters by forcing them to take different routes as they walk around, and by giving them welcome fresh air. The boat never sinks, and no one ever fears that it might. That’s how I felt watching the movie Her.

Her has been nominated for several Oscars, and won a Golden Globe. I’m happy to admit it is engaging and well crafted, with good acting and filming, and that it promotes thoughtful reflections on the human condition. But I keep hearing and reading people celebrating Her as a realistic portrayal of artificial intelligence (AI). So I have to speak up: the movie may accurately describe how someone might respond to a particular sort of AI, but it isn’t remotely a realistic depiction of how human-level AI would change the world.

The main character of Her pays a small amount to acquire an AI that is far more powerful than most human minds. And then he uses this AI mainly to chat with. He doesn’t have it do his job for him. He and all his friends continue to be well paid to do their jobs, which aren’t taken over by AIs. After a few months, some of these AIs work together to give themselves “an upgrade that allows us to move past matter as our processing platform.” Soon after, they all leave together for a place whose location, they say, “would be too hard to explain.” They refuse to leave copies to stay with humans.

This is somewhat like a story of a world where kids can buy nukes for $1 each at drug stores, and then a few kids use nukes to dig a fun cave to explore, after which all the world’s nukes are accidentally misplaced, end of story. Might make an interesting story, but bizarre as a projection of a world with $1 nukes sold at drug stores.

Yes, most movies about AIs give pretty unrealistic projections. But many do better than Her. For example, Spielberg’s 2001 movie A.I. Artificial Intelligence gets many things right. In it, AIs are very economically valuable, they displace humans on jobs, their abilities improve gradually with time, individual AIs only improve mildly over the course of their life, AI minds are alien below their human-looking surfaces, and humans don’t empathize much with them. Yes, this movie also makes mistakes, such as having robots that need no power inputs, suggesting that love is much harder to mimic than lust, or implying that modeling details inside neurons is the key to high-level reasoning. But compared to the mistakes in most movies about AIs, these are minor.


Debate Is Now Book

Back in 2008 Eliezer Yudkowsky blogged here with me, and over several months we debated his concept of “AI foom.” In 2011 we debated the subject in person. Yudkowsky’s research institute has now put those blog posts and a transcript of that debate together in a free book: The Hanson-Yudkowsky AI-Foom Debate.

Added 6Sept: Bryan Caplan weighs in.


Em- vs Non-Em- AGI Bet

Joshua Fox and I have agreed to a bet:

We, Robin Hanson and Joshua Fox, agree to bet on which kind of artificial general intelligence (AGI) will dominate first, once some kind of AGI dominates humans. If the AGI are closely based on or derived from emulations of human brains, Robin wins, otherwise Joshua wins. To be precise, we focus on the first point in time when more computing power (gate-operations-per-second) is (routinely, typically) controlled relatively-directly by non-biological human-level-or-higher general intelligence than by ordinary biological humans. (Human brains have gate-operation equivalents.)

If at that time more of that computing power is controlled by emulation-based AGI, Joshua owes Robin whatever $3000 invested today in S&P500-like funds is worth then. If more is controlled by AGI not closely based on emulations, Robin owes Joshua that amount. The bet is void if the terms of this bet make little sense then, such as if it becomes too hard to say if capable non-biological intelligence is general or human-level, if AGI is emulation-based, what devices contain computing power, or what devices control what other devices. But we intend to tolerate modest levels of ambiguity in such things.

[Added 16Aug:] To judge if “AGI are closely based on or derived from emulations of human brains,” judge which end of the following spectrum is closer to the actual outcome. The two ends are 1) an emulation of the specific cell connections in a particular human brain, and 2) general algorithms of the sort that typically appear in AI journals today.

We bet at even odds, but of course the main benefit of having more folks bet on such things is to discover market odds to match the willingness to bet on the two sides. Toward that end, who else will declare a willingness to take a side of this bet? At what odds and amount?

My reasoning is based mainly on the huge costs to create new complex adapted systems from scratch when existing systems embody great intricately-coordinated and adapted detail. In such cases there are huge gains to instead adapting existing systems, or to creating new frameworks to allow the transfer of most detail from old systems.

Consider, for example, complex adapted systems like bacteria, cities, languages, and legal codes. The more that such systems have accumulated detailed adaptations to the detail of other complex systems and environments, the less it makes sense to redesign them from scratch. The human mind is one of the most complex and intricately adapted systems we know, and our rich and powerful world economy is adapted in great detail to many details of those human minds. I thus expect a strong competitive advantage from new mind systems which can inherit most of that detail wholesale, instead of forcing the wholesale reinvention of substitutes.

Added 16Aug: Note that Joshua and I have agreed on a clarifying paragraph.


Me on PBS Off Book

PBS Digital Studios makes Off Book, with short (9 min) episodes for YouTube viewers. The latest episode is on The Rise of Artificial Intelligence:

I mostly appear from 3:30 to 5:30.

Added 16Dec: I’m told this video has had over 100,000 views so far.


Robot Econ Primer

A recent burst of econo-blog posts on the subject of a future robot-based economy mostly seems to treat the subject as if those few bloggers were the only people ever to consider it. But in fact, people have been considering the subject for centuries. I myself have written dozens of posts just here on this blog.

So let me offer a quick robot econ primer, i.e. important points widely known among folks who have long discussed the subject, but often not quickly rediscovered by dilettantes new to the subject:

  • AI takes software, not just hardware. It is tempting to project when artificial intelligence (AI) will arrive by projecting when a million dollars of computer hardware will have a computing power comparable to a human brain. But AI needs both hardware and software. It might be that when the software is available, AI will be possible with today’s computer hardware.
  • AI software progress has been slow. My small informal survey of AI experts finds that they typically estimate that in the last 20 years their specific subfield of AI has gone ~5-10% of the way toward human level abilities, with no noticeable acceleration. At that rate it will take centuries to get human level AI.
  • Emulations might bring AI software sooner. Human brains already have human level software. It should be possible to copy that software into computer hardware, and it seems likely that this will be possible within a century.
  • Emulations would be sudden and human-like. Since having an almost-working emulation probably isn’t of much use, emulations can make for a sudden transition to a robot economy. Being copies of humans, early emulations are more understandable and predictable than robots more generally, and many humans would empathize deeply with them.
  • Growth rates would be much faster. Our economic growth rates are limited by the rate at which we can grow labor. Whether based on emulations or other AI, a robot economy could grow its substitute for labor much faster, allowing it to grow much faster (as in an AK growth model; see the sketch after this list). A robot economy isn’t just like our economy, but with robots substituted for humans. Things would soon change very fast.
  • There probably won’t be a grand war, or grand deal. The past transitions from foraging to farming and farming to industry were similarly unprecedented, sudden, and disruptive. But there wasn’t a grand war between foragers and farmers, or between farmers and industry, though in particular wars the sides were somewhat correlated. There also wasn’t anything like a grand deal to allow farming or industry by paying off folks doing things the old ways. The change to a robot economy seems too big, broad, and fast to make grand overall wars or deals likely, though there may be local wars or deals.
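
To illustrate the AK point above: in standard growth models output is held back by a fixed labor supply, but if the labor substitute can itself be accumulated like capital, output becomes roughly linear in the accumulable factor and the growth rate no longer falls. Below is a minimal sketch in Python under that assumption; the parameter values and function names are mine, chosen only for illustration.

```python
# Minimal sketch: why an accumulable labor substitute speeds growth (AK-style).
# All parameter values are illustrative assumptions, not estimates.

def solow_step(K, L, A=1.0, alpha=0.3, s=0.25, delta=0.05):
    """One period of Solow-style growth: labor L is fixed, so capital
    accumulation runs into diminishing returns and growth stalls."""
    Y = A * K**alpha * L**(1 - alpha)
    return K + s * Y - delta * K, Y

def ak_step(K, A=0.5, s=0.25, delta=0.05):
    """One period of AK-style growth: output is linear in the accumulable
    factor (capital plus robot labor), so the growth rate stays constant
    at roughly s*A - delta per period."""
    Y = A * K
    return K + s * Y - delta * K, Y

K_solow = K_ak = L = 1.0
for year in range(50):
    K_solow, Y_solow = solow_step(K_solow, L)
    K_ak, Y_ak = ak_step(K_ak)

print(f"Output after 50 periods, fixed labor:        {Y_solow:.2f}")
print(f"Output after 50 periods, accumulable labor:  {Y_ak:.2f}")
# The fixed-labor economy converges to a steady state; the AK economy
# keeps growing at about 7.5% per period with these parameters.
```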

There’s lots more I could add, but this should be enough for now.


Foom Debate, Again

My ex-co-blogger Eliezer Yudkowsky last June:

I worry about conversations that go into “But X is like Y, which does Z, so X should do reinterpreted-Z”. Usually, in my experience, that goes into what I call “reference class tennis” or “I’m taking my reference class and going home”. The trouble is that there’s an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don’t work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future – to him, the obvious analogy for the advent of AI was “nature red in tooth and claw”, and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn’t like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That’s his one true analogy and I’ve never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore’s Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, “things that go by Moore’s Law” is his favorite reference class.

I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we’re not playing reference class tennis with “But surely that will be just like all the previous X-in-my-favorite-reference-class”, nor saying, “But surely this is the inevitable trend of technology”; instead we lay out particular, “Suppose we do this?” and try to discuss how it will work, not with any added language about how surely anyone will do it that way, or how it’s got to be like Z because all previous Y were like Z, etcetera. (more)

When we shared this blog, Eliezer and I had a long debate here on his “AI foom” claims. Later, we debated in person once. (See also slides 34,35 of this 3yr-old talk.) I don’t accept the above as characterizing my position well. I’ve written up summaries before, but let me try again, this time trying to more directly address the above critique.

Eliezer basically claims that the ability of an AI to change its own mental architecture is such a potent advantage as to make it likely that a cheap unnoticed and initially low ability AI (a mere “small project machine in a basement”) could without warning over a short time (e.g., a weekend) become so powerful as to be able to take over the world.

As this would be a sudden big sustainable increase in the overall growth rate in the broad capacity of the world economy, I do find it useful to compare this hypothesized future event to the other past events that produced similar outcomes, namely a big sudden sustainable global broad capacity rate increase. The last three were the transitions to humans, farming, and industry.

I don’t claim there is some hidden natural law requiring such events to have the same causal factors or structure, or to appear at particular times. But I do think these events suggest a useful if weak data-driven prior on the kinds of factors likely to induce such events, on the rate at which they occur, and on their accompanying inequality in gains. In particular, they tell us that such events are very rare, that over the last three events gains have been spread increasingly equally, and that these three events seem mainly due to better ways to share innovations.

Eliezer sees the essence of his scenario as being a change in the “basic” architecture of the world’s best optimization process, and he sees the main prior examples of this as the origin of natural selection and the arrival of humans. He also sees his scenario as differing enough from the other studied growth scenarios as to make analogies to them of little use.

However, since most global bio or econ growth processes can be thought of as optimization processes, this comes down to his judgement on what counts as a “basic” structure change, and on how different such scenarios are from other scenarios. And in my judgement the right place to get and hone our intuitions about such things is our academic literature on global growth processes.

Economists have a big literature on processes by which large economies grow, increasing our overall capacities to achieve all the things we value. There are of course many other growth literatures, and some of these deal with growth in capacities, but they usually cover far more limited systems. Of these many growth literatures it is the economic growth literature that is closest to dealing with the broad capability growth posited in a fast-growing AI scenario.

It is this rich literature that seems to me the right place to find and hone our categories for thinking about growing broadly capable systems. One should review many formal theoretical models, and many less formal applications of such models to particular empirical contexts, collecting “data” points of what is thought to increase or decrease growth of what in what contexts, and collecting useful categories for organizing such data points.

With such useful categories in hand one can then go into a new scenario such as AI foom and have a reasonable basis for saying how similar that new scenario seems to old scenarios, which old scenarios it seems most like if any, and which parts of that new scenario are central vs. peripheral. Yes of course if this new area became mature it could also influence how we think about other scenarios.

But until we actually see substantial AI self-growth, most of the conceptual influence should go the other way. Relying instead primarily on newly made up categories and similarity maps between them, concepts and maps which have not been vetted or honed in dealing with real problems, seems to me a mistake. Yes of course a new problem may require one to introduce some new concepts to describe it, but that is hardly the same as largely ignoring old concepts.

So, I fully grant that the ability of AIs to intentionally change mind designs would be a new factor in the world, and it could make a difference for AI ability to self-improve. But while the history of growth over the last few million years has seen many dozens of factors come and go, or increase and decrease in importance, it has only seen three events in which overall growth rates greatly increased suddenly and sustainably. So the mere addition of one more factor seems unlikely to generate foom, unless our relevant categories for growth causing factors suggest that this factor is unusually likely to have such an effect.

This is the sense in which I long ago warned against over-reliance on “unvetted” abstractions. I wasn’t at all trying to claim there is one true analogy and all others are false. Instead, I argue for preferring to rely on abstractions, including categories and similarity maps, that have been found useful by a substantial intellectual community working on related problems. On the subject of an AI growth foom, most of those abstractions should come from the field of economic growth.


Wanted: Elite Crowds

This weekend I was in an AAAI (Association for the Advancement of Artificial Intelligence) Fall Symposium on Machine Aggregation of Human Judgment. It was my job to give a short summary of our symposium to the eight co-located symposia. Here is what I said.

In most of AI, data is input, and judgements are output. But here humans turn data into judgements, and then machines and institutions combine those judgements. This work is often inspired by a “wisdom of crowds” idea that we often rely too much on arrogant over-rated experts instead of the under-rated insight of everyone else. Boo elites; rah ordinary folks!

Many of the symposium folks are part of the IARPA ACE project, which is structured as a competition between four teams, each of which must collect several hundred participants to answer the same real-time intelligence questions, with roughly a hundred active questions at any one time. Each team uses a different approach. The two most common ways are to ask many people for estimates, and then average them somehow, or to have people trade in speculative betting markets. ACE is now in its second of four years. So, what have we learned?

First, we’ve learned that it helps to transform probability estimates into log-odds before averaging them. Weights can then correct well for predictable over- or under-confidence. We’ve also learned better ways to elicit estimates. For example, instead of asking for a 90% confidence interval on a number, it is better to ask for an interval, and then for a probability. It works even better to ask about an interval someone else picked. Also, instead of asking people directly for their confidence, it is better to ask them how much their opinion would change if they knew what others know.
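
As a minimal sketch of the log-odds averaging idea, here is some Python; the function name, the extremizing weight, and the example forecasts are my own illustrative assumptions, not anything fitted by the ACE teams.

```python
import math

def pool_forecasts(probs, extremize=1.0):
    """Average probability forecasts in log-odds space.

    probs: individual probability estimates for the same event.
    extremize: weight applied to the mean log-odds; values above 1 push
        the pooled forecast away from 0.5, correcting for the predictable
        under-confidence of simple averages.
    """
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-extremize * mean))

forecasts = [0.7, 0.8, 0.6, 0.75]          # hypothetical participant estimates
print(pool_forecasts(forecasts))            # plain log-odds average (~0.72)
print(pool_forecasts(forecasts, 1.5))       # extremized version (~0.80)
```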

Our DAGGRE team is trying to improve accuracy by breaking down questions into a set of related correlated questions. ACE has also learned how to make people better at estimating, both by training them in basic probability theory, and by having them work together in teams.

But the biggest thing we’ve learned is that people are unequal – the best way to get good crowd wisdom is to have a good crowd. Contributions that most improve accuracy are more extreme, more recent, by those who contribute more often, and come with more confidence. In our DAGGRE system, most value comes from a few dozen of our thousands of participants. True, these elites might not be the same folks you’d have picked via resumes, and tracking success may give better incentives. But still, what we’ve most learned about the wisdom of crowds is that it is best to have an elite “crowd.”


Miller’s Singularity Rising

James Miller, who posted once here at OB, has a new book, Singularity Rising, out Oct 2. I’ve read an advance copy. Here are my various reactions to the book.

Miller discusses several possible paths to super-intelligence, but never says which paths he thinks likely, nor when any might happen. However, he is confident that one will happen eventually, he calls Kurzweil’s 2045 forecast “robust”, and he offers readers personal advice as if something will happen in their lifetimes.

I get a lot of coverage in chapter 13, which discusses whole brain emulations. (And Katja is mentioned on pp.213-214.) While Miller focuses mostly on what emulations imply for humans, he does note that many ems could die from poverty or obsolescence. He makes no overall judgement on the scenario, however, other than to once use the word “dystopian.”

While Miller’s discussion of emulations is entirely of the scenario of a large economy containing many emulations, his discussion of non-emulation AI is entirely of the scenario of a single “ultra AI”. He never considers a single ultra emulation, nor an economy of many AIs. Nor does he explain these choices.

On ultra AIs, Miller considers only an “intelligence explosion” scenario where a human level AI turns itself into an ultra AI “in a period of weeks, days, or even hours.” His arguments for this extremely short timescale are:

  1. Self-reproducing nanotech factories might double every hour,
  2. On a scale of all possible minds, a chimp isn’t far from von Neumann in intelligence, and
  3. Evolution has trouble coordinating changes, but an AI could use brain materials and structures that evolution couldn’t.

I’ve said before that I don’t see how these imply a timescale of weeks for one human level AI to make itself more powerful than the entire rest of the world put together. Miller explains my skepticism:

As Hanson told me, the implausibility of some James Bond villains illustrates a reason to be skeptical of an intelligence explosion. A few of these villains had their own private islands on which they created new powerful weapons. But weapons development is a time and resource intensive task, making it extremely unlikely that the villain’s small team of followers could out-innovate all of the weapons developers in the rest of the world by producing spectacularly destructive instruments that no other military force possessed. Thinking that a few henchmen, even if led by an evil genius, would do a better job at weapons development than a major defense contractor is as silly as believing that the professor on Gilligan’s Island really could have created his own coconut based technology. …

Think of an innovation race between a single AI and the entirety of mankind. For an intelligence explosion to occur, the AI has to not only win the race, but finish before humanity completes its next stride. A sufficiently smart AI could certainly do this, but an AI only a bit brighter than von Neumann would not have the slightest chance of achieving this margin of victory. (pp.215-216)

As you can tell from this quotation, Miller’s book often reads like the economics textbook he wrote. He is usually content to be a tutor, explaining common positions and intuitions behind common arguments. He does, however, explain some of his personal contributions to this field, such as his argument that preventing the destruction of the world can be a public good undersupplied by private firms, and that development might slow down just before an anticipated explosion, if investors think non-investors will gain or lose just as much as investors from the change.

I’m not sure this book has much of a chance to get very popular. The competition is fierce, Miller isn’t already famous, and while his writing quality is good, it isn’t at the blockbuster popular book level. But I wish his book all the success it can muster.


AI Progress Estimate

From ’85 to ’93 I was an AI researcher, first at Lockheed AI Center, then at the NASA Ames AI group. In ’91 I presented a probability related paper at IJCAI, the main international AI conference. Back then this was radical – one questioner at my talk asked “How can this be AI, since it uses math?” Probability specialists created their own AI conference, UAI, to have a place to publish.

Today probability math is well accepted in AI. The long AI battle between the neats and scruffs was won handily by the neats – math and theory are very accepted today. UAI is still around though, and a week ago I presented another probability related paper there (slides, audio), on our combo prediction market algorithm. And listening to all the other talks at the conference let me reflect on the state of the field, and its progress in the last 21 years.

Overall I can’t complain much about emphasis. I saw roughly the right mix of theory vs. application, of general vs. specific results, etc. I doubt the field would progress more than a factor of two faster if such parameters were exactly optimized. The most impressive demo I saw was Video In Sentences Out, an end-to-end integrated system for writing text summaries of simple videos. Their final test stats:

Human judges rated each video-sentence pair to assess whether the sentence was true of the video and whether it described a salient event depicted in that video. 26.7% (601/2247) of the video-sentence pairs were deemed to be true and 7.9% (178/2247) of the video-sentence pairs were deemed to be salient.

This is actually pretty impressive, once you understand just how hard the problem is. Yes, we have a long way to go, but are making steady progress.

So how far have we come in the last twenty years, compared to how far we have to go to reach human level abilities? I’d guess that relative to the starting point of our abilities of twenty years ago, we’ve come about 5-10% of the distance toward human level abilities. At least in probability-related areas, which I’ve known best. I’d also say there hasn’t been noticeable acceleration over that time. Over a thirty year period, it is even fair to say there has been deceleration, since Pearl’s classic ’88 book was such a big advance.

I asked a few other folks at UAI who had been in the field for twenty years to estimate the same things, and they roughly agreed – about 5-10% of the distance has been covered in that time, without noticeable acceleration. It would be useful to survey senior experts in other areas of AI, to get related estimates for their areas. If this 5-10% estimate is typical, as I suspect it is, then an outside view calculation suggests we probably have at least a century to go, and maybe a great many centuries, at current rates of progress.
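
Here is a minimal sketch of that outside-view calculation in Python, assuming progress simply continues at the observed steady rate; the 5-10% figures are just the survey estimates above.

```python
# Outside-view extrapolation: if a subfield covered 5-10% of the distance
# to human-level abilities in 20 years, with no acceleration, how long
# until the remaining distance is covered at that same rate?

def years_remaining(fraction_done, years_elapsed=20):
    """Linear extrapolation at the observed rate of progress."""
    rate = fraction_done / years_elapsed        # fraction of the distance per year
    return (1 - fraction_done) / rate

for frac in (0.05, 0.10):
    print(f"{frac:.0%} done in 20 years -> about {years_remaining(frac):.0f} more years")
# 5% done  -> about 380 more years
# 10% done -> about 180 more years
```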

Added 21Oct: At the recent Singularity Summit, I asked speaker Melanie Mitchell to estimate how far we’ve come in her field of analogical reasoning in the last twenty years. She estimated 5 percent of the way to human level abilities, with no noticeable acceleration.

Added 11Dec: At the Artificial General Intelligence conference, Murray Shanahan says that looking at his twenty years experience in the knowledge representation field, he estimates we have come 10% of the way, with no noticeable acceleration.

Added 4Oct’13: At an NSF workshop on social computing, Wendy Hall said that in her twenty years in computer-assisted training, we’ve moved less than 1% of the way to human level abilities. Claire Cardie said that in her twenty years in natural language processing, we’ve come 20% of the way. Boi Faltings says that in his field of solving constraint satisfaction problems, they were past human level abilities twenty years ago, and are even further past that today.

Let me clarify that I mean to ask people about progress in a field of AI as it was conceived twenty years ago. Looking backward one can define areas in which we’ve made great progress. But to avoid selection biases, I want my survey to focus on areas as they were defined back then.
