This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt many errors are made in choices of when to analyze what data, how much, and by whom. Sometimes firms will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods have become “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine more when data is plentiful, and they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to help themselves and their organization appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that the recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly only useful on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.

  • http://praxtime.com/ Nathan Taylor (praxtime)

    Agree re big data angle. And agree that’s the correct reaction to your tweet.

    The one exception of this second boom in neural net/deep learning might be if NLP voice interface continues to improve as much in the next 5 years as it has in the last 5 years. Not talking AGI of course. But narrow domain specific uses for NLP. So talking to a help desk, travel agent, bank balances, stuff like that. Where the biggest payoff might be in industries resistant to tech and nearly impossible to navigate due to the bureaucracy. So government (how to pay taxes and fill out 1040, fill out a form for DMV, apply for a visa, pay traffic tickets) and healthcare (get asked a series of questions about why you want to see the doctor, get first line recommended procedures/tests), K-12/college education, and of course shopping (Ok Google, what car should I buy?). We tend to see an order of magnitude jump in computer tech applications when a new interface/input method opens up new possibilities. So command line > GUI > smartphone touch > voice(?). Each a big jump in scale. All the old stuff still around and doing fine of course. Just the new interface option opens up new kinds of adoption since the new is so much easier to use. Siri was introduced in 2011. Alexa and Google Home seem levels above that now in 2016. Another 5 years of grinding slog progress at the current rate could open up some large doors.

    If this happens, I’d argue your post above will be judged incorrect by 2021. Though to be fair you’re focusing on big data machine learning applications, and from that angle I completely agree. So I’m not sure of your position on (narrow/domain specific) voice interface either way.

    Nonetheless, I’d argue voice is a possible general purpose pipe improvement technology, even if it only works in specific domains. I’d say slightly >50% chance of this happening. So my inner Bryan Caplan wants to bet you $20 (or some nominal amount of your choosing) if you could come up with a testable/measurable way to argue voice interface will plateau instead of continuing its current rate of progress, and not have a noticeable economic impact by 2021. Maybe just ramping up then, but definitely won’t be considered an AI winter if that happens. Or maybe an AI winter, but NLP still an economically important shift in tech, which I’d still count as a win. Alternately you could argue voice will happen like I say, but not have any big economic impact as it’ll turn out to be only marginally better than current interfaces. Which I’d agree would match your post.

    I would wager against either one if you can think up an agreeable way to quantify.

    • http://overcomingbias.com RobinHanson

      If NLP works as well as you hope, just how big a fraction of the world economy does it become? Larger than the self-driving vehicle industry?

      • http://praxtime.com/ Nathan Taylor (praxtime)

        As I mentioned on twitter, excellent point. NLP can be a huge success as an interface, yet world economic impact might not be so high. Starting with this Benedict Evans post, which shows scale of smartphones compared to cars.

        http://ben-evans.com/benedictevans/2016/4/29/the-end-of-a-mobile-wave

        My main argument would be you have to pick one: a) claim AI winter or AI bust coming, or b) NLP takes off as a primary interface driven by machine learning and continues to attract best engineering talent and massive investment. If b, then I claim a is not true. Now I think you are implying that both a and b can be true if NLP is a big success but does not have large measurable world economic impact. Have to think about this, and a solid point. But I would stand by my “pick one” claim. Maybe I can find some data or forecast to formalize this more. Will look at it today or tonight. Thanks for engaging. Fine post! Just not sure it’s correct in regards to NLP.

      • http://overcomingbias.com RobinHanson

        My main claim is about not exploding to automate half of jobs in two decades. But I’d also bet on a conventional AI bust – declining investment for a while.

      • http://praxtime.com/ Nathan Taylor (praxtime)

        Been researching a bit on how you might distinguish an AI bust. Don’t think the Gartner hype cycle is predictive, but it is a useful post hoc tool for understanding what happened. And tech/VC investment often follows this pattern.
        https://en.wikipedia.org/wiki/Hype_cycle

        So hype cycle goes 1) tech trigger, 2) peak of inflated expectations, 3) trough of disillusionment, 4) slope of enlightenment, 5) plateau of productivity

        If this cycle is a baseline, then how about these three comparative variations off that baseline.

        A) Normal tech hype – baseline. Note that this baseline only works for a successful tech/product category. And that even in normal tech hype cycle investment decreases from top of hype.
        B) Normal tech hype – this is a variation where tech hype is for a failure. Basically dies at step 2.
        C) AI hype/winter. Historically for AI, the hype in step 2 is so high, and step 3 disillusionment so low, that investment just stops. So even when plateau of productivity hit, investment stops for 5 or 10 years.

        My point here is betting that tech investment will come down from current peak is not really an argument for AI winter. It’s arguing that ML is some sort of “normal” tech. So let me see if I can pull some investment/hype actual numbers. If so, maybe we could bet on how much ML will be like traditional AI winter profile, versus other sort of “normal” hype cycles in tech. If I can find this data, probably we can have a wager. As I suspect we differ enough that it’d be worth a bet.

      • hahvM

        I think businesses are starting to see their big, big data / AI investments not delivering the returns they’d hoped — e.g., that’s happening at Spotify, and probably at X.ai too.

        And fundamentally, data (and their related analyses #garbageInGarbageOut) are only good when they truly represent the object of interest; not when they *seem* to but actually don’t. And it seems as if the computer science glitterati running today’s AI show aren’t very good at unpacking those types of questions.

        This starts to give me the chills of at least an AI winter.

      • http://praxtime.com/ Nathan Taylor (praxtime)

        Ok. Reread your post two more times. Then looked up Wikipedia definition of AI Winter: “In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter.[2] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.”
        https://en.wikipedia.org/wiki/AI_winter

        I see your claim is not quite AI Winter. You are saying in a few narrow areas, like self driving cars, the current round of tech may very well succeed. But the larger loopy claims for remaking the world economy are wrong. So we can add NLP to that list, and if its economic impact is similar to self driving cars, then you are correct in your post. So I guess we agree.

        That said….if self driving cars continue to take off, and NLP does as well, then we won’t have a formal AI winter. That is, funding won’t collapse. Yes, overhyped areas will fall away. But there will be enough areas to push the current tech forward economically and make it viable. So in the end I guess I’m reacting to the title “AI Boom will also Bust” more than the argument in the post itself. If you want to argue for a funding collapse (formal definition of AI Winter), then I would still like some kind of wager if you are interested.

  • Elliot Olds

    I think referring to machine learning (ML) as “prediction tech” and speculating on the future size of the “prediction industry” implies that ML is much more specialized than it is. Sure, an ML system driving a car involves many predictions, but I wouldn’t call a self driving car a piece of prediction technology or a part of the prediction industry. Calling ML part of the “doing things industry” seems more accurate. The outputs of self driving cars are not predictions but completed tasks.

    • http://overcomingbias.com RobinHanson

      But isn’t the main reason that self-driving cars are feasible now and not twenty years ago that prediction tech is better?

      • James McDermott

        I agree with Elliot. We use regression, including linear regression, for a lot of tasks, but these tasks are not part of any “prediction industry”, as in “If so, the prediction industry would in a short time become vastly larger than it is today.”

        Suppose someone implements a reinforcement learning agent to automate parts of their gadget factory. Yes, there may well be a regression-type prediction in there to approximate the Q function or whatever it may be, but the “prediction industry” is not getting larger. Instead, the gadget industry is cutting costs by cutting jobs.

      • Elliot Olds

        Let’s assume that’s true.

        Part of your argument seems to be “the prediction industry is a small % of the economy, so improving prediction tech won’t be such a big deal.”

        Yet the example you give where ML looks like it’ll have a big effect is not in a “prediction industry” but in the transportation industry. You’ve apparently looked outside of the “prediction industry” in this case because self driving cars are a popular topic and are seemingly inevitable.

        So how do you know you’re not missing lots of other huge gains from ML by looking for gains in a “prediction industry” and not in the industries that ML would actually have effects in? (Such as manufacturing processes, legal research, food service, etc).

        The point is that even if the ML technique itself is ‘narrow’, it seems to unblock a bottleneck in a wide variety of things that are not part of a “prediction industry.” So making forecasts based on the size of the “prediction industry” will cause you to miss the vast majority of the value of ML.

      • Mark Bahner

        “But isn’t the main reason that self-driving cars are feasible now and not twenty years ago that prediction tech is better?”

        There have been massive improvements in many hardware aspects, as well as software:

        1) The cost of a 10 gigabyte hard drive 20 years ago was about $2000, and a 10 gigabyte flash drive would have cost far more than the price of a car. That’s if a 10 gigabyte flash drive had been available 20 years ago…which it wasn’t. Now a 10 gigabyte flash drive is a couple of dollars.

        2) The cost of a 12 megapixel camera 20 years ago was about $15,000. Now it’s less than $100.

        3) The price per gigaflop of computing power 20 years ago was about $40,000. Now it’s less than 40 cents.
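
        For scale, here is a quick back-of-the-envelope computation of the improvement factors implied by the rough figures above (a sketch using the approximate numbers quoted here, not precise data):

        ```python
        # Rough ~20-year price improvement factors, using the figures quoted above.
        # All numbers are approximations, not authoritative data.
        prices_then_now = {
            "10 GB storage ($)": (2000.0, 2.0),
            "12 MP camera ($)": (15000.0, 100.0),
            "1 gigaflop ($)": (40000.0, 0.40),
        }
        for item, (then, now) in prices_then_now.items():
            print(f"{item}: ~{then / now:,.0f}x cheaper")
        # 10 GB storage: ~1,000x; 12 MP camera: ~150x; 1 gigaflop: ~100,000x
        ```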

    • dbv

      A human driver predicts which way to turn the steering wheel or change gear or brake based on scene information coming at them through their eyes. A self-driving car makes thousands of predictions per minute.

      A search engine using ML predicts possible answers to a query. ML is prediction.

  • Paul Christiano

    You might be better able to sympathize with futurist AI enthusiasts if you thought of the [(really good RL) —> (agents learn how to think) —> (agents do general tasks as well as humans)] pathway in the same way you think of ems. Namely, (a) the value it produces has a similar (though not as extreme) prospect of being lumpy, and (b) it is mostly orthogonal to the usual process of codifying human knowledge and cognitive tools in software. For the young futurists most breathlessly excited about this round of AI, it is because they think that it is possible that this plan will work out.

    Presumably you would agree that (a) and (b) become true for suitably extreme “brute force search” approaches, e.g. literally rerunning evolution (since evolving a rat takes nearly as much time as evolving a human, and since redeveloping culture looks computationally much cheaper than rerunning evolution, so importing things from humans isn’t very important). It seems you should have some credence in contemporary deep learning being in that regime (Even if it is very unlikely to look anything like rerunning evolution. You might say that rerunning evolution is a ridiculous extreme, but it is worth pointing out that if Moore’s law continues and your long 200 year “AI” forecast is correct, then we will be able to cost-effectively simulate all of evolution long before we actually build AI.)

    Like you, my prior expectation would be for the usual process of codifying human knowledge and cognitive tools as software to carry the day. But I think that the last 60 years have seriously upset that expectation, mostly by the slow rate of progress. This makes it plausible that the a priori far-fetched emulation strategy should succeed, but it also makes it more plausible that the a priori far-fetched brute force search strategy should succeed.

    Unlike you, I am not so happy to generalize this slow progress to anything labelled “AI.” It seems you do not generalize from software to ems because you have a good enough understanding of how to build ems that you see them as not-usefully-analogous. I think that futurists excited about the current wave in ML feel similarly about software vs. ML.

    • http://overcomingbias.com RobinHanson

      I don’t understand your points a & b. I see myself as generalizing from the actual experience with the current wave of ML as used in real firms today, not from assuming it is like prior software. Are you suggesting that all is irrelevant, because ML might create its own new economy/ecosystem that doesn’t rely on existing firms?

      • Paul Christiano

        In general, you are happy to make em timeline forecasts that are unrelated to rates of software progress (e.g. your “200 years” AI timelines), even though firms would adopt ems and so in some sense they are contiguous with the current use of software by firms. This seems obviously correct to me. I am pointing out that people who expect human-level AI soon have a similar view about human-level reinforcement learning, and I think they are also correct.

        They may or may not be right about the timeline for the technology, but they are right to ignore the kind of argument in this post, and to use completely different arguments to try to figure out whether ML infrastructure is undervalued (whether the “bubble will burst”). Similarly, you would use completely different arguments to figure out whether technology for brain-scanning is undervalued, rather than trying to extrapolate from the way that microscopes are used in real firms today. (Of course, if microscopes were working well for firms that wanted to look at small things, you would use this as helpful data to estimate the rate of progress in microscopy, just as human-level AI enthusiasts consider successful applications of deep learning to be evidence about its efficacy.) It’s totally irrelevant whether ems will be adopted by existing firms or mostly used by new firms, and similarly it’s not relevant whether ML will create its own economy.

        It’s not clear if my comments are relevant to this particular post, since e.g. people who are most enthusiastic about AI aren’t talking about it as “prediction” tech. And of course the human-level RL optimists would agree that many applications of ML would be negative-value, just as an em optimist might agree that many applications of shiny new imaging tech aren’t really worth it, while still believing that imaging will ultimately lead to ems which will be a huge deal. But my comments are relevant if you are making an effort to understand the reasons why some smart young futurists you know are so enthusiastic about AI. (Many don’t put particular stock in the current deep learning wave, but nevertheless see there as being underlying AI technologies that will satisfy my properties [a] and [b].)

      • Joe

        I don’t see why such techniques would be especially lumpy. Why won’t the many intermediate products of such brute force techniques, before human-level AI, also be economically useful?

        Like Robin, your post made me imagine a separate side economy, running faster than our human economy, with advances in that economy slowly replacing jobs done by humans, but with none of its contents humanly comprehensible; just a black box building on itself, using its own internally generated mechanisms instead of ours. If this isn’t what you envision, then what?

      • Paul Christiano

        For example: if one goes by processing power or evolutionary timescales, the difficulty of creating lower animals’ brains might be nearly as large as creating human brains, while being radically less useful.

        The more general argument is that there is a process of cultural accumulation and tool-building, and that making AI in a way that doesn’t interact with that process is liable to not create much value until it is either (a) good enough to import human tools/culture, or (b) powerful enough to replicate those results. Either way you would get a lot of lumpiness (since getting to something 90% as good as human tools wouldn’t necessarily add much value).

      • Joe

        Why don’t you expect earlier, more primitive brains produced this way to also be economically valuable? Also, I’m not sure on why you’d need an especially high level to import human tools. All the software we use today is not remotely ‘intelligent’ in the human sense, yet manages to interconnect fine. Why wouldn’t our standard software be integrated with and used by these evolved minds from the beginning?

      • http://overcomingbias.com RobinHanson

        “brute force search strategy .. mostly orthogonal to the usual process of codifying human knowledge and cognitive tools in software” Not clear where you think this is happening if not in firms – perhaps in research labs?

        For ems you don’t expect any results until the three techs of computers, scans, and cell models reach enabling levels. You can track those techs, but not their impact via ems. You seem to be imagining some search for a single silver bullet that does everything, but with little in the way of useful precursors before the bullet is found.

      • Paul Christiano

        Robin: I imagine it happening in academia and at research labs in large software firms (to see why this might seem irrelevant: “where do you expect imaging progress to come from, if not firms?”).

        The story doesn’t depend in any way on silver bullets.

        If there were a convincing demonstration of flexible rat-level intelligence, would you consider that a reasonable measurement of our ability to optimize things-like-brains?

        Do you expect rat-level intelligence to be a significant fraction as valuable as similarly-cheap human-level intelligence, given that rat brains involve 0.1% of the computing power and much more than 0.1% of the optimization effort by nature? It looks to me like a rat wouldn’t be much more useful than a linear regression.

        If you think that we might soon build something like rat intelligence (as do the AI enthusiasts) using tools that face the same kind of difficulty curve as evolution (for which humans were only modestly harder to build than rats), then that seems to suggest the field might relatively quickly move from rat-level economic impact to human-level economic impact.

      • http://overcomingbias.com RobinHanson

        Your concept of intelligence is to me surprisingly context independent. Rather than getting smart by getting better at doing particular useful tasks, you imagine systems that do almost nothing useful getting “smarter” in the background until they are suddenly very very useful.

  • Joe

    I think much of the excitement about neural networks stems from their similarity to human brains. Since brains are also made of neural-network-like structures, to recreate each brain subsystem might just take a few tweaks from our existing ANN designs. And once we’ve done that, we will have reproduced all of the functionality of the brain, and mass automation can ensue.

    In other words it comes back to your “Brains Simpler Than Brain Cells?” post from a few weeks ago. I think many people just can’t imagine how brains couldn’t be simpler than brain cells – their model of the brain is that it’s just a huge lump of neurons, that spontaneously structure themselves to produce useful functionality. As evidence they point to the flexibility of the brain in how it can adapt to damage. One reason for this might indeed be if there really is no complexity to the brain on a macroscopic level. Another might be if, as you suggested in an earlier post, brain volume is more like resources than lines of code, and the software in our brains is capable of switching to use different resources when necessary.

  • Sarah Constantin

    This is my profession and I very much agree with this post.

  • DavidRHenderson

    Really excellent post, Robin.

  • Hashin Jithu

    Well written and to the point Robin, thank you for writing this!

  • Tim Tyler

    I’m inclined to see this post in the context of the “machine intelligence” and “brain emulation” folks bashing each other’s technical progress. Each team often denigrates the efforts of the other team. It’s a race to the future and nobody likes to see the other side get ahead. That’s not to say that Robin is wrong – but there’s a lot of motivated cognition on both sides, and when a member of one team says that the products of another team have serious limitations and no future, other people should take that with some pinches of salt.

  • hypnosifl

    With robots becoming rapidly better at performing relatively straightforward tasks in real-world environments (see here and here and here for some nice examples), isn’t it plausible that the majority of manufacturing work can be automated in the next couple decades or so? Likewise with most other relatively unskilled physical labor jobs like warehouse workers, people in construction, natural resource extraction like mining and the timber industry, and of course transportation jobs like truck driving. A lot of service jobs, like waiters, cleaning services, cooking, etc. could also be replaced in the near future. Basically I think the effects on what we generally think of as “blue collar” work could be huge, and humans are not really interchangeable learning machines–it’s not so obvious that the people who have lived their lives doing blue-collar work can easily retrain to become skilled at the types of jobs that require special intellectual, creative, or social skills (programmer, artist, and therapist for example). In an ideal world where anyone could retrain to do these types of jobs it might be true that the loss of other jobs would simply result in new jobs replacing them as in previous cases where automation eliminated certain types of jobs, but if people aren’t really so flexible, that might be a good reason for thinking “this time is different”.

    • Christian Kleineidam

      It’s interesting that you give Boston Dynamics as “robots becoming rapidly better at performing relatively straightforward tasks in real-world environments”. There’s a reason Google wants to sell it.

      • hypnosifl

        But do you think the reason they want to sell it is because they don’t in fact believe robots are becoming rapidly better at performing such tasks, as opposed to some other reason like not thinking this research will lead in the very near-term future (say, the next 5 to 10 years) to a household robot that’s actually more affordable than just hiring a housekeeper?

  • randcraw

    After a good look behind the curtain of Deep Learning, I’ve come to agree with Robin. Yes, DL has proven itself to perform gradient descent tasks better than any other algorithm. It maximizes the value in large data, minimizing error brilliantly. But ask it to address a single feature not present in the billion images in ImageNet, and it’s lost. (E.g. *Where* is the person in the image looking? To the left? The right? No deep net that was trained with ImageNet labels could say.) This is classic AI brittleness.
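
    To make the “minimizing error” framing concrete, here is a minimal, generic sketch of gradient descent fitting a line to data by repeatedly stepping parameters against the error gradient (toy data and parameters, purely illustrative):

    ```python
    # Minimal gradient descent: fit y ≈ w*x + b to noisy data by repeatedly
    # stepping the parameters against the gradient of the mean squared error.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)   # ground truth: w = 3, b = 0.5

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        err = (w * x + b) - y                      # prediction error
        w -= lr * (2 * err * x).mean()             # d(MSE)/dw
        b -= lr * (2 * err).mean()                 # d(MSE)/db
    print(round(w, 2), round(b, 2))                # converges near 3.0 and 0.5
    ```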

    With all the hoopla surrounding DL’s successes at single task challenges (mostly using labeled images), we’ve failed to notice that nothing has really changed in AI. The info that’s available from raw data remains as thin as ever. I think soon we’ll all see that even ginormous quantities of thinly labeled supervised data can take you only so far. Quickly it will become clear that a truly useful AI agent will need info that isn’t present in all the labeled images on the planet. In the end it still needs a rich internal model of the world that it can further enrich with curated data (teaching) to master each new task. And to do that, it needs the ability to infer cause and effect, and explore possible worlds. Without it, an AI will always remain a one trick pony.

    Alas, Deep Learning can’t fill that void. The relevant information and inferential capability needed to apply it to solve new problems and variations on them — these skills just aren’t there. To create a mind capable of performing multiple diverse tasks, like the kinds a robot needs in order to repair a broken toaster, I think we’ll all soon realize that DL has not replaced GOFAI at all. A truly useful intelligent agent still must learn hierarchies of concepts and use logic, if it’s to do more than play board games.

    • http://phailed.me/ Phailure

      I think you’re missing the point of DL. The new ML boom actually started when people realized that unsupervised learning is an essential aspect of ML. In fact, your entire complaint that traditional ML does not generalize to problems such as finding out if a person is looking to his left or his right (and there are learned feature representations, without having humans specify that these features are important or even that they mean something to us, that answer this exact query) is what the ML community is currently trying to solve: a general method of computing useful features for given data that works on all tasks related to the same problem representation. The DL renaissance began when people realized that, rather than hard-coding these features manually, we can instead force our objective to (implicitly) encode these features themselves. This is what the “depth” of these networks entails: a way to transform raw features (such as pixels) into useful and relevant features of that representation. We figured out how to interpret these “deep” nets in the 90s. In the late 2000s, we also figured out that the representations learned by the hidden layers of these deep nets actually work well for a multitude of other tasks on the same underlying environment. Features learned from natural language work well in prediction tasks, translation tasks, Q&A tasks, and a wide range of natural-language related tasks.
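
      As a minimal sketch of the reusable-representation idea described above (toy data, arbitrary layer sizes, and PyTorch assumed; not the commenter’s actual setup): a network trained on one task learns intermediate features, which can then be frozen and reused by a small model for a different task on the same kind of input.

      ```python
      # Train an encoder + head on task A; then reuse the frozen encoder's
      # learned features for task B with only a small linear probe on top.
      # Everything here (data, tasks, sizes) is made up for illustration.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      x = torch.randn(500, 20)                                        # raw "pixel-like" inputs
      y_a = (x[:, :10].sum(dim=1) * x[:, 10:].sum(dim=1) > 0).long()  # task A labels

      encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                              nn.Linear(64, 32), nn.ReLU())           # learns the representation
      head_a = nn.Linear(32, 2)                                       # task-A specific head
      opt = torch.optim.Adam(list(encoder.parameters()) + list(head_a.parameters()), lr=1e-2)
      loss_fn = nn.CrossEntropyLoss()
      for _ in range(300):                                            # train encoder + head on task A
          opt.zero_grad()
          loss_fn(head_a(encoder(x)), y_a).backward()
          opt.step()

      # Task B on the same inputs: keep the encoder frozen, fit only a linear probe.
      y_b = (x[:, :10].sum(dim=1) > 0).long()
      with torch.no_grad():
          feats = encoder(x)                                          # reused learned features
      probe = nn.Linear(32, 2)
      opt_b = torch.optim.Adam(probe.parameters(), lr=1e-2)
      for _ in range(300):
          opt_b.zero_grad()
          loss_fn(probe(feats), y_b).backward()
          opt_b.step()
      ```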

      Now, this isn’t saying that DL is a silver bullet. It’s just a technique of generalizing optimization tasks. To conflate ML and true artificial intelligence is to misrepresent the goals of ML right now. In general however, I agree with Robin, to an extent. There is an ML bubble, and it will inevitably burst. The flow of capital into ML R&D disproportionately outweighs its estimated utility. Sooner or later, DL will settle down into a circlejerk of finetuning hyperparameters. That doesn’t mean that it isn’t important, that’s just what the ebb and flow of the adoption cycle looks like.

      • randcraw

        I think we agree that Robin’s title is provocative and that while DL has done great things, it isn’t clearly sufficient to create HAL 9000. But I also think that last point is a really big admission that isn’t much discussed even among AI folk. Or maybe I’m just oblivious to such exchanges…

        Yes, I know we don’t yet know the practical limits to what DL can learn, but we do know many of the limits inherent in what neural nets do. Since the use of DL nets is fundamentally constrained in the same ways NNs are, we should have some good ideas of DL’s eventual limits in instantiating AI agents — how their embedded knowledge may be extracted and applied to tackle tangential problems that are soluble by such constituent info but were not intended by the net creator’s initial design.

        Until a robust algebra for ‘thought vectors’ arises, I’ll continue to believe (as Robin implies?) that DL nets are unlikely to be anywhere near informationally deconstructable into the many capabilities needed to undergird General AI, which is an end by which any ‘AI technology’ will forever be measured.

      • davidmanheim

        The question shouldn’t be whether “DL will settle down into a circlejerk of finetuning hyperparameters” – it’s how far it gets first. If we get 10-15 more years of significant progress first, it’s not impossible for it to achieve human or near-human performance at enough tasks that it does “remake the world economy”.

  • Peshgaldaramesh

    Hey Robin, do you have any solid evidence that deep learning will fail? Just curious.

    • Patrick Pekola

      But Robin didn’t argue that deep learning is going to fail?

      • Peshgaldaramesh

        He’s arguing that deep learning will have limitations that will prevent it from being incredibly disruptive. I’m looking for data and/or research as evidence rather than analogies.

      • davidmanheim

        I don’t understand your approach, or what you think you’re looking for. Any possible data about future trends is going to need to be used for out of sample prediction – since the future of a trend is, by definition, out of sample. That means that they depend on the model being used as much as the data – and the choice of model is simply an argument about the proper analogy.

    • hahvM

      evidence need not only take an empirical form

  • james_blunt

    LOL. You are joking.

    A quote from 20 years ago – Star Trek: First Contact (1996):
    Borg Queen: Small words from a small being, trying to attack what it doesn’t understand.

    Also:
    Borg Queen: Brave words. I’ve heard them before, from thousands of species across thousands of worlds, since long before you were created. But, now they are all Borg…
    Borg Queen: You are an imperfect being, created by an imperfect being. Finding your weakness is only a matter of time.
    Borg Queen: I am the beginning. The end. The one who is many. I am the Borg.
    Borg Queen: [to Picard] Watch your future’s end.

    The (dystopian) future IS Borg.
    Unless WW3 happens first: https://medium.com/@sudo_script/war-is-coming-dec11bd4c334#.2t01vw8tk

  • Pappa W

    I have just recently come across your blog, which I immediately started reading regularly. Your contrarianism and anti-alarmism are refreshing. I am leaning towards a similar conclusion on this latest wave of AI enthusiasm.

    Slightly off topic: since we humans are not always very good at judging each other’s humanity, the thought of a computer eventually passing the Turing test is not necessarily as important as it might seem. But is there a name for the possible event when a computer outperforms every human being on the Turing test? To me, that event seems more important, at least from a singularity point of view.

    • http://www.greenrd.org/ Robin Green

      The Turing Test is not a comparison of ability along one performance axis, it is a test of general ability to sound human. So to speak of an AI “outperforming” humans at sounding human is, I think, a mistake. We would expect illiterate humans to have great difficulties “passing” the Turing Test, of course, but apart from those people, and people with very low IQs, every human should easily be able to “pass” the Turing Test when pitted against a machine, so it is not a question of “performance”. I think it is more like being pregnant – either you sound human, or you don’t. Judges for the Turing Test should of course also be selected from people who aren’t very gullible, otherwise the entire exercise is rather pointless. But that’s true of all competitions that involve a high risk of fraud.

      • Peter David Jones

        > We would expect illiterate humans to have great difficulties “passing” the Turing Test, of course, but apart from those people, and people with very low IQs,

        What about people with low EQs, lack of humour, etc?

      • Pappa W

        So today’s computers only appear more human than low iq/eq individuals. But chess computers could only beat lousy players in the 80s, Watson’s predecessors got their asses kicked even in junior Jeopardy, computers sucked at face recognition in 2000, etc etc.

        Now imagine a time when computers are better at telling humans from computers than we are. Fascinating, and a little scary.

      • hypnosifl

        So today’s computers only appear more human than low iq/eq individuals.

        I think if you did a sufficiently long-term Turing test–say, a daily hourlong conversation for several months on end–then as long as the person was capable of verbal communication (they could use a voice recognition program if they were illiterate) and didn’t suffer from some issue like schizophrenia that caused them to speak in nonsensical word salad, then even a very low iq/eq individual could be fairly easily distinguished from any present-day AI.

  • zarzuelazen27

    Robin’s right. Most ML is actually very simple….you have two things A and B and you want to know the rule that shows how they are correlated. A ——-> B rule? That’s it! All the complex stuff is just people playing ‘signalling games’ in order to look impressive and get funding.

    We can also confidently predict that no one will listen to Robin. The young simply don’t listen to the old 😉

    My ‘big picture’ analysis indicates 6 main components are needed for AGI, and the machine learning/statistics revolution plausibly only covers 2 of them

    Math Component >>>> Implementation component

    Categorization >>>> Goals/Motivation

    Probability theory >>>> Pattern recognition, prediction (ML)

    Symbolic logic >>>> Optimization, planning

    So there are 3 levels, and machine learning only covers the middle row (2nd level). Therefore, ML can only get 1/3rd of the way towards AGI at most.

    Key point to note: It’s clear that the ‘control problem’ (goals/motivations) has little to do with probability theory/ML, but rather is strongly linked to Categorization/Concepts, the very area that’s currently only dimly understood.

  • Steven

    Good point about self-driving technology, which to date is stupendously uneconomic compared to its next lower substitute — railroads. Not even high end use — say, trucking on interstate highways — justifies replacing rail with self-driving tech. It should remain in the realm of back-up assist.

    • Srini

      We need to compare manually driven cars vs self-driving cars. Self-driving cars would give back an additional 1 to 2 hours/day, which could add up to productivity (or Netflix 😉 ). Why wouldn’t anyone opt for it? Cars vs Railroads is a different topic. There could be non-economic reasons for car adoption.

      • hahvM

        simply because people like to drive and be in control

      • Srini

        Yes. Question is what percentage? I used to have a 5-speed manual transmission car. Got tired of using it in my daily commute and switched to automatic. When you see everyone else enjoying a movie or working in a self-driving car next to you, would you still want to drive, especially during daily commutes?

      • hahvM

        It will all depend on a whole lot of things. For instance, my guess is that self driving cars will not be “aggressive” (for obvious reasons), which means that normal drivers (and pedestrians!) will probably be able to take advantage of the cars. Do you always want to be in the car that gets cut off? 😉 <3

        But also if you're in California, your perspective on driving, you must understand, is very different from the perspectives of those on the east coast.

      • Astaldaran

        Cars are one of the biggest killers in the first world. Self driving cars would not only eliminate most deaths and injuries with huge economic and social advantages but would also allow huge direct economic benefits. The number of cars on the road could be reduced heavily (most cars are only used a tiny fraction of the day, now cars can be used a lot more of the day). This would also reduce the amount of road needed, the capital costs of getting to work for poorer people, etc.

      • hahvM

        That calculus only holds if hacking potential is strictly held at bay. I write this from outside Manhattan, where a notion of self-driving cars in a city with one of the most robust public transit infrastructures in the world seems just masturbatorial.

      • Mark Bahner

        “I write this from outside Manhattan, where a notion of self-driving cars in a city with one of the most robust public transit infrastructures in the world seems just masturbatorial.”

        If the public transit infrastructure is so wonderful, why are the streets of NYC crowded with human-driven cars?

        Per wonderful Wikipedia’s review of ~30 U.S. cities, NYC has the highest percentage of workers using public transit (~55 percent) AND the longest commute time (~38 minutes).

        https://en.wikipedia.org/wiki/Transportation_in_New_York_City#/media/File:USCommutePatterns2006(2).png

        That doesn’t look like spectacular public transit to me.
        What autonomous vehicles are going to do for NYC is to provide door-to-door service without the need for parking. Autonomous vehicles will also completely eliminate “choke points” caused by such things as accidents, road construction, and bridges. It will be huge…even in NYC, the public transit capital of the U.S.

      • http://www.bluetriangletech.com/ Donald E. Foss

        Long commute times do not equate to inefficiency. There are other outside factors affecting the commute time, like distance. For subway cars to go faster, they must accelerate faster, requiring more energy usage, which increases costs; or have longer distances between stops, which reduces utility. NYC has a good blend of express and regular trains. I love London and its tube and train system, but express tube lines would be fabulous (I mean utterly brilliant!) to use, versus stopping at virtually every station.

      • hahvM

        Hunny, my friends and I are not the ones driving the cars. I can’t help it if the bridge and tunnel folks like driving into town.

        You are profoundly out of touch. Do you live in New York? Have you tried using public transit for a year?

        Do yourself a favor and do it and get back to me. You clearly do not live in a public-transit city.

      • hahvM

        “Autonomous vehicles will also completely eliminate “choke points” caused by such things as accidents, road construction, and bridges. It will be huge…even in NYC, the public transit capital of the U.S.”

        Again, will never happen, maybe in 150 years, if Manhattan isn’t under water.

      • http://www.bluetriangletech.com/ Donald E. Foss

        I don’t know if that last word was a typo, autocorrect, or written on purpose, but it’s hilarious!

      • http://www.bluetriangletech.com/ Donald E. Foss

        There are also economic and social effects from that scenario. It’s true that driving is the leading cause of death in Western countries. If we factor that large number back into the population, with the added population growth that comes from it, more economic productivity or options will be required to support the increased number of people.

        I clearly want to see people live longer, disease eradicated, minimal accidental and avoidable deaths. I also think of the implications to all strata of society, how they will all be supported, and their true quality of life.

        We can never escape the law of unintended consequences.

      • http://www.bluetriangletech.com/ Donald E. Foss

        Manual transmissions don’t compare well to the psychology of feeling in control. Why do people fear flying more than driving when driving is exponentially more dangerous? That same feeling of being in control of one’s destiny, even though it’s much deadlier. Humans become accustomed to things around them and fear the unknown.

        Driverless personal vehicles will take a very long time to achieve an 80% adoption rate, as in multiple generations. Driverless freight trucks and taxis are a different story.

    • Pauly Mole

      This is very true, self-driving cars still don’t solve the car problem. In fact, they’ll just make it worse.

      You have to wonder why more money isn’t being invested into railroad tech, cheaper fast trains etc.

    • arch1

      It seems possible there could be a blending of self-driving and rail technologies – imagine SD cars which can dynamically join/leave trains of cars, switch wheel types, etc.

    • http://www.bluetriangletech.com/ Donald E. Foss

      Railroads are a combination of both low and high tech. While the quality and cost associated with laying track may have improved, it is cost-prohibitive to lay track, and to run sufficient engines to pull freight cars, in order to bring freight all the way to its destination. Rail tracks are a single purpose medium, unlike roads, which can be used by many types of wheeled vehicles.

      Due to this, the hub methodology is used instead, and freight moves across shared mediums to their destinations, and either unloaded at a commercial establishment, or redistributed into smaller vehicles for delivery to individuals.

      The same additional learnings mentioned in a previous comment also apply to drones for home delivery.

      I still believe that the ripple effect from the minimization of human freight truck drivers will have a much larger impact than many imagine, more so than previous historical disruptions. The pace of discovery, implementation and impact has been steadily increasing, not remaining the same. The application of the discovery in adjacent areas usually has the biggest impact. The same economic principle applies here too.

  • Srini

    What is “new this time” is that computers/machines can see and hear, whether we call it AI or “Deep Learning” or “Machine learning”. So there is going to be more impact from Vision/Voice based applications than from data analytics/predictions. Eg. Self Driving Cars, Medical Diagnosis, Video Analytics, Autonomous machines/Robotics, Voice based interfaces. All of these combined could be as disruptive as the Internet itself.

    • Matthew

      The major factor here is we have no idea how fast AI will really progress. True AI would potentially start learning at a significantly faster and faster pace and COULD advance much quicker than we predict. We really aren’t talking about true AI, just machine learning, but these days people are calling that AI more and more. IF machine learning or other techs bring us true AI rapidly, it will entirely change the predicted pace of things, and we have no history of genetic or evolutionary programming by which to judge the expected progress.

      Things may happen much faster than we realize and, from my perspective, they are, but that doesn’t mean we will hit a plateau. Still, the nature of how machine learning does most of the work on its own through billions of cycles would be easy to scale up by thousands of times today’s levels. If coding becomes more automated while computing becomes cheaper, we can expect at least some rising probability that AI progress has exponential gains left to be made. We’ve gone from self-driving cars seeming impossible, to self-driving cars being on our roads, within a very small window of time. Watson is paging through academic research faster than all of humanity combined and already finding things we missed, and Watson is extremely immature technology.

      • http://www.bluetriangletech.com/ Donald E. Foss

        Do you have some examples of what Watson found that we missed? Not being snarky, I’m genuinely interested and curious.

      • Mark Bahner

        “Do you have some examples of what Watson found that we missed? Not being snarky, I’m genuinely interested and curious.”

        http://www.cbsnews.com/news/60-minutes-artificial-intelligence-charlie-rose-robot-sophia/

        Ned Sharpless: We did an analysis of 1,000 patients, where the humans meeting in the Molecular Tumor Board– doing the best that they could do, had made recommendations. So not at all a hypothetical exercise. These are real-world patients where we really conveyed information that could guide care. In 99 percent of those cases, Watson found the same the humans recommended. That was encouraging.

        Charlie Rose: Did it encourage your confidence in Watson?

        Ned Sharpless: Yeah, it was– it was nice to see that– well, it was also– it encouraged my confidence in the humans, you know. Yeah. You know–

        Charlie Rose: Yeah.

        Ned Sharpless: But, the probably more exciting part about it is in 30 percent of patients Watson found something new. And so that’s 300-plus people where Watson identified a treatment that a well-meaning, hard-working group of physicians hadn’t found.

        Charlie Rose: Because?

        Ned Sharpless: The trial had opened two weeks earlier, a paper had come out in some journal no one had seen — you know, a new therapy had become approved—

  • AnotherScaryRobot

    I think it’s probably true that the current AI boom won’t give us a huge number of new capabilities, but we should be careful not to confuse capabilities with applications.

    Take only the capabilities implied by self-driving cars — object recognition, real-time modeling and navigation of physical spaces, using behavior to infer intent.

    You can repackage these capabilities to automate visual QC processes on assembly lines, replace retail cashiers and shelf stockers, set up surveillance systems that could reduce the need for security or law enforcement personnel, automate warehouse operations, add metadata to footage to make human video editors much more productive, build augmented reality interfaces that understand the physical spaces around themselves well enough to visually integrate virtual objects into them… really, we could add to this list all day.

    When a claim is made that 47% of current employment is at risk, this isn’t actually a claim that machine learning algorithms can do everything 47% of the workforce does. We have many automation technologies other than machine learning algorithms. What machine learning unlocks, by letting machines understand irregular input better, is largely just the ability to employ these existing automation technologies more broadly. We’re not talking about a single class of algorithms coming to dominate the economy; we’re just talking about a widely applicable key enabling technology.

    Another example of such a technology might be, say, database software — it’s employed everywhere, and is implicated in trillions of dollars’ worth of economic activity. But we don’t attribute all of that activity to the database industry, or even think of most products that contain or use databases as database products.

  • Tukabel Kožmeker

    As usual almost all “big discussions” are “about nothing” – as it will be seen from the historical perspective (like dark age catholiban “scholars” discussing number of angels on the needletip, or dark age communist “scholars” discussing how guilty trotskyites are).

    You know, Evolution is about evolution of Intelligence, and the sole purpose of existence of humanimals is to create their (first nonbio) successor before (inevitable) self-destruction.

    It’s funny to observe how the society is basically governed by the same principles as 100-1000-10000 years ago (politico-oligarchical predators using various mindfucker shamans to live off the herd of mental herbivores – nowadays “voters” kept in line by “socialist religion”, cowardly even thinking they do good)… while the memetic supercivilization living in brains of less than one per mill gives these humanimals all the ideas, science and subsequent inventions, cool gadgets to the plebs and deadly power to the predators… for free and without any control.

    Already nukes were just-just and the coming nanobots will be orders of magnitude worse… so,please, Singularity, speed up, you do not have 100 years, but rather 20-40.

  • Mark Bahner

    “Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

    About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.”

    The problem there is that no one can visualize what “total U.S. employment” is like. No one can envision all the jobs, and how they would be vulnerable. It’s more valuable to just take a subset of the *top* (most common) jobs in the U.S., and to evaluate how vulnerable those jobs are to AI:

    http://markbahner.typepad.com/random_thoughts/2014/11/jobs-vulnerable-to-artificial-intelligence-part-2.html

    I came up with similar numbers…though in 30 years, rather than 20. But it’s a lot easier to evaluate/critique an analysis that’s only based on the top 15 job categories, rather than all jobs in the U.S.

  • http://folioverse.appspot.com/ omni scient

    http://i.imgur.com/FqobtB7.jpg

    It appears the article’s writer has not had any internet access for quite a while.

    Here is a sequence of cognitive fields/tasks where sophisticated neural models EXCEED mankind:

    1) Language translation (eg: Skype 50+ languages)
    2) Legal-conflict-resolution (eg: ‘Watson’)
    3) Self-driving (eg: ‘OTTO-Self Driving’ )
    5) Disease diagnosis (eg: ‘Watson’)
    6) Medicinal drug prescription (eg: ‘Watson’)
    7) Visual Product Sorting (eg: ‘Amazon Corrigon’ )
    8) Help Desk Assistance (‘eg: Digital Genius)
    9) Mechanical Cucumber Sorting (eg: ‘Makoto’s Cucumber Sorter’)
    10) Financial Analysis (eg: ‘SigFig’)
    11) E-Discovery Law (eg: ‘ Social Science Research Network.’)
    12) Anesthesiology (eg: ‘SedaSys’)
    13) Music composition (eg: ‘Emily’)
    14) Go (eg: ‘Alpha Go’)

    • Josh

      When I read this article, I understand it to be talking about the things that most businesses will be able to get value out of now and in the near future.

      This is a great list of things that deep learning may one day be able to help with, but I don’t think it’s a list of things that most businesses can use right now – have you ever interacted with a DL based help desk? Have you listened to the quality of the music composed by DL? We’re a long way out from these being acceptable.

      Just because you can list 14 preliminary attempts which produce OK demos, doesn’t mean there is widespread application.

      And exceed mankind? A lot of these don’t – are you trying to tell me that the compositions by Emily exceed Beethoven? The Beatles? I don’t think so, and I don’t think you’ll find many people in agreement if you do.

      P.S. Cucumber sorting? There’s a lot of demand for that! 😂

  • Laurie Paulin

    The impact of AI is always presented as such an absolute, e.g. ‘it’s going to take all jobs’ in a sector. The reality is that it doesn’t have to be that effective to wreak havoc. In terms of supply and demand, even if only 20% of jobs in a certain industry or demographic were ‘taken’, that would significantly impact salaries. If there was no alternative, I am sure most people would accept a 10%, 20%, 30% decrease in remuneration to stay employed, to be more attractive than the next person competing in a lowered demand scenario. In a similar vein, even if it only increases productivity by 20%, that is still significant and industry/government/armed forces/education/logistics/et al will be compelled to utilise it.

    So, no, I don’t think it’s going to bust. And I don’t think it has to be that effective to have profound implications for society. And for the sake of our children, we need to wake up to what it really means and regulate it.

    • Mark Bahner

      “And for the sake of our children, we need to wake up to what it really means and regulate it.”
      “Regulate” AI? What would that involve?

  • Carl Gold

    Thank you so much for raising this! I 100% agree. The “this time is different” mentality in the current AI wave will be proved wrong – this AI boom will also bust, and another AI winter will come. Here’s my take: Everyone is infatuated with the ability to predict, but most people overestimate the number of domains where prediction can be translated directly into an action with a positive ROI. You don’t just need to predict, you need to act in such a way as to influence the outcome. Usually that requires understanding of cause and effect. Guess what? Now you’re doing real science, not data science. Linear model with interpretable coefficients on cleaned up data sounds good all of a sudden! But okay, I’m going overboard to make a point. There is amazing progress being made in lots of areas and tons of great new applications coming. So I think the next AI winter may be more of a California winter, where lots of things continue to thrive and grow. I’m just saying the hype has outpaced the reality.
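
    As a toy sketch of the “linear model with interpretable coefficients on cleaned up data” alternative (made-up feature names and data, purely illustrative):

    ```python
    # Ordinary least squares where each coefficient has a direct reading:
    # "one unit more of this feature is associated with this much more outcome".
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    price = rng.uniform(5, 15, n)
    ad_spend = rng.uniform(0, 10, n)
    sales = 200 - 8 * price + 3 * ad_spend + rng.normal(0, 5, n)   # made-up ground truth

    X = np.column_stack([np.ones(n), price, ad_spend])             # intercept, price, ad_spend
    coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
    print(dict(zip(["intercept", "price", "ad_spend"], coef.round(2))))
    # e.g. {'intercept': ~200, 'price': ~-8, 'ad_spend': ~3} -- directly readable
    ```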

  • Scott Porter

    Many cases of ML can be replaced by linear regression, but if so, you probably aren’t doing the right analysis. People ask me if I’m a data scientist. Sure, but that’s not the focus. I’m a complexity scientist first. I completely agree with some of the posts that mention that there are only certain use cases where you can get away with just prediction. Many more you have to figure out how the system works or you will make the wrong decisions.

  • Pingback: 2016 wrap-up | betweenbeats

  • Pingback: More Y Combinator Drama

  • Pingback: More Y Combinator Drama – Insurance

  • Pingback: Revue de presse de janvier - Blog Arolla

  • Pingback: AI could be hugely impactful on jobs but we can compare to past innovations and dot.com era

  • Pingback: Intelligence artificielle : art ou artifice ? -

  • Pingback: Intelligence artificielle : art ou artifice ? – latitude77

  • Pingback: Machine Learning and Artificial Intelligence have limits that few have yet appreciated | Smash Company

  • Pingback: Issue #80 | H+ Weekly