Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, an MIT professor of both Aeronautics and Astronautics and of the History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. …

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft, and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage more engagement with and response to contrary arguments, but unless these are global institutions others may prefer to free-ride and not contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.

  • IMASBA

    “If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation…”

    Depends on your definition of “fully” self-driving. All serious schemes I’ve seen have GPS navigation, with the cars having maps of the roads in their memories (and sometimes additional infrastructure in or near the roads). In most places (especially the major population centers of the world) the car could get you from your home to your work while you’re sleeping in the back seat, but it might not always successfully navigate a donkey- and motorcycle-ridden unofficial dirt road in Mogadishu (at least not for an additional couple of decades).

    “But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually market their products without mentioning competing products”

    Probably…

  • Lord

    I do agree these arguments often take the form of throwing up straw men and knocking them down rather than engaging them, and this sounds like another. Instead of yea or nay, it would be more informative if they attempted to place bounds and estimate magnitudes. Even if not fully autonomous, the result could be sufficiently different to qualify as a wholesale displacement. It would be far more interesting to estimate the turnover of capital stock, providing a guide to how new technology enters the market: how rapidly, which jobs are displaced and how others are transformed, and what the measured tipping points are, though much of this would likely end up proprietary.

  • FuturePundit

    As I said in a Tweet: There are too many books and too few readers. I’d make a bigger reader contribution if only I didn’t have to work for a living.

    There is the argument that full autonomy is necessary because drivers will stop paying attention and won’t shift their attention back fast enough if needed. You can hear that argument from some autonomous vehicle developers who’ve studied the behavior of their test drivers.

    So then is it achievable? Is the past predictive? Well, the past is full of devices developed for much smaller markets and with many Moore’s Law undoublings. Go back 10-12 years and a device developed then had about 1/2^5 the computational power of today’s (five doublings at roughly two years each, i.e. about 1/32). Devices developed for deep space also had severe electric power budgets. I say that as someone who has code executing outside Earth’s gravity well.

    There are also multiple definitions of “fully autonomous”. It depends on your requirements. With the same accident rate as humans? With one tenth the accident rate of humans? With one one-hundredth the accident rate of humans? On all roads, or only on roads identified as having good road painting and signage? In all weather conditions, or only dry? Obviously I could go on.

    Why these requirements differences matter: if you are legally blind, or too old with bad coordination, or have MS or Parkinson’s disease, but could buy a car that would take you some places some of the time, would you buy it? Fully autonomous under some conditions seems like a great improvement for all those people. If they had to move to a street with good signage near roads with good signage, I’d guess a lot of them would move in order to gain some degree of mobility.

    Similarly, suppose you operate a fleet of long-haul trucks. Freightliner says they can sell you a truck that is autonomous for 90% of the Interstate highway system as long as there is no snow or severe rain, and that the truck will pull over and stop when conditions get bad. You going to buy? You will need to send a driver out to bring the truck in from the highway after a couple-thousand-mile trip. Is this need a deal breaker? I think not. I think you can staff for that need.

  • J.j. Cintia

    This guy Mindell is a putz. In the 1960s they had a computer the size of an entire room which didn’t have the capacity of a handheld device today. Most of the people with “degrees” have their heads full of worthless pseudoscientific crap like Global Warming Hysteria about saving owls and that old conundrum of how to make life fair for everyone. Computers are now smart enough to recognize faces and objects. Soon they will be able to talk to you and walk around on a robotic frame. I imagine they’ll not be programmed in psychobabble and actually have useful skills.

    • https://www.facebook.com/app_scoped_user_id/1026609730/ Jim Balter

      Dunning-Kruger nailed you.

  • Daniel Carrier

    I disagree with the point about cars. The high usage of, and danger involved in, non-autonomous cars mean that there will be a large budget for getting everything right, and they exist in a controlled environment. We’ll likely get the software to work 99.9% of the time, and then fix roads for the other 0.1%. Also, I consider them “fully autonomous” when someone who doesn’t know how to drive can get around fine. There’d still be places where you need a human to drive, but in general the driver is eliminated. It’s not like a plane, where they just do nothing most of the time. They rarely need to be present.

    That being said, I agree with the more general case. Automation will slowly get rid of jobs. If we adopt a basic income we’ll likely see a rapid elimination of jobs, but it will be jobs we stop bothering with, not ones getting automated. And unlike Hanson, I think that if there’s ever superintelligent AI, that will be rapid and everything will suddenly be automated, but that’s clearly not what’s happening right now. Cars will only get fully automated because they can cheat.

  • http://overcomingbias.com RobinHanson

    I see many commenters here eager to write that critical review of the Mindell book, but not willing to actually read the book before writing it! Please, READ first, THEN review.

    • consider

      My points have been the same as Elliot’s points. I also don’t want to read the book after listening to an hour-long EconTalk interview where he supposedly gave his best arguments. We can review an in-depth interview just as we can review a book.

    • https://www.facebook.com/app_scoped_user_id/1026609730/ Jim Balter

      Does that go for the latest book on astrology, as well?

      The fact is that no one here has said anything about wanting to write a critical review of the book … you’re simply misrepresenting what people actually are saying.

  • charlies

    one random counter-example:

    Piketty was fairly thoroughly “engaged” by the economics discipline the year after he published his book.

    • http://overcomingbias.com RobinHanson

      But that was mainly by people within the same discipline.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        It was surely more than that. But I gather that, while you write of the “intellectual” world, your scope is limited to the academic world.

      • charlies

        When you used “on the other side” in the post, I thought you were referring to those who disagree with Mindell’s conclusions, not people from different disciplines. You highlighted his credentials as being presumptively impressive to those in closely related disciplines (i.e., engineers).

        Similarly, when you state that “there is just no debate” within “our intellectual world,” I did not sense a distinction between intra- versus inter-disciplinary debate.

        In addition to criticism of Piketty within the academic economics establishment, the constant sniping among economics bloggers is another example of engagement.

    • dat_bro06

      legit counter-example, but inequality has very broad, visceral appeal; the average person/pol perks up at proposed policy innovation along the lines of a more aggressive redistribution of wealth (i.e., “the system is unfair, time to fix it”)

      • IMASBA

        This, and any big decision on the subject also has immediate real-world consequences for tax policy and therefore people’s personal wealth.

      • charlies

        So, by implication you are saying that engagement is “missing” from the debate raised by the Mindell book because it does not have real-world consequences?

        In that case, I don’t think the missing engagement reflects much dysfunction.

        Your claim may also have hit on the truth: people speculating on the far future to no concrete effect probably don’t engage each other as much as people who have something at stake.

  • Chris Hibbert

    My guess would be that the people who expect automation to be pervasive don’t think that the past is useful in the way that Mindell expects. Of course full automation didn’t happen in any of the fields Mindell surveys: we haven’t developed decent AI yet. Once we do, all those fields in which pundits said it was close will tumble quickly, and self-driving cars will join the rush. The lack of automation in the past just isn’t a good argument against its imminent development.

    I haven’t read the book. Robin, if the author had addressed the possible arrival of AI, I’m pretty sure you’d have mentioned it. So the automation proponents you’re talking about seem likely to dismiss the book as another by a technical writer who doesn’t see how rapidly things will change in the near future. Does the history really have much to add to the discussion if it doesn’t consider the possibility and timing of improvements to autonomous agents?

    • http://overcomingbias.com RobinHanson

      A big part of the skepticism is about whether it makes sense to posit an upcoming “arrival of AI” vs. incremental improvements in particular abilities.

      • zarzuelazen

        As far as I can tell, a system consisting of no more than 27 independent expert systems (for 27 domains) achieves full generality. The 27 domains are:

        Aesthetics, Algebra, Analysis/Calculus, Business/Economics, Chemistry, Communication/Art, Concept Learning, Data Communications/Networking, Data Modeling, Decision Theory, Discrete Math, Engineering, Field Theory/Physics, Geometry, Information Theory, Normative Ethics, Operating Systems, Probability Theory, Psychology, Robotics, Sociology, Programming, Solid State Physics, Symbolic Logic, Thermodynamics/Mechanics, Virtual Reality, Virtue

        Now for sure, it is no trivial matter to design 27 expert-systems powerful enough to handle each of these domains, but it’s not a task that is hugely complex either. It’s a small-enough number of domains that I think a single team could do it.

      • zarzuelazen

        I’m going to post a few big key conjectures here relevant to artificial intelligence, in an attempt to greatly hasten the arrival of FAI (Friendly Artificial Intelligence). In total, I make 6 conjectures.

        Conjectures:

        (1) For data modeling, you only need 3 levels of recursion for full reflection, and no more: Object, Meta-, and Meta-Meta. Remember: 3 levels and no more.

        (2) The structure of knowledge itself is fractal! The above insight about 3 levels of description applies across all levels of abstraction, and across multiple domains!

        For example: start at the very highest level of abstraction about reality. Now perform decomposition of the structure of knowledge. Search for 3 general levels of description, and apply recursive data modelling; this yields 3 core domains: Math, Physics, Minds – each new knowledge domain has a direct correspondence to its logical analogies: Object, Meta, and Meta-Meta. Apply 3 recursions: you will finish with the 27 domains (3^3) I listed. At this point, your data modelling is giving you enough information to create FAI!

        To solve AI alignment, you need the following four key conjectures (which are implied by the results of the data modelling described above):

        (3) The notion of ‘utility’ needs to be generalized. Remember the 3 levels of recursion needed for reflection. It applies to the knowledge domain ‘Intelligent Agents’ itself! The implication is: Decision Theory is superseded by Information Theory, and the correct (most fundamental) measure of value is not ‘Utility’, it is ‘Complexity’.

        (4) The notion of ‘probability’ needs to be generalized. Remember the 3 levels of recursion needed for reflection. It applies to the knowledge domain ‘Logic’ itself! The implication is: Probability Theory is superseded by Category Learning (Categorization), and the correct (most fundamental) measure of truth value is not ‘Probability’, it is ‘Coherence’.

        Two more big conjectures are needed to be able to find the general forms of information theory and category learning (and hence to find the required new measures of ‘Complexity’ and ‘Coherence’).

        (5) Information Theory to date is seriously incomplete, because most proposed measures of ‘complexity’ to date have only been applied to static data (information). The key insight is to find a new generalized complexity measure that applies to dynamic processes (knowledge). The key insight is that knowledge is not a thing, it is a process! (The correct representation is as ‘programs’, not ‘data’.)

        (6) Category Learning to date is seriously incomplete, because most proposed measures of ‘similarity’ to date have only been applied to static data (features). The key insight is to find a new generalized similarity measure that applies to dynamic processes (models). The key insight is that ‘concepts’ are not things, they are processes! (The correct representation is as ‘programs’, not ‘data’.)

      • zarzuelazen

        I’m adding just a few additional remarks on artificial intelligence.

        At long last, I now fully understand what all the artificial intelligence researchers are missing!

        You need 3 levels of description (3 levels of recursion) for a logic powerful enough to achieve full reflection, and current best theories of epistemology (i.e. Bayesian reasoning) only carry us to the 2nd level.

        The 3 levels are a consequence of going meta- and applying reflection to the knowledge domain ‘Logic’ itself. By this I mean that the structure of logic is hierarchical, and the hierarchy is based on ‘level of abstraction’. Each level has an associated ‘measure’ of truth-value, listed below:

        1st level: Boolean logic (True/False)

        2nd level: Probability value (0-1)

        3rd level: Conceptual coherence (categorization measure)

        I intuitively realized long ago that Categorization superseded Probability Theory. But I didn’t quite have the right measure of truth-value – for a long time I thought the correct measure was a ‘similarity’ metric (based on analogical inference), but that wasn’t quite right. The notion of ‘similarity’ needed to be generalized to deal with dynamic (causal) processes, not just static data features. Once that is done, the correct measure of truth-value turns out to be ‘conceptual coherence’, which I define thusly:

        *Conceptual coherence: the degree to which a concept coheres with (integrates with) the overall world-model.

        All statements of the form ‘outcome x has probability y’ can be converted into statements about conceptual coherence, simply by redefining ‘x’ as a concept in a world-model. Then the correct form of logical expression is: ‘concept x has coherence value y’. Probability values are just special cases of coherence (the notion of coherence is more fundamental than probabilities).

        I have finally cracked the answer to the question that other logicians have failed to answer for hundreds of years: ‘What are probabilities anyway?’!

      • zarzuelazen

        Key passages supporting the notion that ‘coherence’ is more fundamental than ‘probability’:

        Source: ‘Internet encyclopedia of philosophy’, ‘The Theory-Theory of Concepts’, Section (2) (a) Origins of the view

        “Fourth, Theory-Theory is often motivated by the hypothesis that certain concepts (or categories) have a kind of coherence that makes them seem especially non-arbitrary (Murphy & Medin, 1985; Medin & Wattenmaker, 1987).”

        “Insofar as these explanatory relations among properties are represented, concepts themselves are more coherent, reflecting our implicit belief in the worldly coherence of their categories.”

        “Theories are the conceptual glue that makes many of our everyday and scientific concepts coherent.”

  • Elliot

    (Copying this from my FB response for those who don’t follow you on FB..)

    I have not read the book, but I’ve listened to Mindell’s EconTalk episode and read another interview with him. Ideally I would read the book, but I don’t have infinite time, and what I heard in the interview didn’t make me think the book would be worthwhile. There are two big things that he gets wrong in those interviews:

    (1) He doesn’t seem to appreciate the revolution happening right now with machine learning / deep learning. His AI experience appears to be heavily weighted toward symbolic AI, aka “good old fashioned AI” / GOFAI, where researchers tried to create intelligence mostly by writing code. This has never worked very well. For now, machine intelligence arises through data, not code. Mindell’s model of how self driving cars will work is highlighted in the EconTalk interview when he says “And to claim that the person who thought the problem through, again, years in advance from the comfort of a cubicle or testing lab somewhere had imagined every possible scenario and perfectly pictured every possible thing that can happen, is just a false claim.” This isn’t how self driving AI is being developed. You don’t have some guy coding up detailed rules about when the AI should take certain actions; you have very general machine learning algorithms / goals / constraints and lots of data, with more new data continuously collected as self driving vehicles operate, which gets fed back into regularly updated machine learning models.

    Mindell talks about a situation where a stop sign gets knocked over, which he thinks would confuse an AI more than a human. Yet a future AI system would know that a stop sign should have been there if some vehicle using that AI system had ever been at that intersection before. This gives the AI a huge advantage over a human, because for a human to know something is wrong, that specific human would need to have experience with that intersection. The AI can also more easily pay attention to more details, like the absence of stop signs on the perpendicular road, and realize something is wrong. It’s not that hard for an AI to learn to look for such things every time it goes through an intersection, given enough data. How often do humans driving through intersections check for this?
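
    A minimal sketch of the kind of map/perception cross-check described above (hypothetical code; the names and toy map format are illustrative assumptions, not any real self-driving stack):

    ```python
    # Hypothetical illustration: a fleet-shared map records what signage past
    # traversals observed at each intersection. The car compares today's
    # detections against that prior and treats a mismatch (e.g. a knocked-over
    # stop sign) as a cue to behave as if the sign were still there.

    EXPECTED_SIGNS = {
        # intersection id -> signs seen on previous traversals by any fleet car
        "elm_and_3rd": {"stop_sign_northbound", "stop_sign_southbound"},
    }

    def missing_signs(intersection_id, detected):
        """Return signs the shared map expects here that perception missed."""
        return EXPECTED_SIGNS.get(intersection_id, set()) - set(detected)

    # Today's pass: the camera detects no stop sign at all.
    gap = missing_signs("elm_and_3rd", detected=[])
    if gap:
        print("Map/perception mismatch:", sorted(gap), "- treat as stop-controlled")
    ```

    A mismatch doesn’t prove the sign is gone (it might just be occluded), but it is a cheap, fleet-wide way to run that “check for this” at every intersection.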

    Mindell’s argument is basically that since GOFAI didn’t work very well in the past, AI will never work that well in the future (at least for a very long time), despite rapid advances in machine learning.

    (2) For some reason David keeps asserting that AIs have to be perfect to be adopted. For instance, from the EconTalk interview:

    “as long as that driverless car works perfectly under all conditions, everywhere all the time”

    “technology that has no possibility of failing…”

    “that approach is an approach where you have to solve the problem 100% perfectly to do it at all”

    This makes no sense. All that is required for us to adopt fully autonomous vehicles is that the AI is as good as human drivers (or close enough).

  • BJ Terry

    Perhaps you are getting at the same thing, but it could be that the lack of public debate on this topic simply mirrors the structure of the institutions that are making decisions. Within a company that is considering automated driving efforts, there is certainly debate about whether projects are worthwhile to pursue, but there is no reason for there to be a public debate about it because the public has no say in these decisions at this stage. On the other hand, when a topic has to do with public policy, there is more extensive public debate on the topic, so long as there are large enough groups on each side.

    Academic discourse is potentially similar. As long as you don’t have to compete directly for funding with a research team out there in the world, you can both just do your own thing even though it conflicts. The more clearly they have to compete and the more clearly their ideas conflict, the more likely it is that there is engagement. In fields where conflict is more obvious, you would expect more engagement (e.g. what is dark matter made of) than in fields where conflict is not obvious (e.g. various deconstructions of Shakespeare can all be valid within the philosophy of deconstruction).

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Intellectualism atrophies because funders replace adherents.

  • lump1

    This is a comment only about the “extreme environments” analogy, not about Mindell’s book:

    We should notice that in the extreme environments cited, the planes/submarines/probes are either one-off systems or items of very limited production runs. They also do reasonably high-stakes work, the sort of work that justifies assigning a trained and salaried *expert human* for near-constant oversight.

    Autonomous cars, on the other hand, would have comparatively huge production runs, which could pay back the initial cost of developing sensors and software that would allow for full autonomy. I’m saying that the spacecraft and submarine people didn’t have much of a reason to attempt full autonomy, so it’s no wonder they didn’t achieve it. Even highly autonomous factories still have some people working there. Why not automate their jobs too? It’s not necessarily because we can’t. It’s that doing the work of automating the jobs of those twelve people would cost more than the twelve salaries. If it’s not a question of twelve but 12,000, the matter may be worth revisiting.

    Autonomous cars will also not have the luxury of a highly trained human pilot to expertly take over for them. This fact makes them unlike anything discussed in the analogy. Why not instead use the dishwasher as an analogy? Why am I not given the option for “expert interactivity” for more optimal pan degreasing results? Instead, I just shut the opaque door and wait for the beep. That’s exactly what full automation looks like once we’ve had time to adjust. Mindell might be smart, and he might even be right, but this “extreme environments” analogy had better not be his best argument.

    • stevesailer

      The elevator would be another example.

      I gather that Google is pondering a car that would be automated for commuting on the road between San Francisco and Silicon Valley, but manual if you want to drive to the mall.

    • TheBrett

      Good point about the dishwasher, although it does require human interactivity – it needs you to put in the dishes and soap, and put away the clean dishes.

      • https://www.facebook.com/app_scoped_user_id/1026609730/ Jim Balter

        Likewise you have to put yourself and your luggage into and out of your automated car. Talk of “full automation” is meaningless without defining the boundaries.

  • TheBrett

    “This Time It’s Different” is pretty much a standard belief among those who believe that automation is going to lead to mass joblessness. Kevin Drum over at Mother Jones says it every time someone points out the history of automation and mechanization, and Brad DeLong says something similar. It’s very irritating because it can’t really be refuted, any more than someone saying “this time it’s different” in response to someone criticizing their belief that the Moon will turn into blue cheese tomorrow.

    In the meantime, I’ll just stick with what’s been shown by history: automation replaces particular human tasks and even human jobs, but it doesn’t reduce overall human employment unless it’s literally wrecking people’s lives (i.e. a high rate of folks in the jobs are becoming disabled and unable to work anymore).

  • JW Ogden

    Having to drive cars is much more costly in life, time and money than having to pilot aircraft and space vehicles, so much more can be spent on solving it.

  • David Mindell

    I am the author, and I appreciate Robin’s column; I am also eager for an informed debate with proponents of full autonomy. Somehow people think that because I’m an historian, the book is making an historical argument — it’s actually about how people (including me) work with robotics and automation today, in extreme environments, and what we can learn from them. As one reader comments below, these tend to be places with expensive systems and expert users, and that point is addressed in the book.

    Also, one comment below refers to my comments about driverless cars working perfectly all the time — that’s not my argument; that’s the argument of proponents of having no possibility for human input into the car at any moment. Anything less, and we need to think about handoffs, which are worthwhile to work on. The book is not about GOFAI, and it addresses current work in machine learning etc., but these systems are still heavily structured by human intentions in ways the book goes into in detail.

    By the way, I have had these conversations in private with important proponents of full autonomy in driving, and have invited them to debate in public, but they have so far declined. The book makes no predictions about the future; it simply says that if we look closely at how we use robotics in real environments, then there are lessons to be learned for design as we move forward. The real argument to be had is whether the mass of empirical examples in the book illustrates fundamental phenomena or whether “this time is different.” As an engineer, one likes to think that data about the world carries at least as much weight as faith in notions of “progress,” which we know are ever changing and hard to predict.

    • Michael Vassar

      I’m a proponent of full autonomy in driving, though I’m certainly primarily known for my position on AGI. I would be happy to debate in public if you’re interested, though I’d probably prefer to simply discuss, as debate tends to be fairly pointless.

      • https://www.facebook.com/app_scoped_user_id/1026609730/ Jim Balter

        It’s certainly pointless if one is not willing to be intellectually honest, to allow being shown wrong, and to abandon one’s position.

    • Elliot

      RE: driverless cars working perfectly:

      The argument that you make several times in the EconTalk podcast is that fully autonomous cars would need to be perfect for full autonomy to be a viable option.

      When Russ asked you about the approach that Google is taking to self driving cars, you said “that approach is an approach where you have to solve the problem 100% perfectly to do it at all.”

      Those in favor of fully autonomous self driving cars are not making the argument that they in fact will be perfect; we’re making the argument that they don’t need to be perfect. They just need to be “good enough”, because humans are not perfect. The appropriate comparison is to human drivers, not to perfection.

      In 2013 there were 32,719 motor vehicle deaths in the US according to Wikipedia. Imagine that switching to fully autonomous cars would lead to only 100 deaths per year. In this case they’re not perfect, but it’s still clear that we’d want to switch to them.

      RE: GOFAI

      I’m not saying that you’re intentionally confining your argument to GOFAI, just that your arguments are what I’d expect from someone whose AI background was primarily in GOFAI. If you have arguments that apply to modern machine learned systems I’d love to see them. I haven’t found you making these arguments in interviews.

      RE: the vastly different economics of self driving cars vs. underwater robots or spacecraft:

      I’d be curious to see you argue that this shouldn’t be a strong argument against the relevance of the examples you cite. You say it’s in your book, but I think if you want people to take your argument seriously enough to read your book you should offer some accessible comment on this.

  • Pingback: Links: Books, energy, Ferrante, spying, housing, coffee, dignity and more! « The Story's Story

  • Pingback: Praxtime 2015 year end review. My favorites in science, tech, econ, pop culture. Grading and making predictions. | Praxtime by Nathan Taylor

  • free_agent

    In regard to “fully autonomous” automobiles as compared to aircraft, submarines, etc., the *market* is different: eliminating the pilot from a commercial flight isn’t going to reduce the cost tremendously, so there’s no incentive to come up with socially tolerable ways to do so, even though it’s technically possible now. But making it possible for all the people who aren’t capable of driving to travel driving-level distances will be enormously valuable. This suggests that the payoff (both to vehicle manufacturers and to social structures) will be high enough to cause the deployment of fully autonomous vehicles.

  • andremp

    I’d like to offer one possible reason that people are choosing not to engage this argument: valuations.

    For a company like Uber, its valuation depends on the belief that full automation is a near-term possibility. Mindell’s arguments, if they got traction, would call that valuation into question. The amount of money at stake is shifting the discussion to the domain of marketing.

    A general rule that market leaders follow is to never compare their product to a competitor’s, because doing so might introduce the competitor’s product to more people. Right now the “market leading” idea is one of full automation, and its proponents are choosing not to engage Mindell’s arguments lest they get any additional exposure.

  • Rafal Smigrodzki

    Claims to the uniqueness and irreplaceability of human cognition have been advanced repeatedly over the last century, but progress in computer science has narrowed the domains where humans still reign. Just as religionists have their “God of the gaps,” so defenders of humanity have their “Human in Charge.”

    I have not read the Bible and probably I will not read Dr Mindell’s book. I admit to being somewhat narrow-minded.

    • zarzuelazen

      Yes, after reading up on the work done in AI this year, I’ve drastically shortened my estimated time to AGI. At the beginning of the year I still thought it was 40-50 years away. Now I believe that it could be as little as 5 years away!

      The ‘big boys’ have moved in and they’re throwing billions at AI. You have Elon Musk and co. putting up $1 billion, and you have the billions being thrown at machine learning by Facebook and Google, who appear to be racing each other. The US military (DARPA) has now moved in as well, throwing $15 billion at machine learning research!

      According to the FLI report by Richard Mallah: “Those in the field acknowledge progress is accelerating year by year.” Simple architectures in deep learning are matching or beating human performance across multiple domains.

      Until the end of this year, skeptics could still have argued that deep learning has serious limitations (for instance: it needed thousands of training examples to work). However, the recent work in human concept learning demonstrates that a different approach to machine learning called Bayesian Program Learning (BPL – where concepts are represented as probabilistic programs) can overcome all the limitations of deep learning, and can match or beat human concept-learning performance, learning from single training examples just like humans do.

      AGI is going to be here in a few years at the most, not the decades most experts thought.

  • RD457FF22H

    Air travel can be completely automated now. The reason a human has to be in the chain is the lack of a safe failure mode for the automation. It’s hard to imagine completely overcoming that issue with cars, but maybe it’s possible.

    The “other side” ignores the failure-mode issue because what doesn’t happen doesn’t make the news. Even though automation failures that would result in loss of the aircraft are quite common, they’re no big deal as long as there’s a human there to take over.

  • Pingback: Debate is not about Debate | askblog

  • Mark Bahner

    I haven’t read the book, but I’d be happy to bet David Mindell that a Level 4 automation car will reach mass production (>10,000 vehicles per year) prior to 2030.

    I’ll even give him 2-to-1 odds on a bet of up to $100 (because I think the actual time of achievement will be more like 4-10 years).

    I’ll write more as I have time.

  • Mark Bahner

    Here are a few reasons I think that full automation, rather than partial automation, is the end-point for automobiles:

    1) Fully automatic vehicles can drive at very high speeds with very close following distances. This can be accomplished because large numbers of fully autonomous vehicles can communicate with one another and have much faster reaction times than humans. If something goes wrong in that situation, humans would be unlikely to make things better.

    2) Fully autonomous vehicles should also allow cars to pass at 90 degree angles at intersections without stoplights. Humans would also be unlikely to improve the situation if they intervened in that instance.

    3) Fully autonomous vehicles will allow transportation-as-a-utility, eliminating the need for vehicle ownership, and greatly reducing the cost per mile traveled (because one vehicle could easily be on the road 10+ hours per day). If humans stay in the loop, this will be less likely, because no one wants to lend their car to an unsafe driver.

    4) Vehicle autonomy will improve at “Moore’s Law” rates, whereas humans will never be substantially better drivers. In approximately 8 years, a computer costing $1000 will be capable of approximately 1 petaflop, and 10 years after that the performance will be close to 1000 times better. Similarly, cars with memories of terabytes and then petabytes will be available in less than two decades. Human capabilities will become so inferior to computers that it won’t make any sense to ever have human control. (A quick sanity check of these growth rates follows below.)
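
    As that sanity check, a back-of-envelope sketch (the doubling periods here are assumptions for illustration, not data from this thread):

    ```python
    # Multiplicative improvement after `years` of steady performance doubling.
    def growth_factor(years, doubling_period_years):
        return 2 ** (years / doubling_period_years)

    # "Close to 1000 times better" in 10 years implies roughly a one-year
    # doubling period, faster than the classic 18-24 month Moore's Law pace:
    print(round(growth_factor(10, 1.0)))  # 1024
    print(round(growth_factor(10, 1.5)))  # ~102
    print(round(growth_factor(10, 2.0)))  # 32 (cf. FuturePundit's 1/2^5 above)
    ```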

    I’ve ordered David Mindell’s book, but I don’t expect his arguments for limited autonomy to overcome the many reasons why full computer control will be superior.

  • Mark Bahner

    Regarding (comparative!) safety, and David Mindell’s comments on EconTalk:

    Russ Roberts says: “…You are suggesting that that tradeoff will never be attractive–I think you are suggesting that tradeoff will never be attractive enough to give up full autonomy. And I think what Google and Tesla and others, and to some extent Uber are betting on is that we’ll get so close that we’ll save so many lives that it will be a huge improvement.”

    David Mindell responds: “Yeah. You know–there’s no evidence that we’re going to save lives yet. There may well be. But again, we know a lot about accidents. We know a lot about aviation accidents and we know a lot about car accidents. And it is indeed true that a high proportion of the lives lost and the accidents in automobiles are caused by human error. But what we know a lot less about is how people drive under normal circumstances. And people are extremely good at sort of smoothing out the rough edges in these systems: the stop sign maybe is knocked over or a traffic light isn’t working; and people have a way to kind of muddle through those situations.”

    The statement, “There’s no evidence we’re going to save lives yet,” is pretty silly. There’s *abundant* evidence that vast numbers of lives will be saved. There are 30,000+ road-related fatalities every year in the U.S. alone, and more than 10,000 of those are due to driving under the influence. So those are 10,000+ fatalities that could be avoided by fully autonomous vehicles.

    For every life that could obviously be saved by full autonomy, David Mindell needs to postulate one or more fatalities that would be caused by fully autonomous vehicles versus vehicles partially or fully driven by humans.

    I know of no one else who studies autonomous vehicles who thinks they will not save lives. When virtually everyone disagrees with a person, the burden of proof is on that person to explain why virtually everyone else is wrong. I’ve ordered David Mindell’s book, but from his EconTalk interview, I doubt he will be able to meet this burden of proof.

  • Mark Bahner

    “But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side!”

    Where are the dozens of reviews (that contain no thoughtful responses ;-))?

  • Pingback: Linky Friday #154: Whisky, Sexy, Freedom | Ordinary Times

  • Mike Powers

    “I don’t think there’s any way for complex systems to be automated,” he said, sitting at a traffic light and waiting for it to change.

    “Clearly there comes at point at which you simply *have* to have humans in the loop”, he said as he hit the accelerator and his automatic transmission shifted.