40 Comments

"I don't think there's any way for complex systems to be automated," he said, sitting at a traffic light and waiting for it to change.

"Clearly there comes at point at which you simply *have* to have humans in the loop", he said as he hit the accelerator and his automatic transmission shifted.

So, by implication you are saying that engagement is "missing" from the debate raised by the Mindell book because it does not have real world consequences? In that case, I don't think the missing engagement reflects much dysfunction. Your claim may also have hit on the truth: people speculating on the far future to no concrete effect probably don't engage each other as much as people who have something at stake.

"But looking over dozens of reviews Mindell’s book in the 75 days since it would was published, I find no thoughtful response from the other side!"Where are the dozens of reviews (that contain no thoughtful responses ;-))?

Regarding (comparative!) safety, and David Mindell's comments on Econtalk:

Russ Roberts says: "...You are suggesting that that tradeoff will never be attractive--I think you are suggesting that tradeoff will never be attractive enough to give up full autonomy. And I think what Google and Tesla and others, and to some extent Uber are betting on is that we'll get so close that we'll save so many lives that it will be a huge improvement."

David Mindell responds: "Yeah. You know--there's no evidence that we're going to save lives yet. There may well be. But again, we know a lot about accidents. We know a lot about aviation accidents and we know a lot about car accidents. And it is indeed true that a high proportion of the lives lost and the accidents in automobiles are caused by human error. But what we know a lot less about is how people drive under normal circumstances. And people are extremely good at sort of smoothing out the rough edges in these systems: the stop sign maybe is knocked over or a traffic light isn't working; and people have a way to kind of muddle through those situations."

The statement, "There's no evidence we're going to save lives yet," is pretty silly. There's *abundant* evidence that vast numbers of lives will be saved. There are 30,000+ road-related fatalities every year in the U.S. alone, and more than 10,000 of those fatalities are due to driving under the influence. So those are 10,000+ fatalities that could be avoided by fully autonomous vehicles.

For every life that could obviously be saved by full autonomy, David Mindell needs to postulate one or more fatalities that would be caused by fully autonomous vehicles versus vehicles partially or fully driven by humans.

I know of no one else who studies autonomous vehicles who thinks they will not save lives. When virtually everyone disagrees with a person, the burden of proof is on that person to explain why virtually everyone else is wrong. I've ordered David Mindell's book, but from his Econtalk interview, I doubt he will be able to meet this burden of proof.

Here are a few reasons I think that full automation, rather than partial automation, is the end-point for automobiles:

1) Fully automatic vehicles can drive at very high speeds with very close following distances. This can be accomplished because large numbers of fully autonomous vehicles can communicate with one another and have much faster reaction times than humans. If something goes wrong in that situation, humans would be unlikely to make things better.

2) Fully autonomous vehicles should also allow cars to pass at 90 degree angles at intersections without stoplights. Humans would also be unlikely to improve the situation if they intervened in that instance.

3) Fully autonomous vehicles will allow transportation-as-a-utility, eliminating the need for vehicle ownership, and greatly reducing the cost per mile traveled (because one vehicle could easily be on the road 10+ hours per day). If humans stay in the loop, this will be less likely, because no one wants to lend their car to an unsafe driver.

4) Vehicle autonomy will improve at "Moore's Law" rates, whereas humans will never be substantially better drivers. In approximately 8 years, a computer costing $1000 will be capable of approximately 1 petaflop. And 10 years later, performance will be close to 1000 times better. Similarly, cars with memories of terabytes and then petabytes will be available in less than two decades. Human capabilities will become so inferior to computers that it won't make any sense to ever have human control.
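The growth claim in point 4 can be sanity-checked with a quick compound-growth calculation. A minimal sketch, where the doubling period is my assumption: "close to 1000 times better" in 10 years implies compute per dollar doubling roughly once a year, since 2^10 = 1024.

```python
# Sketch of the compound-growth arithmetic behind point 4. The doubling
# period is an assumption, chosen so that 10 years yields ~1000x (2**10).
def growth_factor(years, doubling_period_years=1.0):
    """Multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

print(f"After 10 years: ~{growth_factor(10):,.0f}x")   # ~1,024x
print(f"After 18 years: ~{growth_factor(18):,.0f}x")   # ~262,144x
```

Whether hardware alone translates into better driving is of course a separate question, but this is the arithmetic the "Moore's Law rates" claim rests on.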

I've ordered David Mindell's book, but I don't expect his arguments for limited autonomy to overcome the many reasons why full computer control will be superior.

I haven't read the book, but I'd be happy to bet David Mindell that a Level 4 automation car will reach mass production (>10,000 vehicles per year) prior to 2030.

I'll even give him 2-to-1 odds on a bet of up to $100 (because I think the actual time of achievement will be more like 4-10 years).
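For concreteness, here is how that bet prices out from the commenter's side. This is a sketch; the stake split is my interpretation of "giving 2-to-1 odds on up to $100", i.e. risking $200 to win $100.

```python
# Expected value of the offered bet, assuming "2-to-1 odds on up to $100"
# means the offerer risks $200 against Mindell's $100.
def expected_value(p_win, win_amount, lose_amount):
    return p_win * win_amount - (1 - p_win) * lose_amount

# The bet is positive for the offerer whenever p_win > 2/3.
print(round(expected_value(0.90, 100, 200), 2))  # 70.0
```

So offering those odds signals a belief well above a two-thirds chance that Level 4 mass production arrives before 2030.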

I'll write more as I have time.

RE: driverless cars working perfectly:

The argument that you make several times in the econtalk podcast is that fully autonomous cars would need to be perfect for full autonomy to be a viable option.

When Russ asked you about the approach that Google is taking to self driving cars, you said "that approach is an approach where you have to solve the problem 100% perfectly to do it at all."

Those in favor of fully autonomous self driving cars are not arguing that they will in fact be perfect; we're arguing that they don't need to be perfect. They just need to be "good enough", because humans are not perfect. The appropriate comparison is to human drivers, not to perfection.

In 2013 there were 32,719 motor vehicle deaths in the US according to Wikipedia. Imagine that switching to fully autonomous cars would lead to only 100 deaths per year. In this case they're not perfect, but it's still clear that we'd want to switch to them.
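The arithmetic behind that comparison is trivial, but worth making explicit. A minimal sketch using the two figures from the paragraph above (the 100-death figure is the comment's illustrative hypothetical, not a projection):

```python
# The bar for full autonomy is the human fatality count, not zero.
human_deaths_2013 = 32_719       # US motor vehicle deaths, 2013 (per the comment)
hypothetical_av_deaths = 100     # the comment's illustrative number

lives_saved = human_deaths_2013 - hypothetical_av_deaths
print(f"Hypothetical annual lives saved: {lives_saved:,}")  # 32,619
```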

RE: GOFAI

I'm not saying that you're intentionally confining your argument to GOFAI, just that your arguments are what I'd expect from someone whose AI background was primarily in GOFAI. If you have arguments that apply to modern machine learned systems I'd love to see them. I haven't found you making these arguments in interviews.

RE: the vastly different economics of self driving cars vs. underwater robots or spacecraft:

I'd be curious to see you argue that this shouldn't be a strong argument against the relevance of the examples you cite. You say it's in your book, but I think if you want people to take your argument seriously enough to read your book you should offer some accessible comment on this.

Air travel can be completely automated now. The reason a human has to be in the chain is the lack of a safe failure mode for the automation. It's hard to imagine completely overcoming that issue with cars, but maybe it's possible.

The "other side" ignores the failure-mode issue because what doesn't happen doesn't make the news. Even though automation failures that would result in loss of the aircraft are quite common, they're no big deal as long as there's a human there to take over.

Claims to the uniqueness and irreplaceability of human cognition have been advanced repeatedly over the last century, but progress in computer science has narrowed the domains where humans still reign. Just as religionists have their "God of the gaps", so defenders of humanity have their "Human in Charge".

I have not read the Bible and probably I will not read Dr Mindell's book. I admit to being somewhat narrow-minded.

I'd like to offer one possible reason that people are choosing not to engage this argument: valuations.

For a company like Uber, valuation depends on the belief that full automation is a near-term possibility. Mindell's arguments, if they gained traction, would call that valuation into question. The amount of money at stake is shifting the discussion into the domain of marketing.

A general rule that market leaders follow is to never compare their product to a competitor's, because doing so might introduce the competitor's product to more people. Right now the "market leading" idea is one of full automation, and its proponents are choosing not to engage Mindell's arguments lest they get any additional exposure.

In regard to "fully autonomous" automobiles as compared to aircraft, submarines, etc., the *market* is different: Eliminating the pilot from a commercial flight isn't going to reduce the cost tremendously, so there's no incentive to come up with socially tolerable ways to to so, even though it's technically possible now. But making it possible for all the people who aren't capable of driving to travel driving-level distances will be enormously valuable. This suggests that the payoff (both to vehicle manufacturers and the social structures) will be high enough to cause the deployment of fully-autonomous vehicles.

Dunning-Kruger nailed you.

Does that go for the latest book on astrology, as well?

The fact is that no one here has said anything about wanting to write a critical review of the book ... you're simply misrepresenting what people actually are saying.

Likewise you have to put yourself and your luggage into and out of your automated car. Talk of "full automation" is meaningless without defining the boundaries.

It's certainly pointless if one is not willing to be intellectually honest, to allow being shown wrong, and to abandon one's position.

This, and any big decision on the subject also has immediate real-world consequences on tax policy and therefore people's personal wealth.
