Regulating Self-Driving Cars

Warning: I’m sure there’s a literature on this, which I haven’t read. This post is instead based on a conversation with some folks who have read more of it. So I’m “shooting from the hip” here, as they say.

Like planes, boats, submarines, and other vehicles, self-driving cars can be used in several modes. The automation can be turned off. It can be turned on and advisory only. It can be driving, but with the human watching carefully and ready to take over at any time. Or it can be driving with the human not watching very carefully, so that there would be a substantial delay before the human could take over. Or the human might not be capable of taking over at all; perhaps a remote driver would stand ready to take over via teleoperation.

While we might mostly trust vehicle owners or passengers to decide when to use which modes, existing practice suggests we won’t entirely trust them. Today, after a traffic accident, we let some parties sue others for damages. This can improve driver incentives to drive well. But we don’t trust this to fully correct incentives. So in addition, we regulate traffic. We don’t just suggest that you stop at a red light, keep to one lane, or stay below a speed limit. We require these things, and penalize detected violations. Similarly, we’ll probably want to regulate the choice of self-driving mode.

Consider a standard three-color traffic light. When the light is red, you are not allowed to go. When it is green you are allowed, but not required, to go; sometimes it is not safe to go even when a light is green. When the light is yellow, you are supposed to pay extra attention, since a red light is coming soon. We could similarly use a three-color system as the basis of a three-mode system for regulating self-driving cars.

Imagine that inside each car is a very visible light, which regulators can set to green, yellow, or red. When your light is red you must drive your car yourself, even if you get advice from automation. When the light is yellow you can let the automation take over if you want, but you must watch carefully, ready to take over. When the light is green, you can mostly ignore driving, such as by reading or sleeping, though you may watch or drive if you want.

(We might want a standard way to alert drivers when their color changes away from green. Of course we could imagine adding more colors, to distinguish more levels of attention and control. But a three-level system seems a reasonable place to start.)
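To make these rules concrete, here is a minimal sketch in Python; the names and encoding are invented for illustration, not part of any proposed standard:

```python
from enum import Enum

class Light(Enum):
    RED = "red"        # human must drive; automation is advisory only
    YELLOW = "yellow"  # automation may drive; human watches, ready to take over
    GREEN = "green"    # automation may drive; human may read or sleep

# The most permissive driver behavior allowed under each color.
ALLOWED = {
    Light.RED:    "human drives (automation advice only)",
    Light.YELLOW: "automation drives, human supervises",
    Light.GREEN:  "automation drives, human may disengage",
}
```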

Under this system, the key regulatory choice is the choice of color. This choice could in principle be set differently for each car at each moment. But early on the color would probably be set the same for all cars and drivers of a given type, in a particular geographic area at a particular time. The color might come in part from a broadcast signal, with the light perhaps defaulting to red if it can’t get a signal.
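Continuing the sketch above, the fail-safe resolution might look like this; the bucket key of (car type, area, hour) and the broadcast format are again just assumptions for illustration:

```python
from datetime import datetime
from typing import Optional

DEFAULT_COLOR = Light.RED  # fail-safe: with no signal, the human drives

def current_color(car_type: str, area: str, now: datetime,
                  broadcast: Optional[dict]) -> Light:
    """Resolve the in-car light from a (possibly missing) broadcast.

    `broadcast` maps (car_type, area, hour) buckets to colors; if the
    signal is absent, or this bucket is missing, default to red.
    """
    if broadcast is None:  # lost the regulatory broadcast entirely
        return DEFAULT_COLOR
    return broadcast.get((car_type, area, now.hour), DEFAULT_COLOR)
```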

One can imagine a very bureaucratic system to set the color, with regulators sitting in a big room filled with monitors, like NASA mission control. That would probably be too conservative, and would fail to take local circumstances sufficiently into account. Or one might imagine empowering fancy statistical or machine learning algorithms to make the choice. But most any algorithm would make a lot of mistakes, and the choice of algorithm might be politicized, leading to a poor choice.

Let me suggest using prediction markets for this choice. Regulators would have to choose a large set of situation buckets, such that the color must be the same for all situations in the same bucket. Then for each bucket we’d have three markets, each estimating the accident rate conditional on a particular color being set. Assuming that drivers gain some direct benefit from paying less attention to driving, we’d set the color to green unless the expected difference between the green and yellow accident rates became high enough. Similarly for the choice between yellow and red.
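As a sketch, the per-bucket decision rule might look like the following; the margin values are placeholders that regulators would set to reflect how much drivers value paying less attention:

```python
# Market prices interpreted as expected accident rates (e.g., accidents
# per million vehicle-miles) conditional on each color being set.
GREEN_MARGIN = 0.5   # tolerable excess accident rate of green over yellow
YELLOW_MARGIN = 0.5  # tolerable excess accident rate of yellow over red

def choose_color(rate_green: float, rate_yellow: float,
                 rate_red: float) -> Light:
    """Pick the most permissive color whose expected extra accidents
    stay within the regulator-chosen margin."""
    if rate_green - rate_yellow <= GREEN_MARGIN:
        return Light.GREEN
    if rate_yellow - rate_red <= YELLOW_MARGIN:
        return Light.YELLOW
    return Light.RED
```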

Work on combinatorial prediction markets suggests that it is feasible to have billions or more such buckets at a time. We might use audit lotteries and only actually estimate accident rates for some small fraction of these buckets, using bets conditional on such auditing. But even with a much smaller number of buckets, our experience with prediction markets suggests that such a system would work better than either a bureaucratic or statistical system with a similar number of buckets.
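An audit lottery could work roughly as follows: bets settle against the measured accident rate only in the small random fraction of buckets that get audited, and are called off (refunded) otherwise. This sketch uses an invented quadratic payout purely for illustration; a real market would use a proper scoring or settlement rule:

```python
import random
from typing import Optional

AUDIT_PROB = 0.001  # fraction of buckets whose accident rate is measured

def pick_audited_buckets(buckets):
    """Randomly choose the small subset of buckets to actually measure."""
    return {b for b in buckets if random.random() < AUDIT_PROB}

def settle_bet(stake: float, forecast: float, audited: bool,
               measured_rate: Optional[float]) -> float:
    """Settle a bet conditional on the audit lottery.

    Forecast and measured rate are both accident probabilities in [0, 1].
    If the bucket was never audited, the bet is called off and the stake
    refunded; otherwise the payout shrinks with squared forecast error.
    """
    if not audited:
        return stake  # called off: full refund
    return stake * max(0.0, 1.0 - (forecast - measured_rate) ** 2)
```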

Added 1p: My assumptions were influenced by the book Our Robots, Ourselves on the history of automation.

  • http://www.jessriedel.com Jess Riedel

    I know this is side-stepping the intellectual exercise, but do you think self-driving cars will stay in the skill range (good enough to drive in some situations, but significantly worse than humans in others) where this scheme is useful? If that period lasts less than, say, a decade, this system probably won’t be built before it’s obsolete.

    Yes, automation systems have been improving slowly over the past decades, but once training data is continuously streaming in from 100 million cars, the improvement rate should jump. I could see the transition period being anywhere from 5 to 30 years.

    • http://overcomingbias.com RobinHanson

      Our Robots, Ourselves argues that planes, submarines, etc. have been within this intermediate range for a long time.
      http://www.overcomingbias.com/2015/12/missing-engagement.html

      • http://www.jessriedel.com Jess Riedel

        Fair enough, although note that metro systems have exceeded it (tech-wise, if not actually implemented for institutional reasons). https://en.wikipedia.org/wiki/List_of_automated_urban_metro_subway_systems

        In some respects, cars are closer to metro systems than to planes and submarines, since both run on dedicated, constrained routes.

      • sflicht

        I remember that blog post, and I wonder if anyone has stepped up to the plate with a serious response. For example, planes, subs, etc. are complicated and piloted by highly trained experts. These characteristics are not necessarily shared by ordinary car driving, so it’s not clear to me that the insights from those examples generalize to the case at hand.

        Personally, when it comes to the superiority of self-driving cars versus human drivers, I don’t think “rapid improvement in automation” is a necessary condition. We might already be at the point where they’re superior for 90% of drivers in 90% of cases, which means that, as a first approximation, “green light all the time” could be a pretty good policy.

      • JW Ogden

        Another thing to consider is that the market for self-driving cars may be much bigger than the expense of pilots for those other vehicles. The cost of everyone’s time spent driving is huge, so you could spend far more on automating driving and it would still save more than it costs. By contrast, the amount spent on pilots, for example, is the maximum possible savings from complete automation of flights.

  • arch1

    Accidents that matter a lot are pretty rare. Would this fact affect the design of such a system?

    • http://overcomingbias.com RobinHanson

      You’d want the system to have sufficient scale that this risk didn’t overwhelm speculators. Averaged over enough traffic, the variance in accident rates should be tolerable.

  • free_agent

    It seems like it would be far simpler to implement a system where your insurance company sets what color the light is. And it certainly gives them the right incentives!

    Also, you don’t mention that the color of the light has to be visible from outside the car, so that the police can enforce your obedience to it.

    • http://overcomingbias.com RobinHanson

      Your insurance firm doesn’t internalize all the effects of your driving on others.

  • Robert Koslover

    Does politics cause problems with our traffic signals currently? Seems to me that one could follow a process similar to that used in adopting industry standards, such as the IEEE standards. These can include committees and solicited comments, if needed. One does not necessarily need prediction markets, nor government, to get a bunch of people to agree on a common standard, for the convenience of all parties. And such standards can be (and often are) updated, if they are not sufficiently convenient.

  • Tristan Slominski

    Here is a paper on how this system fails. Basically, the vehicle-human system is likely to converge on a state where humans are incapable of taking over driving functions when the vehicle turns the light red: https://www.ise.ncsu.edu/nsf_itr/794B/papers/Bainbridge_1983_Automatica.pdf

  • http://don.geddis.org/ Don Geddis

    You seem to be assuming that the accident rate for human-operated cars is less than for automated driving. And so the question is, how much “worse” can the automated control be, and still have society “allow” the human to back off.

    But I wonder. There are lots of cases of bad drivers: drunk drivers late at night on New Year’s Eve, sleep-deprived drivers just trying to stay awake until their next stop, old drivers unwilling to give up their keys, etc. Perhaps these can be some of your “buckets”, but some of these categories may involve non-public information (how much have the old person’s physical abilities degraded? how much sleep have you gotten recently?).

    I suspect that the real societal problem is one of morality and responsibility. Even if the automated car is factually safer (fewer fatal accidents, on average) … the problem is that humans excuse the mistakes of other humans, but they won’t excuse the mistakes of cold, unthinking machines. When an automated car saves the life of its passenger by driving on the sidewalk and plowing into an innocent child pedestrian, civil society will be outraged and demand that the evil machines that chose to kill a human be outlawed.

    The real problem is when the machines are safer than the humans, but still not perfect. Humans demand far more perfection from machine control than they do from human control.

  • sflicht

    For this specific problem, I’m not confident that prediction markets would outperform a statistical system. An optimized statistical model would take into account idiosyncrasies of the vehicle itself and information from its sensors. If manufacturers were mandated to provide all this info to traders, then eventually sophisticated market participants would take advantage of it and incorporate it into the PM prices. But adding this extra layer of regulatory burden is itself a political obstacle (manufacturers would rightly be protective of their trade secrets, etc., and scared of looking bad). And the usual problem of liquidity (getting the market off the ground) is magnified in this case, since the noisy signal from a low-liquidity market without sophisticated quant traders would be vastly inferior to the statistical modeling approach applied directly to the vehicle data.

    On top of which, I think most people who’ve thought about it have a strong prior that deaths would be minimized if the light were green all the time. It will take a while to accumulate enough statistical evidence to overwhelm this prior. In the meantime, the status quo solution of letting the normal bureaucratic processes trundle along seems fine. Eventually, when the evidence is in (most likely confirming the prior), there will be reliable information with strong moral authority that will hopefully nudge the lawyers to implement the right policy.

    The big moral question about this issue, in my view, surrounds the avoidable deaths that could be prevented by coordinating a rapid transition to universal, mostly-mandatory computer driving. While the government is in principle the entity capable of solving this sort of coordination problem, in practice it’s clearly not capable of doing so. (Some governments, like Hong Kong’s or Singapore’s, might be. It will be interesting to see if they do.) So it seems reasonable to me not to worry too much about it. We’ll eventually get to some OK equilibrium I suspect.

  • dat_bro06

    If there were a prediction market to predict when prediction markets might be adopted in the future, the case of self-driving cars would trade at 0% (because regulators are process mules, not innovative policymakers).

  • JW Ogden

    I wonder if you could very quickly get to the point where all interstate highways are always green. That would satisfy me. To me, driving around town is less boring.

  • http://ideas.4brad.com Brad Templeton

    There are a variety of issues with this proposal. It parallels the mistaken taxonomy of NHTSA’s “levels”, which focuses entirely on how much human supervision a vehicle needs. Some (including Google) reject all the levels except the top one, where no supervision is needed and the vehicle can run unmanned (and even has no steering wheel).

    Some have rejected the idea of standby supervision but accept constant supervision (such as Tesla Autopilot, when used as instructed). For supervised vehicles there are no green roads, ever, or else the color is very dynamic (i.e., unsupervised operation is allowed in a traffic jam on the highway).

    For vehicles able to operate unmanned, which is where the vast majority of the value and social change comes from, and where the most serious teams are all working, there are no humans with time to bid in a prediction market; they are either not in the car, or they use the car specifically so as not to worry about driving issues.

    Indeed, the mental cost of this is far too high, and the reward too random. It’s hard to imagine more than a small fraction of people having the time and resources to notice that others have overestimated or underestimated safety and technological capability on each segment of road, and to bet against them.

    I would also be skeptical of people buying shares based on accident rates: you don’t want to be right. People are also notoriously bad at judging road safety; people gamble their lives on road-safety evaluations and routinely fail, and gambling money may not do better.

    Currently, measuring the safety of these cars in different situations is an unsolved problem, but not in the sense of “could be solved by a market.” People don’t even know what metrics to use and what goals to attain. The liability falls upon the creators of the cars, and they must assure their safety as they have far more to lose than those betting in prediction markets.

    Finally, the hope is that there will not be statutory regulation at all before the technology has shown some maturity. That is the traditional norm in auto regulation, but there is talk of changing it and doing pre-market regulation. This is controversial; my personal view is that it would be disastrous. The tort system is more than enough incentive to be safe for now.

    Developers of these cars are inventing new and breakthrough technologies to make them reliable and safe to levels never before seen in this class of engineering. Regulations (and prediction markets) can only express conventional wisdom and existing practice; they are poor ways of examining revolutionary and novel paths to safety.

    • http://overcomingbias.com RobinHanson

      I didn’t say there’d be one (or more) market per person driving! If we wanted, we could use only a few dozen situation buckets, defined by a few features like day/night, sun/rain/snow, freeway/other. And OF COURSE markets could estimate accident rates in such buckets. In fact, they could do so as well as any competing mechanism.

      • Brad Templeton

        Markets would only estimate well if there are enough interested and motivated parties with access to very local knowledge. Of course we do have markets that judge accident rates already; they are called insurance companies, particularly mutual ones. And in spite of the fact that there is huge, huge money to be made (a $200B industry in the USA) from better estimates, the most they have done is attempt to measure general driver quality with small devices in your car.
        But insurance companies are screwed with self-driving cars, as they will always know much less about the risk of the cars than the makers of the cars, whose job it is to understand and minimize that risk. As I describe at http://robocars.com/accident.html, the types of accidents robocars have will be quite different from those of human drivers, and unlike human mistakes, no cause will repeat, because developers will immediately make and push fixes for it. Human accidents are much more stochastic. Of course, mistakes (or misunderstood situations) by programmers may have some predictable aspects, but it’s quite different from today’s approaches.

        But this is just one problem. Because the liability for accidents will fall on the developers, and this cost will dominate, they will be the ones deciding where their cars will operate, and under what conditions. For many years to come their approach will be paranoid, so even if the markets can give them a better estimate of risk for a given situation, they can’t and won’t use that information (other than in deciding where to focus improvement effort). They don’t want to be in court saying, “We allowed the car to go in full automatic mode because the prediction markets suggested low risk in this situation, but they were wrong, and oops, sorry about your husband.” Juries will punish that.

      • http://overcomingbias.com RobinHanson

        I don’t at all see why market speculators need very local knowledge to estimate the average accident rate of, for example, summer rainy nights on freeways when humans are watching but not driving. Yes of course automation makers may also add constraints on when their products can be used. My post is about additional regulation that may be imposed.

      • http://ideas.4brad.com Brad Templeton

        I had thought you wanted local evaluation. Yes, people could make evaluations of risk on broad classes of road, but developers will be constantly revising their tools, reacting not just to all accidents but to all anomalies. So every day the answer will be different for each brand of car on different roads. This is one reason it is generally thought undesirable for the state to directly regulate what vehicles can do on what classes of road. As such, there is also no desire to have the state regulate based on information from markets. The general process today is that the state, once it figures things out, creates functional safety standards; manufacturers self-certify that they meet those standards, and can then be sued for greater damages if they self-certified fraudulently.

      • http://overcomingbias.com RobinHanson

        As with speed limits and stoplights, regulation is usually crude relative to the details that products and customers consider. Even so, we often do approve such regulation.

      • http://ideas.4brad.com Brad Templeton

        That’s old-world-thinking regulation, created because people can’t be trusted, and because when one person makes an unsafe action and learns from it, the whole world doesn’t learn. Not so for computers: when one car makes an error, it will get fixed, and all the cars will never make that error again. In addition, you don’t need a vehicle code, because you can get the developers coding all the cars into a virtual room and just discuss what makes the most sense for any on-road issue.

      • http://overcomingbias.com RobinHanson

        Even with new world computers, I predict there will still be crude rules like speed limits, rules that don’t take many context details into account.

      • http://ideas.4brad.com Brad Templeton

        Indeed. So you propose the speed limit be set by markets, not by highway engineers, politicians, and car engineers? Of course, on much of the autobahn Germany doesn’t regulate speed, and it has a better safety record than the USA.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        This is the key comparison – prediction markets against panels of experts – that I’ve never seen addressed on OB.

      • Brad Templeton

        Prediction markets are very good at predicting things involving humans. They can also predict non-subjective things, but are less valuable there. A prediction market on the ratio of a circle’s circumference to its diameter would give an excellent result, but only because everybody knows the mathematicians have already figured that out.

        Another interesting issue is the known bias of crowds toward the less safe (in contrast with the bias of experts toward the too safe). There would be a slight bias in the market toward a higher speed limit, not because it’s safer, but because I personally want to get there faster and have my own bad evaluation of the risk of speeding to do so.

      • http://overcomingbias.com RobinHanson

        Prediction markets are neither simple crowds nor simple experts; they instead lean toward whichever is more accurate in each case.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        The question is whether it is more efficient to subsidize prediction markets or pay a panel of experts.

      • http://overcomingbias.com RobinHanson

        Yes, if we could cheaply change speed limit signs I would prefer they be set by markets.