Autonomous Vehicle Pioneer Urmson Talks About Safety and Risks

June 22, 2020 by Jerry Hirsch, @Jerryhirsch

Chris Urmson is a robot scientist who is really into cars, having earned a Ph.D. in Robotics at Carnegie Mellon University with a thesis titled "Navigation Regimes for Off-Road Autonomy."

He helped launch Google’s self-driving car unit, now spun off as Waymo. Urmson currently heads Aurora Technology, a San Francisco startup developing a robotic driving system for commercial vehicles and passenger cars.

Aurora’s other founders make up a dream team of self-driving vehicle experts, including Sterling Anderson, Tesla’s former head of Autopilot, and Drew Bagnell, a founding member and head of Uber’s autonomous driving division.

Urmson recently participated in a virtual panel on autonomous vehicle development hosted by the Partners For Automated Vehicle Education industry consortium.

Here are his top line thoughts:

  • Safety is not something that you bolt on at the end.
  • Safety delayed is safety denied.
  • It’s not when self-driving cars will be here—it’s how they’ll be here.
  • It is dangerous when advanced driver-assistance systems, or ADAS, are branded as self-driving.
  • This is a transformative moment in transportation.
  • Building and delivering this technology is extremely challenging.
  • This pandemic amplifies the benefits of self-driving.

Here is an expanded and edited version of the conversation.

How do you ensure the safety of the system that you’re developing?

It starts with building a safety culture in the organization and having people understand that one of the biggest impacts we can have with this technology is around safety, saving lives that will otherwise be lost on roads around the world.

We think about how we build a technology that ultimately is sufficiently safe to be out on the road. There are really two fundamental parts to that. One is when something breaks, how does the system respond? Does it understand that something broke, and does it mitigate the risk generated by the things that are broken? That is functional safety. The other part is, if the thing is operating the way it's supposed to, is it operating in a way that is safe in the world? That is safety of the intended function, or SOTIF.

One of the parts that maybe gets a little bit lost is that we need to be careful and thoughtful about what threshold of risk we accept. We obviously want to drive that to zero over time. But it's very easy to overlook the fact that the status quo is broken. There's an incredible opportunity to move from the status quo toward zero. We should be saving those lives along the way, not waiting for the perfect at the expense of all those lives.

Is there a cybersecurity risk with autonomous vehicles?

We live in an increasingly connected world, and we need to pay attention to it. It is one of the risks we face even in my home right now. I have a bunch of connected devices, and there are people who may find malicious value in doing something with them. I think the thing that is overlooked is that our vehicles today are already very connected, and there are already risks to those platforms. We need to be working to secure the vehicle. I don't know that an AV makes it particularly riskier, because vehicles already have an immense amount of electronics in them. They're connected, and problems can happen.

The other thing we need to think about is the economic value. Most cybercrime is about making money, and there isn't that much money to be made in taking control of a vehicle on the road. We need to be thoughtful about the long-term kinetic risk and the injury risk that's there. It's important, but it's not the most important risk we have to be working against.

What do you think of the “trolley problem,” or when an AV has to pick between two bad outcomes?

The trolley problem is not new. We didn’t come up with it in AV. It has almost certainly been around in different forms for millennia. It’s really a question of how do we value life and what is our expectation on that. Part of the goal of building an automated driving system is for it to be an exceptionally good defensive driver. And so it should not be getting into a situation where it has to choose one or the other.

We need to put our focus where the core of the problem is, deal with that, and then have some thought about what we accept socially for that tiny sliver. In the spirit of the safety-delayed-is-safety-denied mentality, this is not the place where practitioners in the industry should spend the majority of our time. But as a society, it's a really interesting conversation to have, highlighting what we value and how we ascribe value to life.

Where do you focus development efforts?

Unlike developing a web application or even a consumer electronics product, we have to tackle everything from the electronics for the sensors, the radar, and the high-speed computing and networking in the vehicles, through the software infrastructure that allows us to implement sufficiently real-time systems. Then there's the complexity of taking the signal coming back from the sensors, interpreting it, generating a model of the world, predicting how that model is going to evolve over the next few seconds, figuring out, based on how the world evolves, how to pick a safe path through it, and translating that into something the vehicle can execute as it moves through the world. And then add on top of that the fact that we're driving a multi-ton machine through the world.
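
The loop Urmson describes — interpret sensor signals, build a world model, predict its evolution, pick a safe path, translate that into vehicle commands — can be sketched at a very high level. This is an illustrative outline only; every name here is hypothetical and does not reflect Aurora's actual software.

```python
# Illustrative sketch of the driving loop described above: sensor signal ->
# world model -> prediction -> path selection -> vehicle commands.
# All class and function names are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    actors: list = field(default_factory=list)  # tracked objects near the vehicle


def interpret(sensor_frames):
    """Fuse raw sensor returns (lidar/radar/camera) into a world model."""
    return WorldModel(actors=[f for f in sensor_frames if f is not None])


def predict(model, horizon_s=3.0):
    """Estimate how each actor's state evolves over the next few seconds."""
    return {i: actor for i, actor in enumerate(model.actors)}  # placeholder forecasts


def plan(model, forecasts):
    """Pick a safe path through the predicted world (trivially, hold the lane)."""
    return ["keep_lane"]


def to_controls(path):
    """Translate the chosen path into commands the vehicle can execute."""
    if path:
        return {"steer": 0.0, "throttle": 0.2}
    return {"steer": 0.0, "throttle": 0.0}


# One tick of the loop:
frames = ["lidar_return", "radar_return", None]
model = interpret(frames)
controls = to_controls(plan(model, predict(model)))
```

A real system runs this cycle many times per second under hard real-time and redundancy constraints, which is where much of the engineering difficulty he mentions lies.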

We need to be doing all that at a very high level of reliability, robustness, and safety. And then we get into the business model. How do you take that technological marvel and marry that with the various industries, whether it be the truck or car OEMs, or the transportation and logistics companies?

This technology is going to move from these seeds and germinate into something that is ubiquitous and will yield all of the social benefits in terms of safety, improved access, and lower cost to move people throughout the world. We’ll see this happen in the coming decades.

