The recent run of crashes of cars with autonomous driving features — including the Tesla Model S fatality in Florida — highlights the most important question people can ask about self-driving vehicles: How safe is this technology?

Cars manufactured by Tesla, which has aggressively added self-driving technology under the misleading Autopilot marketing label, have been involved in most of the reported collisions. While the crashes are tragic, don’t be too quick to blame Tesla or its technology. Vehicles without self-driving technology crash at a higher rate than cars equipped with it.

Yet the mishaps highlight a real problem with the current state of self-driving technology.

Automakers are increasingly putting self-driving technology into the hands of drivers who have had little to no orientation and virtually no training on how these features affect their driving, road perception and attention. This is an important omission. When self-driving tech is enabled, a vehicle may behave in unfamiliar and surprising ways.

There’s also the inevitable driver confusion over exactly what the autonomous features do. Tesla’s Autopilot is a case in point. You can’t just flick a switch, sit back and let the robot drive. You have to stay engaged. Moreover, similarly labeled features from different automakers will have different capabilities and limitations, so you can’t just jump from car to car and expect everything to behave the same way.

On some vehicles, stop-and-go cruise control might work only at speeds above a seemingly arbitrary 37 mph. On others, it might work only at speeds up to 30 mph.

Despite this period of confusion and uncertainty, more regulation is not the answer. More regulation would bring innovation to a standstill and delay the promise of self-driving vehicles to improve safety and save lives.

It is already happening with headlights, which are heavily regulated in the U.S. even as new technology has enhanced safety in Europe and other regions. Recently the Insurance Institute for Highway Safety tested headlights on small sport utility vehicles sold in the U.S. Not a single vehicle of the 21 tested by the insurance industry trade group earned a good rating for how far its headlights illuminate the road on straightaways and curves, and for how much glare they produce for oncoming drivers.

This is the case despite data that shows about half of traffic deaths happen in the dark or in dawn or dusk conditions. Better headlights should reduce those fatalities, IIHS said. Why hasn’t this problem been solved quickly? Getting approval to use even obviously beneficial new technology can take years.

We have one of the most heavily regulated transportation systems in the world. If we could simply regulate our way to improved safety, we would have one of the safest systems in the world.

We don’t.

The U.S. has 7.1 traffic fatalities for every 1 billion kilometers driven, according to the World Health Organization. Canada and nearly every country in Western Europe have lower rates. It’s just 3.5 deaths in Sweden, 4.5 in the Netherlands and 4.9 in Germany, even with its autobahns — where there are often no speed limits.

Traffic deaths list. (Source: World Health Organization, 2013)

So if not regulation, what would ease the rollout of self-driving cars and trucks?

John Adams, a retired geography professor at University College London, has a perspective on human behavior and risk tolerance that offers some guidance.

Over coffee in his London apartment in 2011, Professor Adams taught me the concept of the individual’s “Risk Thermostat.” When people feel at risk, they behave more cautiously. When they feel safer, they are more comfortable taking risks.

Adams first explored this idea nearly three decades ago while looking at traffic safety in Sweden and Denmark. The two countries had similar safety records as measured by insurance claims. But then Denmark introduced a seat belt law. Danish insurance claims jumped, while claims in Sweden remained the same.

That was a surprising outcome. Adams found at the time that using a seat belt helped car occupants escape injury or death but made it more likely they would be in a crash. (Which can be very bad news for pedestrians and cyclists.)

When using seat belts, we are likely to feel safer, which often means we are more likely to drive in a way that is slightly riskier. Over billions of miles, this small increase in risk results in a higher rate of crashes.

If environments where drivers feel safer result in riskier behavior, could the opposite also be true?

Yes.

Traffic planners in Bath, England, count on it. They redesigned the roadways at the city’s train station, “reverse”-banking the main circular cobblestoned road that serves the station. That made drivers feel as if their vehicles were moving more quickly. Then planners removed traffic signs and pedestrian markings, including crosswalks.

This created a mild feeling of panic for everyone, and traffic compensated by slowing. The changes forced drivers and pedestrians to pay attention to one another. Signals such as eye contact and waved hands now “manage” traffic and pedestrian interactions.

This may seem like transportation anarchy. But I have seen it in action — and it works. The rate of traffic incidents has dropped.

Experiments like the train station in Bath have encouraged traffic planners in Europe to develop a concept they call “shared spaces.” The idea is that roadways can be designed in a way that encourages pedestrians, bicyclists and motor vehicle drivers to “share” the road in a manner that improves safety.

One country that has really run with this concept is the Netherlands. One town there, Makkinga, has removed all traffic signs. Drachten, another town, has a road that runs through the middle of a playground. Europe also makes far greater use of roundabouts, which have been shown to be much safer than traffic lights.

All of this is designed to make people pay attention.

Today’s self-driving tech is a great example of Professor Adams’ Risk Thermostat problem. Stop-and-go cruise control, automatic emergency braking and lane keeping are all features that can encourage risky behavior if their capabilities are oversold. (This is where I do fault Tesla Motors.) Drivers have turned their personal risk thermostats down so far that they feel comfortable posting videos of massively inattentive driving on YouTube and other websites.

Imagine that a driver has Tesla’s Autopilot feature engaged and is cruising comfortably down the highway. Traffic is light, and with nothing to do, the driver lets his or her attention drift. Until Tesla and other manufacturers have a true autopilot that reliably drives the car without ever needing more driver input than the destination, this is a really dangerous situation. Driving a vehicle safely involves a large number of cognitive activities, some conscious, some less so. This explains why we are tired after a long drive. Our brain has been busy.

Damage to Tesla Model S in Florida crash with semitrailer. (Photo: Florida Highway Patrol)

In the Tesla scenario in Florida, the driver had shut down most of the cognitive activities associated with driving. These activities don’t just start up again instantly when the Autopilot system encounters an unexpected situation like a truck it has mistakenly recognized as a billboard. The human brain is simply not wired to instantly go from a relaxed state to one where it can quickly assess a situation and take the appropriate corrective action.

A shift in conscious state takes a moment or two — time enough for a collision to occur. This moment of confusion is called inattentional blindness. Self-driving technologies bring this to the fore in our industry. It’s a problem in other fields as well: Air France Flight 447 crashed off the coast of Brazil in 2009 when the autopilot suddenly disengaged after getting confusing speed data from iced airspeed sensors. The co-pilot, who was flying the plane, could not resume control quickly enough to prevent the crash.

I spent the last several months testing the current slate of vehicles offering self-driving features. During my tests, all failed at one point or another. So it is not a question of if a driver will need to take over but when.

For now, these systems should be viewed as backups for an attentive driver. Framing them that way requires manufacturers to walk back the marketing claims being made about self-driving tech. The features should be designed so that, when engaged, they provide no support for routine driving and jump into operation only if vehicle and driver safety are threatened.

Simply telling drivers they must remain alert, while providing features that encourage the opposite, is a horribly bad idea. Terms such as “autopilot” and “self-driving” are poorly chosen. Today’s technology is neither.

At some point, self-driving technologies will have improved to the point that failure is extremely rare. Safe, truly self-driving, fully autonomous — which the National Highway Traffic Safety Administration defines as Level 4 technology — vehicles will become a reality and my concerns will no longer be relevant.

Until then, I think back to a conversation with John Krafcik, chief executive of Google’s self-driving car project, at the New York International Auto Show earlier this year.

Self-driving cars are both “closer than you think and further than you might imagine,” he told me.

His point didn’t fully register at the time, but I think his idea was simple: Self-driving technologies are already here, starting with the introduction long ago of automatic transmissions. But fully self-driving vehicles, operating at a level of safety that will be acceptable to society, are perhaps decades off.

There’s no doubt the safety gains will be enormous. But until then we face a difficult transition where cars need to be designed to avoid luring humans into vehicular somnolence.