Twitter and the other social media where Tesla fans tend to hang out are abuzz with Tesla’s Safety Scores. The idea is that the scores (probably with some improvements) will be used to determine how risky a person’s driving is, and then be used to determine what rates a driver should be charged. While that’s not happening yet, the scores are now being used to determine who gets to participate in the FSD Beta program via “The Button.”
In another article, I addressed the privacy concerns, but I also wanted to address a more practical problem with this and other insurance monitoring schemes: an attempt to quantify something that’s inherently qualitative. In other words, trying to reduce complex information (the many, many decisions a driver makes in a week) to a one-, two-, or three-digit number loses important context that something like a driving history preserves. The result is a number with no context.
To Be Fair, Some Of Humanity’s Biggest Successes Were A Result of Doing This On Purpose
Tesla can’t really be faulted for doing this, because Western culture and science itself do this all the time, and for very good reasons.
As philosopher Philip Goff points out in Galileo’s Error, subjective things and the experiences deeply tied to our consciousness are very hard to analyze. We could debate them endlessly, like in The Philosophers’ Football Match (a Monty Python skit), but as a civilization, we really needed to be scoring points and making advances. Goff’s point is that scientific thinking can’t analyze things we only experience in our consciousness, let alone deep questions of spirituality or the metaphysical.
Galileo intentionally narrowed science’s focus to the things that could be quantified (things like size, shape, position, motion) so that we could get down to business and solve problems. The scientific revolution that followed gave us everything from computers to limited space travel, but nobody really thought that things that could be quantified were the only things that mattered. The idea was just to save those kinds of questions for later so we could get shit done.
Today, almost nobody who’s a serious player in the physics world really thinks particles are sizeless points or that there really was a singularity at the beginning of the universe. Truth be told, we don’t really know what particles are, but we’ve done a lot of great things by skipping that kind of question and just focusing on what particles do so we could get them to do useful things.
Taking This Too Far
The problem today is that we’re taking this philosophy of quantification far, far beyond anything Galileo ever intended. Subjective things, things we only experience in our consciousness, spiritual questions, and the metaphysical aren’t nonsense to be ignored, and they can’t simply be turned into numbers we can do math on.
Want an example in the Tesla community? Let’s think about the comparison memes that have been floating around since the announcement of the Tesla Roadster 2.
In theory, the numbers are all true. Nobody disputes that. But, let’s look at another comparison with numbers that are 100% accurate:
Even if you’re a diehard Tesla fan, you’re probably seeing what the problem is now. Yes, it’s entirely true that the 1996 Toyota Previa has better specs in this comparison. The Previa can be had for 3 grand, and it comes with a fridge, more seating, more sunroofs, and more wipers. Plus, you can go buy one TODAY!
But we’re missing important context with these numbers, aren’t we? One vehicle is an efficient family hauler, and buying something from the last century is what you’d do to save money. The Roadster is meant to compete with supercars and hypercars, not old minivans with 300,000 miles. Modest differences in specs matter to some buyers, but to others it’s about the brand, the car being expensive/exclusive (the fact that the Bugatti costs $3 million will be a plus to some buyers who are just buying it to show off how rich they are), or the looks.
Honest, accurate numbers are great, but they aren’t enough when it comes to cars because there are other factors that can’t be turned into numbers and directly compared.
Why The Safety Score Sucks
The Safety Score makes a similar mistake.
Yes, the data it collects is probably close to 100% accurate. The computer can figure out what a hard braking event is, how close the car is to the vehicles in front of it, and many other things. There may even be a good correlation between doing certain things on the road and getting into wrecks, one that backs up Tesla’s choice of metrics.
The problem is that these numbers are largely divorced from context. Was the hard braking event caused by the driver’s excessive acceleration into a situation that required hard braking (poor driving)? Or did some moron pull out in front of the driver, who then had to brake hard to avoid a collision (an example of good driving)?
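To make the point concrete, here’s a toy sketch in Python. This is not Tesla’s actual formula; it’s a made-up, deliberately simplified scorer that penalizes hard-braking frequency. The key thing to notice is that the *reason* for each braking event never enters the math, so the defensive driver and the aggressive driver come out identical:

```python
# Hypothetical, simplified safety scorer -- NOT Tesla's actual formula.
# It shows how context gets lost: only the count of hard-braking events
# survives into the score, not why each event happened.

def toy_safety_score(hard_braking_events, miles_driven):
    """Penalize hard-braking events per 1,000 miles; score floors at 0."""
    rate = len(hard_braking_events) / (miles_driven / 1000)
    return max(0, round(100 - 10 * rate))

# Driver A: braked hard because they accelerated aggressively into traffic.
driver_a = [{"cause": "aggressive acceleration into stopped traffic"}] * 3

# Driver B: braked hard because other cars pulled out in front of them.
driver_b = [{"cause": "avoided a car that pulled out"}] * 3

# The "cause" field is never read by the scorer, so opposite driving
# quality produces the same number.
score_a = toy_safety_score(driver_a, 1000)
score_b = toy_safety_score(driver_b, 1000)
print(score_a, score_b)  # both 70
```

Any real scoring system is more sophisticated than this, but the structural problem is the same: once the event is reduced to a count, the qualitative judgment of who was actually at fault is gone.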
And all of this is before we consider the actual driving histories of the people getting these scores. If a person has zero tickets, zero collisions, and zero claims on their record, that’s a safe driver. You can’t take a sample of their driving data with no context, assign it a low score, and then say that person should be paying more for insurance. Insurance rates are supposed to be based on the risk of paying out claims, and a history of no claims and no tickets is enough.
With an at-fault accident, it’s indisputable that something unsafe occurred. It doesn’t take a mathematician, an algorithm, or an artificial neural network to see that a car or cars got smashed up. After an accident, an investigator must make a number of qualitative judgements to determine who was at fault.
When it comes to tickets, you’re getting information that was first judged by several humans (a cop, a judge, etc.) before it got counted as an infraction. The police officer had to observe some driving behavior and look at the totality of the circumstances before deciding that a qualitative violation of the law occurred. Then, a judge can look at the case and decide if it was really a violation. After all, police don’t go to law school, so the accused gets an opportunity to have the situation reviewed by someone who did.
If it’s a serious accident, nobody gets put in prison because some number got crunched and turned out bad. A jury of one’s peers gets as much information as allowed by the law, and then makes a qualitative judgement based on all of the context, and doesn’t just blindly look at a number supplied by a computer.
Thus, it’s fair to use the history of these kinds of things to judge a driver’s safety, because the context was considered, sometimes multiple times, at each step.
Garbage In, Garbage Out
Like anything involving computers, the idea of “garbage in, garbage out” applies here. Numbers collected by a computer with no context are a very poor substitute for better data that factored that context in at each step.
Computers, science, and numbers have done amazing things for human civilization, but we can’t forget that a few numbers don’t tell the whole story.