Is SCAR A Better Measure Of Charging Station Reliability?


A recent blog post from Electric Era, a charging company installing battery-storage charging stations in the western US, shared some interesting news from California. The state’s Energy Commission announced that instead of using stall uptime, station uptime, or how often a person can get a charge at all, it would use “successful charge attempt rate,” or SCAR. The Commission defines SCAR and sets a goal:

Ninety percent of the time that a customer attempts to initiate a charging session at a regulated charger the charging session must last at least five minutes, which will be considered a successful charge for this regulation. The minimum SCAR is defined on a per-port and not per-site basis; each charger at a charging site must achieve a SCAR of at least 90 percent to comply.

Put more plainly, each stall at every charging station must successfully start a charge and hold it for at least five minutes 90% of the time. Being able to charge on a second or third try doesn’t cut it. Being able to charge at another stall doesn’t cut it, either.
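To make the definition concrete, here’s a minimal sketch of how a per-port SCAR could be tallied from a session log. This is an illustration, not the Commission’s actual methodology; the port names and session durations are made up.

```python
from datetime import timedelta

# Hypothetical session log: (port_id, session_duration) pairs.
# Per the regulation, an attempt counts as successful only if the
# session lasts at least five minutes, judged per port, not per site.
sessions = [
    ("port_1", timedelta(minutes=42)),
    ("port_1", timedelta(minutes=2)),   # failed: under five minutes
    ("port_1", timedelta(minutes=35)),
    ("port_2", timedelta(minutes=1)),   # failed
    ("port_2", timedelta(minutes=50)),
]

MIN_SESSION = timedelta(minutes=5)
SCAR_TARGET = 0.90

def scar_by_port(sessions):
    """Return {port_id: successful_attempts / total_attempts}."""
    totals, successes = {}, {}
    for port, duration in sessions:
        totals[port] = totals.get(port, 0) + 1
        if duration >= MIN_SESSION:
            successes[port] = successes.get(port, 0) + 1
    return {port: successes.get(port, 0) / totals[port] for port in totals}

for port, rate in scar_by_port(sessions).items():
    status = "pass" if rate >= SCAR_TARGET else "fail"
    print(f"{port}: SCAR {rate:.0%} ({status})")
```

Note that because the score is per port, one flaky stall fails on its own; redundancy at the rest of the site can’t paper over it.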

But, is this really the best way to measure reliability? To answer that question, we must first look at the alternatives and what their ups and downs are.

Other Ways To Measure Reliability

One big way we’ve seen reliability measured is the PlugShare way. Either you get a charge or you don’t. For this reason, a station can have most of its stalls down and still get a perfect 10 score, just as long as someone gets a chance to charge. This has the upside of letting charging providers rely on redundancy to keep their scores up, but it leaves people seeking a charge unaware of things like dead stalls, slow charge rates, and frustrating plug-unplug-replug hassles.

Uptime can also leave important context out. For example, 97% uptime (NEVI requires this) sounds great, until you consider what 3% of a month is: almost 22 hours! A station can be down for almost a whole day per month and still look shiny. On an annual basis, you’re talking about as much as 11 days of acceptable downtime, which obviously stinks. The other question is this: how does uptime get measured? Whole network? Whole site? Each individual stall? The possible ups and downs to all of those would take a long time to fully explore, but something like averaging uptime of all stations across a whole charging network could leave a lot of busted machines that strand people while still giving a great figure.
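The arithmetic behind those downtime figures is easy to check. A quick sketch (using a 30-day month):

```python
HOURS_PER_MONTH = 30 * 24   # 720 hours in a 30-day month
HOURS_PER_YEAR = 365 * 24   # 8,760 hours

def allowed_downtime_hours(uptime_pct, period_hours):
    """Hours a charger can be down while still meeting an uptime target."""
    return (1 - uptime_pct / 100) * period_hours

monthly = allowed_downtime_hours(97, HOURS_PER_MONTH)  # 21.6 hours
yearly = allowed_downtime_hours(97, HOURS_PER_YEAR)    # 262.8 hours
print(f"97% uptime allows {monthly:.1f} h/month down "
      f"({yearly / 24:.1f} days/year)")
```

So a 97% target permits roughly 22 hours of downtime a month, or about 11 days a year, matching the figures above.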

I’ve come up with a more complex system for measuring station reliability that I shared in another article. Instead of trying to distill EV charging down to one number, I suggested a multi-point set of statistics: a simple up/down, the number of stalls available now, uptime for the past month, and then a user rating where people can give a subjective score that gets averaged. After thinking about that more, it’s problematic because that’s more numbers than a person looking for a charger probably wants to sift through. The subjective score can also be manipulated by competing companies and their shareholders.

So, even my idea from last year kind of sucks.

Where SCAR Really Shines

Compared to other systems for measuring reliability and sharing that information, SCAR is great because it’s simple, it measures every attempt to charge, and it gets applied at every stall (and then presumably averaged). And because it’s measured by an agency overseeing the network, grants, and other aspects of charging, it gives regulators a chance to see where things are going wrong and take a deeper look. This can, in turn, lead to the further study needed to make good decisions, especially those relating to handing out government money.

In other words, SCAR is a great system for managing a network at the site, network, and governmental level when further decisions need to be made.

Where SCAR Falls Short

What I don’t think SCAR will work well for is user-facing ratings. If you’re looking for a charging station, knowing how often it charges on the first try is useful, but it doesn’t tell you whether the site has other problems.

For example, if you see that a site always charges on the first try, you could show up and get a charging session at half speed. If you need the range, you won’t want to leave, and you might not be able to switch stalls because the others are busy. So, your charging session contributed to a great score, but it wasn’t a great charging session.

Another way SCAR could fall short is in places where people don’t attempt to use a broken charger. It’s common for EV drivers to put the cables up and wrap them over the top of a station to signal to other drivers that the stall is down. So, if people just don’t try to charge, there won’t be any more failed sessions, which will mean the score doesn’t go down to signal that there’s a problem. 

I’m sure there are other problems that SCAR doesn’t reveal to users looking for a charge. Feel free to look at that more in the comments or on social media.

Different Tools For Different Jobs

For government regulators and grant providers, SCAR is a great tool to have in the toolbox. It helps to set a minimum goal, and 90% SCAR means that 90% of the time a plug means a charge. Networks that can’t meet that at the stall, station, or network level probably shouldn’t be rewarded with more grant funding.

But, for users, there still needs to be a multi-number approach. Objective things, like how often people have been able to charge, are important. But giving a station a star rating along with the raw numbers lets users see when a station isn’t satisfying customers. That gives the driver a chance to swipe through the reviews and see what exactly is wrong.

So, I’d have to conclude that SCAR is a good quantitative measurement we can use to make rules with, but we need something to capture whether drivers are happy with the station. At the end of the day, that’s what we really want: happy EV drivers.

Featured image by Jennifer Sensiba.





Jennifer Sensiba

Jennifer Sensiba is a long time efficient vehicle enthusiast, writer, and photographer. She grew up around a transmission shop, and has been experimenting with vehicle efficiency since she was 16 and drove a Pontiac Fiero. She likes to get off the beaten path in her "Bolt EAV" and any other EVs she can get behind the wheel or handlebars of with her wife and kids. You can find her on Twitter here, Facebook here, and YouTube here.
