Smart cities continue to receive global attention. Alphabet (née Google) is engaged in Toronto through its Sidewalk Labs subsidiary, creating a smart neighborhood on the city's formerly industrial waterfront. Microsoft is engaged through its CityNext initiative in bringing AI and IoT to urban issues. MIT continues to pay attention to the area, as do numerous Internet of Things (IoT) startups.
But all of the hardware and AI tend to obscure the heart and soul of smart cities: the intelligent and committed teams engaged in long-running processes of data gathering, analysis, and policy to improve the urban fabric. Most of us live in cities, and that share is increasing. A smart city is shaped by its urban planners and civic organizations to be healthier and more convenient for its residents, and more efficient for everyone, including business.
It’s worth going back to a basic definition and looking at some extensions. Wikipedia’s starting point is pretty good:
A smart city is an urban area that uses different types of electronic data collection sensors to supply information which is used to manage assets and resources efficiently.
One of the things that is good about this definition is that it’s not about visible gizmos and it doesn’t presuppose IoT solutions or CCTV. It’s about using the capabilities of the various sensors available to us to gather information, and then using other techniques to make cities better.
This has been going on for years; we're just getting better at it. A couple of decades ago, it was common to see people standing on street corners with clipboards, counting passing vehicles.
Now, there are simple sensors run across roads, and increasingly bike paths, that send information on each passing vehicle to a computer.
Both examples use existing technology, but one is a lot more expensive than the other and a lot more error-prone. And both have the same purpose: gathering data on how traffic infrastructure is used and how those uses are changing, in order to support keeping or changing it. This data allows better decision making, and in many cases it allows more controversial changes, typically cycling infrastructure, to be defended with evidence.
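The data-gathering step above is mundane in practice. A minimal sketch of what happens to a road counter's raw readings might look like the following, assuming a hypothetical format of one timestamp per passing vehicle:

```python
from collections import Counter
from datetime import datetime

# Hypothetical raw readings from a road-tube counter: one ISO 8601
# timestamp per passing vehicle. Real counters emit richer records
# (axle counts, direction, speed), but the aggregation idea is the same.
readings = [
    "2019-06-03T08:05:12", "2019-06-03T08:17:44",
    "2019-06-03T08:41:02", "2019-06-03T09:03:59",
    "2019-06-03T09:12:30",
]

def hourly_counts(timestamps):
    """Bucket vehicle-passage events into (date, hour) counts for a traffic study."""
    counts = Counter()
    for ts in timestamps:
        dt = datetime.fromisoformat(ts)
        counts[(dt.date().isoformat(), dt.hour)] += 1
    return dict(counts)

print(hourly_counts(readings))
# {('2019-06-03', 8): 3, ('2019-06-03', 9): 2}
```

The hourly tallies are the same numbers the clipboard-holder used to produce by hand; the analysis built on top of them is where the city gets smart.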
For the most part, we ignore this data gathering process. It’s just invisible street furniture. And we see none of the analysis of the data happening in the background. And we see none of the position papers and policies and decisions arising from analysis of the data. But we do see better traffic flow.
A tremendous amount of urban infrastructure was built on first principles: what a designer thought made sense given limited empirical evidence, what historical examples suggested, or what was simply poorly thought out. Evidence-based decision making is resulting in a myriad of changes, many of which we don't think of as smart city components.
A major example of this is the realization that road lanes were too wide and that there were too few solid objects close to the road. The North American highway designers of the early 20th Century were working from first principles when they decided that 13-foot (4-meter) lanes and shoulders cleared of any solid object that might be struck would make roads safer. They were wrong. Wider lanes and no roadside obstructions led to significantly higher speeds, even between lights, with drivers accelerating hard and then braking hard. Higher speeds meant more frequent collisions of greater severity, both with other vehicles and with other road users such as cyclists and pedestrians.
But it took a lot of data to figure that out, and data was much harder and more expensive to collect before cheap sensors were plentiful. As a result, the error persisted for 80 years and is embedded in most city streets today. What's one outgrowth of this smart city learning? The complete street.
A complete street is one that gives sufficient room to pedestrians, cyclists, and cars. It features bike paths segregated from both moving traffic and pedestrians. Left and right turns are typically moved out of the through lanes, in front of parked cars.
This is a screen grab from Google Maps Street View of the street I live on in Vancouver, BC: narrow lanes for cars, a separated bike path behind parked cars on the left, and trees growing close to the road, separating pedestrians from traffic and cyclists. I could have walked outside and taken this picture, but standing in the middle of an urban street isn't the wisest choice.
Why are complete streets a result of discovering that lanes were too wide? Well, the widened sidewalks and the room for bike paths came from narrowing the lanes and putting parked cars between the bike path and moving traffic. And the trees planted between the road or bike path and pedestrians stand on ground reallocated from the too-wide lanes, putting solid physical boundaries close to moving cars.
The results are intriguing. Counter-intuitively, car throughput on complete streets is equal or higher, while maximum vehicle speed drops a lot. The turning zones outside the through lanes mean through traffic isn't impeded by cars turning left or right. Cycling and pedestrian traffic shoots up, as both are isolated from faster-moving, heavier traffic and shaded by the trees. Complete streets give a lot less room to cars and put a lot more solid objects close to them, yet total throughput increases substantially, safety increases substantially, and the city gets smarter.
If you look at before and after pictures of the streetscape above, what you would see is different paint on the road, a cheap intervention, and trees by the side of the road, which have multiple strong value propositions for cities beyond traffic calming, including reduced rain flooding, urban heat island reduction, and traffic noise mitigation. This wasn't rebuilding a major artery; it was repainting it. That's smart and inexpensive.
Complete streets are an example of smart cities.
But the example is an interesting one for another reason. There are no visible sensors in the picture shown above. There don't need to be, for the most part. Sensors and empirical evidence aren't required everywhere, all the time, for useful information to be gathered. Sensors can be set up, used for the duration of a study, then moved elsewhere. Data from other sensor networks can be drawn on. City accident reports can be assessed. Traffic cameras can be used to assess speed. But a smart city doesn't need to be instrumented everywhere for everything all of the time, as much as IoT manufacturers and major software and cloud vendors might like it to be.
Here’s another example of smart city technology we don’t really think about: transit integration with mapping software.
How does Google Maps know how to get from Vancouver to the Tsawwassen Ferry Terminal by transit? How does it know how long it will take?
Transit agencies in pretty much every city in the world expose their routing information in a fairly standard way for third parties to use. Companies like Google pick it up, then integrate it with their existing mapping for cars, including the traffic slowdowns and rerouting they receive from other sources.
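That "fairly standard way" is, for most agencies, GTFS (the General Transit Feed Specification): a zip of plain CSV files such as stops.txt, routes.txt, and stop_times.txt. A minimal sketch of consuming one of those files, using an invented two-stop sample (the stop IDs and coordinates here are illustrative, not real feed data):

```python
import csv
import io

# A tiny illustrative sample in GTFS stops.txt format. A real feed is a zip
# archive containing stops.txt, routes.txt, trips.txt, stop_times.txt, etc.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
1001,Waterfront Station,49.2856,-123.1115
1002,Bridgeport Station,49.1953,-123.1268
"""

def load_stops(text):
    """Parse GTFS stops.txt into a dict: stop_id -> (name, lat, lon)."""
    reader = csv.DictReader(io.StringIO(text))
    return {
        row["stop_id"]: (row["stop_name"], float(row["stop_lat"]), float(row["stop_lon"]))
        for row in reader
    }

stops = load_stops(stops_txt)
print(stops["1001"][0])  # Waterfront Station
```

Because the format is this simple and public, any mapping provider, or any hobbyist, can fold an agency's schedule into a routing engine.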
All of a sudden, we have multi-modal transportation routing choices on our laptops, tablets, and phones, and can make choices that include transit easily.
And it’s pretty easy to find out when trains and buses will arrive these days. Most train stations have LED signboards saying when the next train will arrive, information that comes from both the schedule and sensors that say where trains are. Many cities have numbers you can text for each bus stop on a route which respond with the time when the next bus arrives.
All of this trivially accessible data allows us to be smarter about our transit choices and more efficient.
That’s a smart city intervention.
Another example I’ll provide is air quality monitoring and DieselGate. What do they have to do with one another? VW was caught by an independent emissions testing effort that strapped sensors to VWs and drove them in the real world, not by smart cities efforts, wasn’t it?
Yes and no. About 15 years ago, Europe turned to diesel as the means to minimize CO2 emissions. It was a reasonable interim choice in an era before electric vehicles were viable. Europe also invested heavily in transit, bike lanes, and walkability, all of which are more viable in densely populated European cities than in sprawling North American ones.
In part, this was based on the promises of the major European auto manufacturers about clean diesel technologies, the same technologies implicated in DieselGate.
After a decade or so of increased diesel use, along with increased transit, biking, and walking and decreased emissions from coal generation, European cities were baffled and somewhat horrified to find their air quality slightly worse than it had been in 2000. Here’s an excerpt from a typical 2012 article:
More than 90% of people living in European cities breathe air that the UN’s World Health Organisation says leads to respiratory problems, heart disease and shortened lives, according to a study published on Tuesday.
This was specifically a case of sensors — basic smart city technology — collecting data which was analyzed over an extended period of time to determine if targeted desirable outcomes were being achieved.
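The analysis behind that kind of finding is conceptually simple: compare annual means from monitoring stations against a regulatory limit, year over year. A sketch with invented station readings, checked against the EU's real annual NO2 limit of 40 µg/m³:

```python
# The 40 µg/m³ annual-mean NO2 limit comes from EU air quality law;
# the station readings below are invented for illustration.
EU_ANNUAL_NO2_LIMIT = 40.0  # micrograms per cubic meter

annual_means = {2000: 48.2, 2004: 47.5, 2008: 46.9, 2012: 47.8}

def exceedance_years(means, limit=EU_ANNUAL_NO2_LIMIT):
    """Return the years in which the annual-mean reading exceeded the limit."""
    return sorted(year for year, value in means.items() if value > limit)

print(exceedance_years(annual_means))  # [2000, 2004, 2008, 2012]
```

Every year over the limit, despite a decade of supposedly cleaner vehicles: that is the shape of the data European cities were staring at.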
And they weren’t. The smart cities were finding out that the collective decisions that they’d made were not turning out to be smart enough. And air pollution kills people. It diminishes quality of life. It makes people sick. It prevents people from choosing to bike or walk to work. It acts as an inhibitor to efficient operation of the city.
But they weren’t sure what was causing it. Lots of hypotheses were thrown around. Hands were wrung. There are an awful lot of sources of air pollution.
The DieselGate results led to a reassessment of air quality monitoring, and a bunch of new temporary sensors being put in along city streets for short duration monitoring. And it became clear that the promises of the manufacturers of diesel vehicles weren’t being realized, even where non-spoofed technologies were in play. In the real world, diesel still wasn’t clean.
Conditions were radically different in 2014–2015 than in 2000. Diesel wasn’t the only low-CO2 vehicle technology now that multiple manufacturers had completely adequate and often outstanding electric cars on the road. And for urban purposes, electric truck and utility vehicle fleets make extraordinary sense. So Europe is pivoting rapidly to electrification and away from diesel, to the chagrin of the European manufacturers who have spent 20 years investing in diesel instead of pivoting in the late 2000s when it was clear what the ultimate drivetrain winner was going to be.
So now European cities, and cities in North America, are getting cleaner air, which is smart. Due to sensors. And analysis.
But the intelligence isn’t some gizmo that picks a pattern out of a stack of hay in a millisecond and puts up an alert on your phone. It’s multiple data sets from multiple sources, analyzed by committed and intelligent people, resulting in changes which benefit cities. It means breaking away from the patterns of the past that were built on legacy technologies, first principles assumptions, and the lobbying of legacy industries and embracing what empirical data shows us.
That’s what is making cities smart.
The last example I’ll provide is from the opioid epidemic. I was at the BC Health Information Management Professionals spring conference recently (one of my domains of professional expertise is automation in healthcare). The morning was spent on the opioid crisis that’s sweeping North America and, to a lesser extent, Europe. BC’s Lower Mainland is ground zero for this public health challenge. BC has seen overdose fatalities rise from roughly 200 a year to 1,400 a year over the past decade.
The most interesting presentation to me was from Surrey’s Fire Chief, Len Garis. He’s also an Adjunct Professor in the School of Criminology and Criminal Justice and an Associate of the Centre for Social Research at the University of the Fraser Valley, as well as Affiliated Research Faculty at the John Jay College of Criminal Justice and the Christian Regenhard Center for Emergency Response Studies in New York. Yeah, he has a lot of hats.
His presentation was focused on the use of evidence-based interventions with clear monitoring of results to deal with the opioid crisis. He worked with multiple agencies and Microsoft to create multiple geographic and statistical visualizations in near-real time and over time for opioid overdoses and related events.
This is one of his dashboards, covering the primary focus area for opioid and drug challenges in Surrey. It shows rates of opioid overdoses and, among other things, is used to detect overdose outbreaks, typically related to an overly strong proportion of fentanyl in a batch of drugs. When these incidents occur, data gathering sweeps them rapidly into the system through multiple channels. And when an outbreak is detected, texts and other messages alert first responders that an area has bad drugs. Then intervention kicks in rapidly.
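The outbreak-detection logic in such a dashboard can be sketched very simply: flag an alert when too many overdose events land in one area within a short window. Everything below, the window, the threshold, and the events, is invented for illustration, not taken from the Surrey system:

```python
from datetime import datetime, timedelta

# Illustrative parameters: flag a possible bad-drug outbreak when 3 or more
# overdose events in one area fall inside any 6-hour span.
WINDOW = timedelta(hours=6)
THRESHOLD = 3

def detect_outbreak(event_times, window=WINDOW, threshold=THRESHOLD):
    """Return True if any window-long span contains at least `threshold` events."""
    times = sorted(event_times)
    for i, start in enumerate(times):
        in_window = [t for t in times[i:] if t - start <= window]
        if len(in_window) >= threshold:
            return True
    return False

events = [datetime(2018, 5, 1, h) for h in (9, 11, 13)]  # three events in 4 hours
print(detect_outbreak(events))  # True
```

The real system is far richer (geographic joins, near-real-time feeds, multiple agencies' data), but the core idea, turning raw event timestamps into a timely alert, is this small.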
Garis and his team have used this strongly evidence-based approach to identify patterns of property crime related to opioid addiction. They’ve used it to hypothesize interventions, such as assessments of naloxone and related drug-safety measures in rehab houses, measured against outcomes.
These are all smart city efforts, in this case aimed at a current scourge, fentanyl. They’re part of what is preventing a tremendous number of additional overdoses in the Lower Mainland.
And this strongly evidence-based approach is part of the reason why Garis has all of those other hats. It’s been deeply effective, with provable results at low cost in Surrey, and he’s helping other cities around the world become equally smart. He’s integrated the efforts of first responders, health care, and social services to deal with an epidemic.
Technology giants such as Alphabet (née Google), IBM, and Microsoft have invested heavily in smart cities concepts and approaches. Most of them have not seen significant revenue from these efforts, and most of the efforts have a whiff of marketing about them. IBM set up a smart cities operations center concept in Rio de Janeiro about a decade ago, one that was rarely sold elsewhere. Alphabet is in Toronto with its Sidewalk Labs initiative, a makeover of former industrial land.
Microsoft’s work with Surrey is the strongest evidence I can find of a tech giant garnering excellent results. And mostly that was just providing a cloud-based toolkit inexpensive enough for a dedicated team, led by a visionary such as Garis, to create an intelligent set of responses.
Much of the low-hanging fruit of smarter cities has already been plucked. Urban budgets are always under stress, and analysis takes time and money.
Smart cities aren’t IoT sensors for everything on every corner all of the time. They aren’t just dumb CCTV cameras everywhere providing post facto blame allocation for crimes and accidents. Sensors are just the start. And it’s possible to overload the limited analysis bandwidth we have available with too much data.
Most of us are living with evidence of smart cities right before our eyes. We just don’t realize it.