Southern Florida Among Spots At Greater Risk Due To Sea Level Rise, Finds New Machine Learning Study


Sea level rise is a much-studied phenomenon, and it’s driven by global warming in two ways. First, the additional heat melts land ice, adding water to the oceans. Second, warmer water expands a bit. The combination means that as the world warms over the coming decades, sea level rise will accelerate. By 2050, we can say with very high confidence that we’ll see 20-30 cm (8-12 in) of sea level rise. The outlook for 2100 has more variance, because there is a lot more room for us to mitigate warming and more room for things to go wrong, but the median projection is about a meter (39 inches).

We thought we had a pretty good idea what that meant, since most people assume that elevations along coastlines are well understood. Too bad that’s not actually true.

A study published October 29, 2019 in Nature Communications, a natural sciences journal with a very credible impact factor of 11.88, has improved the state of the art significantly. The study is “New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding,” by Scott A. Kulp & Benjamin H. Strauss. Unfortunately, with higher accuracy came much higher risk.

To understand what the study did, it’s important to know how elevation is measured and communicated. There are numerous digital elevation models (DEMs) which provide elevation data to researchers, policy makers, and the like around the world. In many urban areas in wealthy countries, elevation is measured very accurately by lidar overflights from planes and, increasingly, drones. In the US, most of the coast is well mapped by lidar. But that’s an expensive way to determine elevation. Most of the world’s elevation is assessed from NASA’s Shuttle Radar Topography Mission (SRTM). That data was captured in 2000 and was initially available at full resolution only to select researchers; it became available at full resolution to everyone in 2014, when the US White House announced its public release.

NASA Shuttle Radar Topography Mission banner
SRTM banner courtesy NASA

What’s the problem with the SRTM data set? In a lot of places with dense foliage or buildings, the radar keyed off the top of the foliage or buildings, not the ground. That means that in a lot of places elevation is overstated by the SRTM data, which is what everyone outside of rich urban areas uses. That’s a big deal in coastal regions facing sea level rise. Extreme coastal water level (ECWL) exposure analysis is keyed off SRTM data for most of the world, and for a lot of the US as well. ECWL is about areas prone to regular flooding, not areas that will be permanently below the local average sea level.

What Kulp and Strauss did was define and execute a methodology to adjust the SRTM data for coastal areas, correcting it as far as possible to align with actual elevation.

Here’s where machine learning comes in. They took the SRTM data as input, fed it into a multilayer perceptron (MLP) artificial neural network, and used United States lidar data for specific areas to train it to adjust the SRTM elevation toward the actual elevation. Then they tested the results in multiple areas in the US and Australia to validate that the resulting model hadn’t been overfitted. We’ll get to the results, which get worse the further you read into the study. But since this is one of a series of articles on the use of machine learning in clean technology and climate solutions, we’ll first spend a bit of time on the neural net approach itself.

A multilayer perceptron neural network has a few characteristics. It has an input layer, in this case the NASA elevation data in SRTM. It has an output layer, the resulting new elevation map. And it has one or more hidden layers, the neural net proper, which take each block of input data, apply the network’s learned weights, and produce the output. That output is tested against the lidar data, which is highly accurate across a large sample, so the model learns to correct the SRTM data to align with lidar data where it’s available.
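
To make that concrete, here’s a minimal sketch of such a network in Python using PyTorch. The layer widths, activation function, and class name are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn as nn

class SRTMErrorMLP(nn.Module):
    """Minimal multilayer perceptron: input layer -> hidden layers -> output.

    The two hidden layers of 100 units and the ReLU activations are
    guesses for illustration, not the architecture from the study.
    """
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 100),  # input layer -> first hidden layer
            nn.ReLU(),
            nn.Linear(100, 100),         # second hidden layer
            nn.ReLU(),
            nn.Linear(100, 1),           # output layer: one value per cell
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One vector of cell attributes in, one corrected value out.
model = SRTMErrorMLP(n_features=23)
print(model(torch.randn(4, 23)).shape)  # torch.Size([4, 1])
```

The point is the shape of the thing: a vector of inputs passes through weighted hidden layers and emerges as a single output value per grid cell.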

The input layer is a bit more sophisticated than just a picture. The researchers used a 23-dimensional vector of known attributes at each target location, drawn from various data sets. The variables included neighborhood elevation values, land slope, population density, vegetation density, canopy height, and local SRTM deviations from elevations provided by ICESat. They trained the model on 51 million samples of these 23 variables. The output layer is simple: literally just a prediction of the SRTM error at the location.
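
Here’s a hedged sketch of what that training setup might look like, using scikit-learn’s MLPRegressor on synthetic stand-in data. The feature values, network size, and the toy error relationship are all placeholders, not the study’s:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Stand-in data: each row is a 23-dimensional vector of cell attributes
# (neighborhood elevations, slope, population density, vegetation density,
# canopy height, etc.). The real training set had ~51 million samples.
n_samples, n_features = 10_000, 23
X = rng.normal(size=(n_samples, n_features))

# Target: the SRTM error, i.e. SRTM elevation minus lidar ground truth.
srtm_elevation = rng.uniform(0, 20, size=n_samples)
lidar_elevation = srtm_elevation - (2.0 + X[:, 0])  # toy relationship
y_error = srtm_elevation - lidar_elevation

model = MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=500)
model.fit(X, y_error)

# Applying the correction: subtract predicted error from raw SRTM values.
corrected_elevation = srtm_elevation - model.predict(X)
```

In practice you’d evaluate on regions the model never saw, as Kulp and Strauss did with their US and Australian test areas, to confirm it generalizes rather than memorizes.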

It’s worth poking at resolution a bit. Before 2014, most of the publicly released SRTM data was at a resolution of about 90 meters (295 ft). The newly available data was, like the unlocking of high-resolution GPS in 2000, higher resolution: about 30 meters (98 ft). However, a lot of the other data sets are at vastly different resolutions. Population data is at the scale of a kilometer (0.62 miles), for example, and even that varies. A lot of data management was needed to align the 23 input variables. Because of that resolution gap, the output resolution is about 90 meters (295 ft), similar to the pre-2014 SRTM data.
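
As a toy illustration of that alignment step, here’s how a coarse grid, say 1 km population data, can be resampled onto a finer grid with plain numpy so that every cell has a value for each input variable. Real pipelines use proper geospatial resampling tools; nearest-neighbor repetition is just the simplest stand-in:

```python
import numpy as np

# A coarse grid at roughly 1 km per cell (e.g., population density).
coarse = np.arange(9, dtype=float).reshape(3, 3)

# Repeat each coarse cell over the ~11x11 finer cells it covers,
# bringing the grid to roughly 90 m resolution (nearest-neighbor).
factor = 11
fine = np.kron(coarse, np.ones((factor, factor)))

print(coarse.shape, "->", fine.shape)  # (3, 3) -> (33, 33)
```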

As with all neural networks, there’s little way to know what’s happening inside the network itself. All we can really do is compare the accuracy of the outputs to arrive at confidence levels.

And the resulting data set, CoastalDEM, is much more accurate than SRTM, especially near the water. For elevations 1-20 meters (3.3-65.6 ft) above sea level, the mean SRTM error in the USA is 3.7 meters (12 ft). In Australia, it’s 2.5 meters (8.2 ft). Globally, measured against the lower-resolution ICESat data, it’s about 1.9 meters (6.2 ft). When tested against US coastal cities, CoastalDEM reduces the error from 4.7 meters (15.5 ft) to less than 0.06 m (2.4 inches). Remember that for cities in rich countries, extreme coastal water level (ECWL) exposure analysis is already keyed off lidar data, so this doesn’t mean that New York is going to be underwater faster.
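
Accuracy comparisons like these are simple to compute once a ground-truth surface is available. Here’s a quick sketch with made-up arrays standing in for the real rasters, seeded with the article’s headline biases:

```python
import numpy as np

rng = np.random.default_rng(0)
lidar = rng.uniform(1, 20, size=100_000)  # "truth" elevations, in meters

# Synthetic DEMs: SRTM biased high, CoastalDEM nearly unbiased.
srtm = lidar + rng.normal(3.7, 2.0, size=lidar.shape)
coastal_dem = lidar + rng.normal(0.06, 0.5, size=lidar.shape)

for name, dem in [("SRTM", srtm), ("CoastalDEM", coastal_dem)]:
    err = dem - lidar
    rmse = np.sqrt((err ** 2).mean())
    print(f"{name}: mean error {err.mean():.2f} m, RMSE {rmse:.2f} m")
```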

But it does mean that a lot of other cities in the world will be. And a lot of coastline that’s not as well mapped with lidar is in much more serious trouble much sooner.

Let’s look at southern Florida. Here is the legacy model of sea level rise risk by 2050.

Image of southern Florida with sea level rise exposure with legacy data by 2050 in red
Image courtesy climatecentral.org

That looks very reasonable. But let’s look at the adjusted map using CoastalDEM updates.

Image of southern Florida with sea level rise exposure with new CoastalDEM data by 2050 in red
Image courtesy climatecentral.org

Oops. That’s a lot more red. And that red is in a very bad place for southern Florida. Note that it’s not where most of the people are, which is a modicum of good news; instead, it covers a lot of the Everglades, which filter the water flowing into the Biscayne Aquifer, replenishing southern Florida’s fresh water supply.

Let’s look further south, at Key West and its neighboring keys.

Image of Key West with sea level rise exposure with new CoastalDEM data by 2050 in red
Image courtesy climatecentral.org

The Florida Keys are at most 1.8 meters (6 ft) above sea level. Large areas of the Keys, especially the southernmost ones, will be inundated regularly by 2050. Most of the errors in US extreme coastal water level (ECWL) risk assessments are in Florida because of the population density on its coastal plains.

But the United States isn’t where the largest impacts will be. Among other things, US cities were already using accurate lidar data for their ECWL assessments. The problem is for densely populated cities in other countries.

Extreme coastal water level exposure in the Pearl River Delta of China and Bangladesh
Image courtesy climatecentral.org

The darker magenta shows predictions from CoastalDEM alone. The lighter violet shows predictions that both SRTM and CoastalDEM make. The spots of yellow are areas where SRTM errs in the other direction, predicting a threat where CoastalDEM finds none.

There are 42 million people in the Pearl River Delta. Bangladesh, which has a total population of 164 million, saw climate change-exacerbated monsoon flooding in 2017 that inundated a third of the country and displaced 41 million people.

With the new CoastalDEM, machine learning-enabled predictions of extreme coastal water level risk exposure are much higher. Legacy models showed 250 million people at risk by 2050. With CoastalDEM, a hundred million more people are at risk. By 2100, 630 million people will be exposed to regular flooding from tides.

As other articles in this series have highlighted, machine learning is helping us rapidly plan and estimate commercial solar, identify opportunities for planting trees to mitigate climate change, and maintain the purity of the water we use. And, of course, solve a Rubik’s Cube with a one-handed robot. But as this case study shows, it’s also clarifying the level of risk we face from climate change.



Michael Barnard is a climate futurist, strategist, and author. He spends his time projecting scenarios for decarbonization 40-80 years into the future, and assists multi-billion dollar investment funds and firms, executives, boards, and startups to pick wisely today. He is founder and Chief Strategist of TFIE Strategy Inc and a member of the Advisory Board of electric aviation startup FLIMAX. He hosts the Redefining Energy - Tech podcast (https://shorturl.at/tuEF5), part of the award-winning Redefining Energy team.
