Previously I’ve done more detailed case studies of how machine learning is being used in various climate research and clean technology applications. But it’s been far from an exhaustive examination so far, covering formerly over-optimistic coastal digital elevation models, waste stream sorting, commercial rooftop solar panel layout, utility and industrial water quality management, renewables and storage market optimization, building and HVAC efficiency, and the faint hope of optimizing concentrated solar heat generation.
There are many more researchers driving programs that will benefit us and the environment using this core, transformative set of technologies. And so, here is a roundup of several more research initiatives delivering value, both large and small, with neural nets.
First up is Elizabeth Barnes, an assistant professor at Colorado State University who works with the US National Oceanic and Atmospheric Administration (NOAA). As a physics major, she spent her summers researching neutrinos before pivoting to a PhD in atmospheric research. She’s a lead of the NOAA-funded Subseasonal to Seasonal (S2S) Prediction Task Force, which is working on two-week to two-month weather prediction. Not climate. Weather. Not days. Weeks and months. The task force already has a good handle on atmospheric rivers 2-3 weeks in advance, which is key to longer-term weather prediction. Her team’s neural network is also able to identify patterns of forced surface temperature change as early as the 1960s in climate model simulations.
Next is bird migration and the efforts of Kyle G. Horton, Frank A. La Sorte, et al. Their research has used the massive data-crunching abilities of neural networks to find signal among the noise in decades of migration records for hundreds of bird species, along with weather and migration timeframes. Their specific concern, per their published research, is the strong likelihood of a mismatch between the timing of bird migrations and the resources the birds need along their routes, as species adapt at different rates or fail to adapt entirely.
From the sky to the ground we go, to S. M. Piryonesi and Tamer El-Diraby out of Canada. Their research focuses on how climate stressors will change road deterioration rates and types of deterioration over the coming years. Traditional models of road deterioration assume a stable climate, an assumption that is no longer reasonable. They took over 1,000 samples from the Long-term Pavement Performance database, trained their machine learning system, and now make it available via a web browser: interested parties can enter any road segment and get pavement performance index predictions that are roughly 80% accurate years in advance, within the planning horizons of the agencies that maintain the roads.
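The general shape of this kind of work — train on labeled historical samples, then predict a pavement index for new road segments — can be sketched in a few lines. This is a minimal illustration only, not the authors' actual model: the feature names, values, and the plain linear regression below are all invented stand-ins for whatever the published system really uses.

```python
# Hedged sketch: supervised learning on pavement samples, in the spirit of
# (but not taken from) the study. Features and targets are invented.

def fit_linear(samples, targets, lr=0.01, epochs=2000):
    """Fit y ~ w.x + b by plain gradient descent, no libraries needed."""
    n_feat = len(samples[0])
    w = [0.0] * n_feat
    b = 0.0
    n = len(samples)
    for _ in range(epochs):
        grad_w = [0.0] * n_feat
        grad_b = 0.0
        for x, y in zip(samples, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            for i in range(n_feat):
                grad_w[i] += err * x[i]
            grad_b += err
        w = [wi - lr * gw / n for wi, gw in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

# Toy training rows: (pavement age in decades, normalized freeze-thaw index,
# normalized annual precipitation) -> pavement performance index (0-100).
X = [(0.5, 0.2, 0.3), (1.0, 0.4, 0.5), (1.5, 0.7, 0.6), (2.0, 0.9, 0.8)]
y = [90.0, 75.0, 60.0, 45.0]

w, b = fit_linear(X, y)
# Predict the index for a hypothetical road segment between the extremes.
pred = sum(wi * xi for wi, xi in zip(w, (1.2, 0.5, 0.55))) + b
print(round(pred, 1))  # a performance estimate between the training values
```

The real system is trained on over 1,000 LTPP samples and far richer climate features; the point here is only the workflow of fitting on known road histories and querying new segments.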
Staying in Canada, but heading undersea, we find Alireza Rezvanifar, Tunai Porto Marques, et al. They’ve been working on better identification of fish species and numbers from very noisy sonar data.
Typically, analysts take echograms like these and use the squishy neural nets inside their skulls to identify and quantify fish stocks. As I know from my recent reading of Thinking, Fast and Slow, that’s exactly the type of work humans are very good at until we aren’t: hunger, how close the next break is, and how many echograms we’ve already looked at can all create massive disparities in result quality. Machine learning systems, of course, don’t suffer from these all-too-human problems and can look at echograms endlessly with consistent results. The team’s initial efforts are already more accurate than human analysts at identifying and counting herring schools, and are easily extensible to other fish stocks in other regions.
Staying below the surface, but shifting to a US and German team of researchers from three separate universities, Tom Weber, Nicola Wiseman, and Annette Kock, we find oceanic methane emissions. As someone who lives on the edge of the Pacific and has lost track of how many times I’ve crossed it, I know how big and how sparsely instrumented it is, and the stretch between North America and Asia is the best-monitored portion of this massive body of water. Other oceans, such as the Arctic Ocean, are even less well instrumented. This matters more than ever, because the ocean releases methane to the atmosphere not only from anoxic sources, but also via the now better understood and accepted phytoplankton methane-creation pathway. Understanding how much methane the ocean produces, where, and how is important to understanding how human emissions fit into the picture, and it’s not yet as well understood as it could be.
“The methane budget helps us place human methane emissions in context and provides a baseline against which to assess future changes. In past methane budgets, the ocean has been a very uncertain term. We know the ocean naturally releases methane to the atmosphere, but we don’t necessarily know how much.” - Tom Weber, Assistant Professor, Earth and Environmental Sciences, University of Rochester
Machine learning is well suited to this task. “Ground truth” from the well-studied portions of the ocean can be extrapolated to the rest of the world’s oceans with higher precision than previous estimates. The global methane budget used in research and policy efforts will incorporate the results of this study, giving this machine learning effort global ramifications.
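The core idea — learn a mapping from ocean-surface conditions to methane flux where measurements exist, then apply it to unsampled waters — can be sketched with a toy nearest-neighbor model. To be clear, this is an illustration of the general approach, not the authors' method: the predictor names and all numbers below are invented.

```python
# Hedged sketch: fill in an unsampled patch of ocean from its nearest
# well-instrumented analogues. Predictors and flux values are invented.
import math

def knn_predict(train, query, k=2):
    """Average the flux of the k nearest well-instrumented sites."""
    ranked = sorted(train, key=lambda rec: math.dist(rec[0], query))
    return sum(flux for _, flux in ranked[:k]) / k

# (sea-surface temperature degC, chlorophyll proxy) -> methane flux (arbitrary units)
ground_truth = [
    ((25.0, 0.1), 1.0),   # warm, low-productivity gyre
    ((15.0, 0.5), 2.5),   # temperate, moderately productive water
    ((5.0, 0.9), 4.0),    # cold, productive high-latitude water
]

# Estimate flux for an unsampled patch from its nearest analogues.
estimate = knn_predict(ground_truth, (10.0, 0.7))
print(estimate)  # → 3.25, the average of the two nearest sites' fluxes
```

The real study uses much richer predictors and models, but the workflow is the same: sparse, trusted measurements anchor an estimate everywhere else.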
Surfacing again, let’s head to Africa via Penn State. That university has a long-running program focused on improving the results of African small farm agriculture, increasing resiliency and economic success in the sector. It has built the NURU (Swahili for light) app to provide smartphone guidance and insights to farmers, and has now folded in machine learning predictions of short-term crop productivity based on satellite water data and weather reports.
Finally, let’s head back up into the air for this study out of California by Shane R. Coffield, Casey A. Graff, et al., from the University of California, Irvine. Their research, published in the International Journal of Wildland Fire, predicts the final scale of Alaskan wildfires from initial smoke plumes, something with global resonance given the Australian and Asian wildfires and the recent history of North American west coast fires and smoke pall. They applied a multitude of techniques to predict this key measure for resource allocation, including machine learning approaches. A key takeaway, though, is that the most effective measure, at just over 50% accuracy, was a simple decision tree a human could work through in a few minutes. While machine learning is an extraordinary kit of tools and techniques, that doesn’t mean it’s always necessary or better.
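What makes a shallow decision tree so appealing is that it reads as a handful of yes/no questions a dispatcher could answer on paper. The tree below is a hand-traceable illustration of that point only: the split variables and thresholds are invented, not taken from the study.

```python
# Hedged sketch of a shallow decision tree for triaging new ignitions.
# Variables and thresholds are invented for illustration.

def predict_fire_class(vapor_pressure_deficit, days_since_rain):
    """Classify a new ignition as likely 'small' or 'large' final size."""
    if vapor_pressure_deficit > 1.2:      # very dry air at ignition
        if days_since_rain > 10:          # fuels have had time to dry out
            return "large"
        return "small"
    return "small"

print(predict_fire_class(1.5, 14))  # → large
print(predict_fire_class(0.8, 20))  # → small
```

That legibility is exactly the trade-off the researchers highlight: a model a human can execute in minutes, at accuracy competitive with far heavier machinery.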
The varied techniques and technologies in the category of machine learning have become much more accessible to many more researchers and are enabling massive gains in climate and environmental research. Global policy and local budgetary and response practices are increasingly informed by this transformative approach.