Supercomputers To Provide More Reliable Wind Forecasts

Researchers using supercomputers are hoping to deliver more accurate wind forecasts for regions with complex terrain, in an effort to help utility operators better predict when wind energy will be available on the grid.

Supercomputers have long been used to improve the efficiency of wind energy, from innovation and development to wind farm siting and the prediction of weather conditions. As more wind farms are completed around the world, more data becomes available to sharpen models of wind farm performance and to give utility operators better information for predicting future weather conditions.

Station locations and lists of instruments deployed within the Columbia River Gorge, Columbia River Basin, and surrounding region. Credit: James Wilczak, NOAA

The latest such endeavor is being conducted by a research team led by the US National Oceanic and Atmospheric Administration (NOAA), which is running simulations at the Argonne Leadership Computing Facility (ALCF), a US Department of Energy (DOE) Office of Science User Facility, in an effort to develop numerical weather prediction models that can provide more accurate wind forecasts in areas where a wind farm is located amidst complex terrain. Specifically, the team is gathering data and running simulations for the Columbia River Gorge region, which spans Washington and Oregon and collectively generates approximately 4,500 megawatts (MW) of wind energy. The region was chosen because its difficult terrain and topography result in highly variable and currently unpredictable weather conditions, which in turn create problems for the utility operators who need to accurately predict the amount and timing of wind energy delivery to the electricity grid.

Further, unpredictable wind conditions force utility operators to rely on steady energy sources, which in this region means coal and nuclear power. As a result, either wind energy is wasted, or the output of the more reliable but dirtier and less flexible coal and nuclear plants is wasted if and when wind energy is delivered to the grid.

“Our goal is to give utility operators better forecasts, which could ultimately help make the cost of wind energy a little cheaper,” said lead model developer Joe Olson of NOAA. “For example, if the forecast calls for a windy day but operators don’t trust the forecast, they won’t be able to turn off coal plants, which are releasing carbon dioxide when maybe there was renewable wind energy available.”

With data gathered from the Columbia River Gorge region, the researchers then turn to Mira, the ALCF’s 10-petaflop IBM Blue Gene/Q supercomputer, which serves to “increase resolution and improve physical representations to better simulate wind features in national forecast models.”

The research team needs a model with physical parameters at 750-meter resolution, a 16-fold increase in resolution that requires a large amount of real-world data.
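For a rough sense of what that jump means computationally, the sketch below is a back-of-envelope illustration (not from the article), assuming the baseline is the roughly 3-kilometer grid spacing of NOAA's operational HRRR model mentioned later in this piece and a hypothetical 100 km x 100 km domain: quartering the horizontal grid spacing multiplies the number of grid columns by about 16, before accounting for the shorter time steps a finer grid also demands.

```python
# Back-of-envelope sketch (not from the article): how horizontal grid-column
# counts scale when grid spacing shrinks from ~3 km (operational HRRR) to 750 m.
# The 100 km x 100 km domain size is a made-up example value.

def columns(domain_km: float, spacing_km: float) -> int:
    """Number of horizontal grid columns covering a square domain."""
    per_side = int(domain_km / spacing_km)
    return per_side * per_side

domain_km = 100.0                    # hypothetical square domain edge length
coarse = columns(domain_km, 3.0)     # ~3 km spacing
fine = columns(domain_km, 0.75)      # 750 m spacing

print(f"3 km grid:  {coarse:,} columns")
print(f"750 m grid: {fine:,} columns")
print(f"ratio:      {fine / coarse:.0f}x")   # ~16x more columns
```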

“We haven’t been able to identify all the strengths and weaknesses for wind predictions in the model because we haven’t had a complete, detailed dataset,” Olson said. “Now we have an expansive network of wind profilers and other weather instruments. Some are sampling wind in mountain gaps and valleys, others are on ridges. It’s a multiscale network that can capture the high-resolution aspects of the flow, as well as the broader mesoscale flows.”

How much data are they expecting to take in? With more than 20 environmental sensor stations throughout the Columbia River Gorge region, most of them delivering data every 10 minutes over a span of 18 months (from October 2015 through March 2017), the collected data will ultimately add up to about half a petabyte: 500 terabytes, or 500,000 gigabytes.
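As a quick sanity check, this back-of-envelope sketch (my own arithmetic, not a published figure) works out the average data rate those numbers imply; the station count and month length are rounded assumptions.

```python
# Back-of-envelope (not from the article): average data rate implied by
# ~500 TB collected over 18 months from ~20 stations reporting every 10 minutes.

TOTAL_TB = 500
MONTHS = 18
STATIONS = 20                      # "more than 20" in the article; 20 used here
REPORTS_PER_DAY = 24 * 60 // 10    # one report every 10 minutes = 144 per day

days = MONTHS * 30                 # rough month length
total_reports = STATIONS * REPORTS_PER_DAY * days
avg_gb_per_report = (TOTAL_TB * 1000) / total_reports

print(f"total reports:       {total_reports:,}")
print(f"average per report:  {avg_gb_per_report:.2f} GB")
print(f"per station per day: {avg_gb_per_report * REPORTS_PER_DAY:.0f} GB")
```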

That sort of data requires a supercomputer like Mira, not least because the team is hoping to simulate about 20 models.

“We wanted to run on Mira because ALCF has experience using HRRR for climate simulations and running ensemble jobs that would allow us to compare the models’ physical parameters,” said Rao Kotamarthi, chief scientist and department head of Argonne’s Climate and Atmospheric Science Department. “The ALCF team helped to scale the model to Mira and instructed us on how to bundle jobs so they avoid interrupting workflow, which is important for a project that often has new data coming in.”
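To illustrate what bundling ensemble jobs can look like in practice, here is a minimal, hypothetical sketch; the physics option names, bundle size, and grouping scheme are placeholders for illustration, not the team's actual configuration or workflow.

```python
# Hypothetical sketch: enumerate an ensemble of physics configurations and
# group them into bundles that could be launched in a single large submission.
# All option names and the bundle size below are illustrative placeholders.
from itertools import product

boundary_layer_schemes = ["scheme_A", "scheme_B"]
land_surface_options = ["option_1", "option_2"]
horizontal_resolutions_m = [3000, 750]

members = [
    {"pbl": pbl, "land": land, "dx_m": dx}
    for pbl, land, dx in product(
        boundary_layer_schemes, land_surface_options, horizontal_resolutions_m
    )
]

BUNDLE_SIZE = 4  # ensemble members launched together in one submission

bundles = [members[i : i + BUNDLE_SIZE] for i in range(0, len(members), BUNDLE_SIZE)]

for n, bundle in enumerate(bundles):
    print(f"bundle {n}: {len(bundle)} members")
    for m in bundle:
        print(f"  pbl={m['pbl']} land={m['land']} dx={m['dx_m']} m")
```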



Joshua S Hill

I'm a Christian, a nerd, a geek, and I believe that we're pretty quickly directing planet-Earth into hell in a handbasket! I also write for Fantasy Book Review (.co.uk), and can be found writing articles for a variety of other sites. Check me out at about.me for more.
