Finding Dust in the Wind

Little by little the sky was darkened by the mixing dust, and the wind felt over the earth, loosened the dust, and carried it away. … the dust lifted up out of the fields and drove gray plumes into the air like sluggish smoke. (John Steinbeck, The Grapes of Wrath)

Dust can be a minor irritation, like the dust bunnies under your bed, or it can wreak havoc, as chronicled in Steinbeck’s 1930s novel. Winds can swirl dust up into tremendous plumes that stream like rivers through the atmosphere from continent to continent. Then the dust sifts down from the sky, permeating everything it touches. In such proportions, dust can impact air quality, human health, weather patterns, agriculture, and transportation. Farmers, hospital and airport officials, and others at threatened locations need sufficient lead time to prepare.

Earth scientists at NASA’s Marshall Space Flight Center are using data science to help forecasters more efficiently identify dust events. This venture could improve timeliness and accuracy of public dust storm warnings.

“The latest Earth observation satellites generate many terabytes of data per day,” says principal investigator Emily Berndt. “Finding ways to sift through all the data and detect features and hazards efficiently is key to harnessing its benefits. As scientists, we know which satellite data can show us dust, but can we use machine learning to identify dust in the imagery before our eyes can?”

Her team is showing the answer is yes.

Berndt is remote sensing lead for SPoRT, NASA’s Short-term Prediction Research and Transition center. The SPoRT team conducts research toward improving operational decision making and develops ways for weather forecasters and others to use satellite observations in their everyday operations.

SPoRT has a history of providing the National Weather Service with NASA satellite imagery enhanced to improve dust forecasting. They first used images from NASA’s Moderate Resolution Imaging Spectroradiometer, or MODIS, and NASA/NOAA’s Visible Infrared Imaging Radiometer Suite, or VIIRS. These instruments reside on polar-orbiting satellites, providing imagery twice a day.

“Using this enhanced dust imagery prepared forecasters for using the next-generation Advanced Baseline Imager aboard the latest NASA/NOAA Geostationary Operational Environmental Satellites (GOES-16, GOES-17, and GOES-18),” notes Berndt.

The ABI has 16 channels and images Earth’s weather, oceans, and environment as often as every 30 seconds. The imagery is spectacular, but the “embarrassment of riches” presents challenges.

“It’s difficult to synthesize that much information into a single image forecasters can easily interpret,” explains Berndt. “Clouds, smoke, and darkness make it hard for the sensor to differentiate dust from other elements it’s imaging. And as Earth’s surface cools at night to a temperature similar to that of the dust, the sensor has trouble distinguishing the dust from the surface.”


Enhanced dust imagery from GOES-16 on March 17 at 3:46 pm (left), and March 18 at 8:46 pm (center) and 12:46 am (right). Dust, typically magenta in these images, becomes difficult to detect as nighttime progresses.

Until recently, the SPoRT team had little experience in machine learning, but they knew it might help. Right down the hall sat data scientists from IMPACT, the Interagency Implementation and Advanced Concepts Team. Machine learning is one of their specialties.

With machine learning, a computer model can be trained to detect subtleties in an image. SPoRT’s idea was to use machine learning to train a dust-detection model, improving its accuracy.

“IMPACT helped us understand which types of machine learning models to test and how to evaluate our training dataset,” says Berndt.

Their approach is supervised machine learning, meaning the model is created using large amounts of training data (many different labeled images) to teach it to recognize patterns. This equips the model to make decisions on its own, classifying newly received real-time data as dust or not dust.
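As a purely illustrative sketch of supervised classification on pixel data, the toy example below trains a minimal logistic-regression classifier on synthetic “pixel” features and then scores a new pixel. All the feature values, sizes, and numbers here are invented; they do not reflect SPoRT’s actual model or GOES data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pixels": three made-up spectral features per pixel.
# Dust (label 1) and non-dust (label 0) pixels are drawn from
# slightly separated distributions -- illustrative only.
n = 500
dust     = rng.normal(loc=[1.5, -0.5, 2.0], scale=1.0, size=(n, 3))
not_dust = rng.normal(loc=[-1.0, 0.5, -1.5], scale=1.0, size=(n, 3))
X = np.vstack([dust, not_dust])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Minimal logistic regression trained by gradient descent.
w = np.zeros(3)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted dust probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Classify a "new" real-time pixel: probability > 0.5 -> dust.
new_pixel = np.array([1.2, -0.3, 1.8])
prob = 1.0 / (1.0 + np.exp(-(new_pixel @ w + b)))
print(f"dust probability: {prob:.2f}")
```

The same pattern scales up: a real model sees many channels per pixel and far more labeled examples, but the train-then-classify loop is the same.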

“The model is only as good as how you train it,” notes Berndt.

SPoRT identified all kinds of dust events: strong events driven by cold fronts, weaker lofted-dust events, short- and long-duration events, and events occurring at different times of night. Then they labeled the GOES-16 satellite imagery and used it as a dataset to “teach” the model to distinguish dust from the Earth’s surface, smoke, and cloud tops.
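Conceptually, each labeled scene becomes a set of per-pixel (features, label) training pairs. The sketch below shows that bookkeeping with tiny made-up arrays; the real dataset spans many GOES-16 scenes with analyst-drawn dust labels.

```python
import numpy as np

# Illustrative stand-ins for one labeled scene: a multichannel image
# (height x width x channels) and a hand-labeled dust mask
# (1 = dust, 0 = not dust). Shapes here are tiny for clarity.
rng = np.random.default_rng(1)
image = rng.normal(size=(4, 4, 16))    # 16 ABI-like channels per pixel
mask  = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1                     # a small labeled dust plume

# Flatten into per-pixel (features, label) pairs for supervised training.
X = image.reshape(-1, image.shape[-1])  # one row of channel values per pixel
y = mask.reshape(-1)                    # one label per pixel

print(X.shape, y.shape, int(y.sum()))   # (16, 16) (16,) 4
```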

While the computer has been learning, the humans have been busy.

“We’re improving the machine learning model for identifying dust both night and day,” says Berndt. “We had forecasters at six National Weather Service forecast offices in the southwestern US assess the model and give feedback.”

One of many comments: “I found the ML output very useful for tracking the plume….”

Of note was a March 17 event. Dry weather and high winds sparked a 40,000-acre fire in Eastland County, Texas. Forecasters were tracking a complex situation where both a dust event and fire were occurring.

The Midland, Texas, weather forecast office noted: “The ML probabilities matched observations and supplemented our decision making when issuing a Blowing Dust Advisory [and] our briefing to Emergency Managers dealing with an ongoing large wildfire. The fact that it matched observations early on greatly increased our confidence in the ML model output and future trends into the evening.”


Machine learning dust probability overlaid on enhanced dust imagery from GOES-16 on March 17 at 3:46 pm (left), and March 18 at 8:46 pm (center) and 12:46 am (right). Machine learning dust probabilities helped forecasters track the dust plume longer into the nighttime compared to using the imagery alone (compare to the previous images).
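One way per-pixel probabilities like these can feed a go/no-go decision is to threshold them and check how much of the scene is confidently flagged. The cutoffs below are invented for illustration and are not SPoRT or National Weather Service criteria.

```python
import numpy as np

# Hypothetical per-pixel dust probabilities for one scene (values in [0, 1]);
# random here purely so the example runs standalone.
rng = np.random.default_rng(2)
probs = rng.uniform(size=(100, 100))

likely_dust = probs > 0.7      # mask of confident dust pixels
coverage = likely_dust.mean()  # fraction of the scene flagged as dust

# A forecaster-style summary: flag the scene if enough pixels are confident.
flag = coverage > 0.1
print(f"coverage: {coverage:.2f}, advisory-worthy: {flag}")
```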

The SPoRT team is reviewing all the feedback and dust cases. “The hits and misses will help us understand our model’s strengths and limitations and guide us in expanding the training dataset and making improvements,” explains Berndt. “After running the model in near-real-time this spring, we have a better sense of types of events we’re missing, false alarms, and when our model is performing well.”

