Supercomputing Shakes Up Earthquake Awareness

Scientists say it’s time to put more sensors in the ground so today’s supercomputers can help us better understand earth movements and prepare for the next Big One. 

People in the San Francisco Bay area are often warned to prepare for the “Big One,” but few imagine that a supercomputer might save their lives.

This year marks the 25th anniversary of the devastating Loma Prieta earthquake, which registered a moment magnitude of 6.9. Since then, earthquake preparedness has evolved, leading to seismically sound bridges and buildings, but scientists still don’t know how much damage the next big quake will cause.


Now, a global team of geologists, mathematicians and computer scientists is getting closer to an answer by using a supercomputer to mimic a complex quake involving multiple fault lines. Their work has big implications for how people study and prepare for disasters.

Alexander Heinecke, a researcher at Intel Labs who led the effort, said it demonstrates what’s possible when you combine detailed data and powerful processing. The work was nominated this year for the prestigious Gordon Bell Prize, which is given for achievement in high-performance computing.

Earthquakes are difficult to model because of the hundreds of factors that determine their size and severity. Heinecke, a computer scientist by training, says he was drawn to the “grand challenge” of seeing whether a supercomputer could make sense of all of the different variables.

For his project, Heinecke picked the magnitude 7.3 Landers earthquake that struck the Mojave Desert in 1992. Because the area is so sparsely populated, the quake caused little loss of life, yet it’s regarded as one of the most instructive earthquakes on record because the main shock triggered five adjacent faults, one after the other. Recording stations in the desert captured a wealth of data that gave the team a reference point.


To simulate the earthquake, Heinecke and his team built 3D models of the earth and tested different scenarios. The team used the Tianhe-2 supercomputer, currently the fastest in the world, to simulate ground motion with unprecedented levels of accuracy.

The supercomputer, developed by China’s National University of Defense Technology, is built with 48,000 Intel Xeon Phi coprocessors. Using just half of Tianhe-2, the team’s earthquake simulation code performed 8.6 quadrillion calculations per second. The average computer performs about 100 million calculations per second.
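
For a rough sense of that gap, here is a back-of-the-envelope comparison in Python, a minimal sketch that uses only the figures quoted above (rounded article numbers, not official benchmark results):

    # Compare half of Tianhe-2's simulation rate with an "average" computer,
    # using only the figures quoted in this article.
    tianhe2_half_rate = 8.6e15   # calculations per second (8.6 quadrillion)
    average_pc_rate = 100e6      # calculations per second (100 million)

    speedup = tianhe2_half_rate / average_pc_rate
    print(f"Roughly {speedup:,.0f} times faster")   # about 86,000,000x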

Heinecke’s team ran the rupture simulation repeatedly until it learned how the fault lines lie within the earth.

The work is gaining notice among earthquake scientists.

“Ten years ago, computers were preventing us from understanding the ground,” said Brad Aagaard, a research geophysicist for the US Geological Survey (USGS).

“Today, the limiting factor is not computing power but our understanding of the earth. We need more observations and sensors out there.”

That’s starting to happen, too. New types of seismic data collection include accelerometers placed in buildings and in the ground to measure shaking forces.


California Memorial Stadium at the University of California, Berkeley, sits right on top of the Hayward Fault. It underwent a $321 million seismic retrofit in 2010.

The University’s Seismological Laboratory created an app called MyQuake, which calculates the shaking intensity at the phone’s location during earthquakes.
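
As an illustration of the general idea (not MyQuake’s actual algorithm, which isn’t described here), an app like this can reduce a phone’s accelerometer readings to a peak acceleration and map it onto a coarse shaking scale. The sketch below is hypothetical: the thresholds and sample data are made up for illustration only.

    # Illustrative sketch only: thresholds and sample data are hypothetical,
    # not taken from MyQuake or any published intensity scale.
    def peak_ground_acceleration(samples_g):
        # Largest absolute acceleration, in units of g, over a window of samples.
        return max(abs(s) for s in samples_g)

    def shaking_category(pga_g):
        # Map peak acceleration to a coarse description of shaking.
        if pga_g < 0.005:
            return "not felt"
        if pga_g < 0.02:
            return "weak"
        if pga_g < 0.1:
            return "moderate"
        if pga_g < 0.3:
            return "strong"
        return "severe"

    samples = [0.001, -0.004, 0.03, -0.12, 0.07]   # made-up accelerations in g
    pga = peak_ground_acceleration(samples)
    print(f"Peak acceleration {pga:.2f} g: {shaking_category(pga)} shaking")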

Aagaard hopes that, with better ground-shaking forecasts, architects will be able to design buildings that can survive even the biggest earthquakes.

As supercomputing evolves and earthquake simulation becomes more accurate, it could have a big impact on the design of buildings in quake-prone areas like San Francisco.

“When the ground moves during an earthquake, it affects buildings and the systems in them,” said Stephen Mahin, a professor of structural engineering at the University of California, Berkeley.

“How the building moves will change the ground, and affect how the next building shakes.”


With more complex buildings going up in San Francisco, researchers are using supercomputing not only to analyze how tall buildings move but also to model how earthquakes affect entire cities. Such rich detail and high-level analysis requires millions of calculations, just the sort of work supercomputers are built for.

Simulations are also helping engineers determine whether a building’s foundation needs to be driven deep into bedrock or spread wide to grip more soil.

So what about the Big One? The Hayward Fault, the main threat to the Bay Area, ruptures every 140 years or so, and last did so in 1868, more than 145 years ago. So it’s due. Six other nearby faults could also be triggered by a major quake.

Right now, the biggest obstacle is data. Scientists can prepare for the Big One by simulating every plausible scenario, but until the Hayward Fault actually ruptures, revealing how much of it breaks and where, no one will know for sure.

“In working on earthquake simulations, the best you can achieve is reasonably accurate scientific outcomes. You can never perfectly simulate real-world outcomes,” Heinecke said.

Feature image of Loma Prieta quake by USGS / Bay Bridge earthquake photo by Joe Lewis, CIR Online


This series explores new ways we use information to empower ourselves, the environment and people around us. We look at how data collected by everything from smartwatches to smart cities is leading to better living through big data.




