As it does every year, fall signals the start of flu and COVID season. It’s also a good time to remember the important role supercomputers play in fighting disease. Modeling disease transmission is a classic use of computing power; you’ve probably seen a movie where everyone stares at a big screen as red transmission routes rapidly spread across the globe. Because the way diseases move through a population is so well studied, these models grow more accurate as research continues, and they are constantly refined as new data comes in. The Centers for Disease Control and Prevention (CDC) relies on a range of resources to build the most accurate simulations possible, and when it comes to COVID, ACCESS resources have been an integral part of the process.
Madhav Marathe, the director of the Network Systems Science and Advanced Computing (NSSAC) division of the Biocomplexity Institute and Initiative at the University of Virginia, is part of a research group that used Purdue’s Anvil supercomputing cluster to build a relatively new kind of pandemic simulation. In 2004, Marathe and his team spearheaded agent-based modeling of epidemics, an approach that treats individual differences between members of a population as major factors shaping how a pandemic unfolds.
“The idea is,” says Marathe, “if I want to understand how a disease spreads in a population, then I want to understand the social network that underlies this population because I want to capture the underlying heterogeneity that exists. Compartmental mass-action models treat everyone as identical. They lose all aspects of heterogeneity and asymmetry that exist in the real world.”
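To make that contrast concrete, here is a minimal, hypothetical Python sketch of the agent-based idea Marathe describes: each agent has its own contact list, and infection can only travel along those contacts, so individual differences in connectivity shape the outbreak. The population size, contact counts, rates, and network construction below are invented for illustration and are not the NSSAC team’s actual model or code.

```python
import random

# Toy agent-based SIR simulation on a random contact network.
# Illustrative sketch only -- NOT the NSSAC model; all parameters are made up.

random.seed(42)

N = 2000            # number of agents (hypothetical)
MEAN_CONTACTS = 8   # target contacts per agent (hypothetical)
P_TRANSMIT = 0.04   # per-contact, per-day transmission probability
P_RECOVER = 0.10    # per-day recovery probability
DAYS = 120

# Build a simple random contact network: each agent keeps its own set of contacts.
contacts = [set() for _ in range(N)]
for i in range(N):
    while len(contacts[i]) < MEAN_CONTACTS:
        j = random.randrange(N)
        if j != i:
            contacts[i].add(j)
            contacts[j].add(i)

# Per-agent state is what captures the heterogeneity a compartmental model averages away.
state = ["S"] * N
for seed in random.sample(range(N), 5):  # a handful of initial infections
    state[seed] = "I"

for day in range(DAYS):
    new_state = state[:]
    for i in range(N):
        if state[i] == "I":
            # Transmission can only happen along this agent's actual contacts.
            for j in contacts[i]:
                if state[j] == "S" and random.random() < P_TRANSMIT:
                    new_state[j] = "I"
            if random.random() < P_RECOVER:
                new_state[i] = "R"
    state = new_state
    if day % 20 == 0:
        print(f"day {day:3d}: {state.count('I')} infected, {state.count('R')} recovered")
```

A compartmental mass-action model would collapse all of this into a few population-wide rates applied uniformly to everyone, which is exactly the loss of heterogeneity Marathe describes. Tracking each agent and its contacts is what makes the approach more realistic, and also what makes it so much more computationally demanding at national scale.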
While including this kind of data is ideal and yields a far more accurate picture of how human behavior affects disease mitigation, it also drives up the computational power needed to run the simulations. When trying to predict the course of an ongoing pandemic, time is of the essence, and results are needed quickly.
We need high-performance computing systems…We were here to support an evolving pandemic in terms of operational response. This is not just a science exercise where we can wait for a while, run some experiments, do some analysis, make some interesting scientific findings, etc.… And time is absolutely critical. If they need an answer in three days, then they need an answer in three days. After seven days, the results may not be that useful. So what we needed was access to [HPC] machines that were relatively easy to use, flexible in their usage, and could support this operational real-time requirement.
Madhav Marathe, director, Network Systems Science and Advanced Computing (NSSAC), University of Virginia
The Anvil team at Purdue’s Rosen Center for Advanced Computing (RCAC) received high praise for their work with the research team at NSSAC. Dustin Machi, the senior software architect at NSSAC, said, “Anvil was just nice to use — it was easy to get access to, and the team was great to work with, so we were able to get a configuration set that worked for our style of pipeline really well.”
You can read more about this story here (published May 9): Purdue’s Anvil Supercomputer Assists with COVID-19 Pandemic Response.
If you are on a research team and you think your work could benefit from cyberinfrastructure resources, you can learn more about ACCESS resources and the allocation process here.
Project Details
Resource Provider Institution: Rosen Center for Advanced Computing (RCAC)
Affiliations: University of Virginia, Network Systems Science and Advanced Computing (NSSAC), Biocomplexity Institute and Initiative
Funding Agency: NSF
Grant or Allocation Number(s): 2005632
The science story featured here was enabled by the ACCESS program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.