ACCESS at PEARC24

By Megan Johnson, NCSA

This year, the Practice and Experience in Advanced Research Computing (PEARC) conference will be held in Providence, RI, from Sunday, July 21 to Thursday, July 25. The PEARC conference is dedicated to providing a forum for the research community to discuss challenges and opportunities. If you’re attending PEARC24, you’ll see ample ACCESS representation. ACCESS will be at Booth 11 in the Exhibitor Hall, where you can stop by and speak with ACCESS staff about the program and be the first to pick up its Plan Year 2 Highlights Book. Below is the list of ACCESS-affiliated presentations with times and locations to add to your calendar. Descriptions of each event are abridged. To learn more, you can find complete descriptions on the full PEARC agenda here. All times listed are Eastern Time (ET). We look forward to seeing you there!


Monday, July 22

Led by NCSA’s Bruno Abreu and Santiago Núñez-Corrales, this workshop will provide participants with a comprehensive understanding of the current status and the prospects of quantum computing (QC) and its applications, focusing on how it can benefit the broader community interested in integrating quantum technologies into their traditional research computing facilities. Participants will engage in interactive discussions and real-world case studies, fostering a collaborative environment for knowledge exchange that will mark the onset of quantum computing awareness in the PEARC Conference Series. The workshop’s agenda can be found here.


The Accelerating Computing for Emerging Sciences (ACES) computing platform, funded by NSF’s ACCESS program and hosted at Texas A&M University, features a variety of high-performance computing accelerators, including the Intel Data Center GPU Max 1100 (PVC-GPU). This tutorial will instruct participants on how to access ACES’ PVC-GPUs and demonstrate how they excel with Artificial Intelligence/Machine Learning (AI/ML) and molecular dynamics workflows. The Open OnDemand (OOD) portal facilitates access, and a series of hands-on exercises will introduce AI/ML models using PyTorch and TensorFlow frameworks, and molecular dynamics simulation benchmarks with LAMMPS. Participants will learn how to modify TensorFlow and PyTorch models for use with the ACES Intel PVC GPUs and run molecular dynamics simulations on them.
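For a sense of the kind of change the hands-on portion covers, here is a minimal, hypothetical PyTorch sketch that targets an Intel GPU through the Intel Extension for PyTorch (the “xpu” device); the model, shapes and hyperparameters are illustrative placeholders rather than the tutorial’s actual materials.

```python
# Minimal sketch: moving a PyTorch model to an Intel Data Center GPU ("xpu")
# via Intel Extension for PyTorch. The model, shapes and hyperparameters are
# illustrative placeholders, not the ACES tutorial's materials.
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device backend

model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# The key change from a CPU/CUDA script: target the "xpu" device.
device = torch.device("xpu" if torch.xpu.is_available() else "cpu")
model = model.to(device)

# Optional: let IPEX apply device-specific optimizations.
model, optimizer = ipex.optimize(model, optimizer=optimizer)

x = torch.randn(32, 64, device=device)
y = torch.randint(0, 10, (32,), device=device)

loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```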


This tutorial presents (and users will practice with) supercomputer tools specifically designed for complex user environments (Lmod, Sanity Tool), tools for workflow management (ibrun, launcher, launcher-GPU, Pylauncher), tools for job monitoring and profiling (Remora, TACC-Stats, core_usage, amask, etc.) and several other convenient tools. Attendees will learn the function and operation of these tools so they can make their supercomputer processing more transparent and benefit from these intelligible, easy-to-use, powerful tools. Detailed hands-on exercises, developed from years of feedback, provide use-case scenarios for these tools. Exercises will be performed on the Stampede3 and/or Frontera supercomputers at the Texas Advanced Computing Center (TACC).


This tutorial caters to both users and facilitators eager to deepen their understanding of High-Throughput Computing (HTC). HTC specializes in managing workloads that involve a vast number of small, individual tasks. This tutorial explores a collection of essential HTC tools and methodologies. These include the Pegasus Workflow Management System, which simplifies the process of managing HTC tasks, and the OSG OSPool, a comprehensive distributed computing platform. Additionally, it will cover the practical aspects of deploying HTC workloads on ACCESS Resources, specifically focusing on platforms like PSC Bridges2 and IU JetStream2. This tutorial aims to provide a thorough overview, equipping you with the knowledge to effectively utilize HTC in your projects. Pegasus, a workflow management system, is now an integral part of the ACCESS Support offerings.
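As a rough illustration of the workflow-definition side of the tutorial, the sketch below uses the Pegasus 5.x Python API to describe a two-step workflow; the executables, file names and site settings are hypothetical, and planning or submitting to OSPool or ACCESS resources would require catalogs and configuration not shown here.

```python
# Minimal sketch of the Pegasus 5.x Python API: a two-step workflow whose
# abstract description is written to YAML. The paths, file names and the
# "preprocess"/"analyze" executables are hypothetical placeholders.
from Pegasus.api import Workflow, Job, File, Transformation, TransformationCatalog

# Declare the executables (transformations) the jobs will run.
preprocess = Transformation("preprocess", site="condorpool",
                            pfn="/usr/local/bin/preprocess", is_stageable=False)
analyze = Transformation("analyze", site="condorpool",
                         pfn="/usr/local/bin/analyze", is_stageable=False)
tc = TransformationCatalog().add_transformations(preprocess, analyze)

raw = File("raw.dat")
clean = File("clean.dat")
result = File("result.dat")

wf = Workflow("htc-example")
wf.add_transformation_catalog(tc)
wf.add_jobs(
    Job(preprocess).add_args("-i", raw, "-o", clean)
                   .add_inputs(raw).add_outputs(clean),
    Job(analyze).add_args("-i", clean, "-o", result)
                .add_inputs(clean).add_outputs(result),
)
wf.write("workflow.yml")  # plan and submit separately against a configured site
```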


The session leaders, all part of the Open OnDemand development team, will begin the tutorial with a short overview of Open OnDemand. They’ll spend the first part of the tutorial demoing the features of OOD. The second half will be spent on examples of customizing OOD and configuring interactive apps. It will end with an overview of the development roadmap for Open OnDemand and a discussion with tutorial participants regarding their specific needs and communities.


The Campus Champions (CC) community has been functioning as an independent entity while partnering with other entities in the Research Computing and Data (RCD) ecosystem. These partnerships foster a dynamic and connected community of advanced research computing professionals that promote leading practices at the frontiers of research, scholarship, teaching and industry application. This Campus Champions-coordinated workshop will cover:

  1. Presentation of the new targets and goals for the CC community based on our new mission statement 
  2. Activities and strategies for growing the CC role
  3. Leveraging interactions with synergistic communities such as ACCESS and Coalition for Academic Scientific Computation (CASC) 
  4. Investigating funding opportunities for the Campus Champions’ collaborative activities

High-resolution imaging instruments such as cryogenic electron microscopes and synchrotron beamlines require automation of data flows to increase throughput and to keep the instrument highly utilized. Combined with the increasingly collaborative nature of research, this necessitates infrastructure that makes the resulting data products more FAIR – findable, accessible, interoperable and reusable. Presentation topics include scenarios from research universities and national facilities that illustrate common use cases, highlights of recurring researcher requirements and descriptions of the solutions that were developed in response. Attendees will have the opportunity to experiment with services and tooling used in these solutions.


This half-day tutorial introduces the Globus Platform: a suite of APIs and cloud-hosted services designed to simplify development and maintenance of data-intensive research applications. The platform is offered as a cloud-hosted service by the University of Chicago, so it can be used by researchers anywhere without installation or maintenance.
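As a taste of what building on the platform looks like, here is a minimal sketch using the Globus Python SDK (globus_sdk) to submit a transfer between two collections; the collection UUIDs, paths and token handling are placeholders, and a real application would obtain tokens through a Globus Auth login flow.

```python
# Minimal sketch: submitting a file transfer with the Globus Python SDK.
# The collection UUIDs, paths and access token are placeholders.
import globus_sdk

TRANSFER_TOKEN = "..."          # placeholder: token from a Globus Auth flow
SRC_COLLECTION = "SRC-UUID"     # placeholder collection/endpoint IDs
DST_COLLECTION = "DST-UUID"

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

task = globus_sdk.TransferData(
    tc, SRC_COLLECTION, DST_COLLECTION, label="research data flow example"
)
task.add_item("/instrument/run42/", "/project/run42/", recursive=True)

result = tc.submit_transfer(task)
print("submitted transfer task:", result["task_id"])
```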


This half-day tutorial will provide an introduction to and guided exercises for using Jetstream2. Jetstream2 is a flexible, user-friendly cloud computing environment designed for everyone – from researchers with minimal high-performance computing experience to software engineers looking for the latest in cloud-native approaches. Attendees will learn details about Jetstream2, including use cases and best practices, as well as information about the U.S. National Science Foundation’s ACCESS program, which supports Jetstream2. By the end of this tutorial, attendees will be able to log into Jetstream2, create and launch a virtual machine, attach a volume for data storage and know how to apply best practices for instance management.

Note: Attendees will need to register for an ACCESS user ID prior to this tutorial in order to participate in the hands-on activities.
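Most attendees will work through these steps in Jetstream2’s web interfaces, but because Jetstream2 exposes an OpenStack API, the same steps can also be scripted. The hypothetical sketch below uses the openstacksdk library; the cloud entry, image, flavor, network and key names are placeholders, not the tutorial’s exact settings.

```python
# Sketch of the steps this tutorial covers (launch an instance, attach a
# volume), scripted against Jetstream2's OpenStack API with openstacksdk.
# The cloud name, image, flavor, network and key names are placeholders and
# assume credentials are already configured in clouds.yaml.
import openstack

conn = openstack.connect(cloud="jetstream2")  # placeholder clouds.yaml entry

# Launch a small instance (names are illustrative).
server = conn.create_server(
    name="demo-instance",
    image="Featured-Ubuntu22",
    flavor="m3.small",
    key_name="my-key",
    network="auto_allocated_network",
    wait=True,
)

# Create a 10 GB volume for data storage and attach it to the instance.
volume = conn.create_volume(size=10, name="demo-data", wait=True)
conn.attach_volume(server, volume)

print("instance", server.name, "is", server.status)
```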


In addition to the many general-purpose HPC and GPU resources available to the advanced research computing (ARC) community, supercomputers and resources built on novel accelerator hardware are now being built and made available to the science and engineering community. Presenters will discuss the ideas behind such hardware, project plans developed jointly with the vendors of these systems and the architecture of such machines, including processor cores, accelerators, interconnects, file systems, etc. Attendees will get insights into how the systems are managed, what cluster management tools and approaches are used, what system monitoring tools are used, how the I/O subsystems are architected and managed, what batch systems are used and how user accounts and allocations are managed for researchers.


This half-day tutorial will introduce participants to Globus Compute, a federated Function as a Service (FaaS) platform that provides fire-and-forget execution across the computing continuum. The intuitive FaaS interface makes it easy for users to define workloads (as a collection of Python functions) and to scale execution. The tutorial will include four sessions, with hands-on components to use Globus Compute, install endpoint software on various resources and implement a modern machine learning application.
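To give a flavor of the fire-and-forget pattern the tutorial teaches, here is a minimal sketch using the Globus Compute SDK’s Executor interface; the endpoint UUID is a placeholder and the workload is a toy function.

```python
# Minimal sketch of the Globus Compute "fire-and-forget" pattern: submit a
# Python function to a remote endpoint and collect the result as a future.
# The endpoint UUID is a placeholder for an endpoint you have access to.
from globus_compute_sdk import Executor

def estimate_pi(num_samples: int) -> float:
    """Toy workload: Monte Carlo estimate of pi (runs on the remote endpoint)."""
    import random
    inside = sum(
        1 for _ in range(num_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / num_samples

ENDPOINT_ID = "YOUR-ENDPOINT-UUID"  # placeholder

with Executor(endpoint_id=ENDPOINT_ID) as gce:
    future = gce.submit(estimate_pi, 1_000_000)
    print("pi ~", future.result())
```

Because submit returns a standard future, the same pattern scales naturally to many tasks submitted in a loop.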


Cybertraining materials abound, but they can be difficult to find, and often have little information about the quality or relevance of offerings. The HPC-ED pilot is building a platform that gives resource providers, campus portals, schools and other institutions the ability to both incorporate training from multiple sources into their own familiar interface and publish their locally curated training materials to the greater cyberinfrastructure community. In this three-hour tutorial, participants will walk through examples of how to share training material by publishing metadata to the HPC-ED cybertraining discovery platform, and how to enhance their local project or institutional researcher support portals by selecting and incorporating appropriate material found in the discovery platform.
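The HPC-ED catalog is built on Globus Search, so publishing a record programmatically could look roughly like the hypothetical sketch below, which uses the Globus Python SDK; the index UUID, token handling and metadata fields are placeholders and do not reflect HPC-ED’s actual schema.

```python
# Hedged sketch: publishing one training-material record to a Globus Search
# index with the Globus Python SDK. The index UUID, token and metadata fields
# are placeholders and do not reflect HPC-ED's actual schema.
import globus_sdk

SEARCH_TOKEN = "..."          # placeholder token from a Globus Auth flow
INDEX_ID = "INDEX-UUID"       # placeholder Globus Search index UUID

sc = globus_sdk.SearchClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(SEARCH_TOKEN)
)

record = {
    "ingest_type": "GMetaEntry",
    "ingest_data": {
        "subject": "https://example.org/training/mpi-basics",  # placeholder
        "visible_to": ["public"],
        "content": {                       # hypothetical descriptive fields
            "title": "Introduction to MPI",
            "provider": "Example Center",
            "format": "video",
        },
    },
}

response = sc.ingest(INDEX_ID, record)
print("ingest submitted:", response.get("task_id"))
```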

Tuesday, July 23


Authors: David Hart, Nathan Tolbert, Rob Light and Stephen Deems

The eXtensible Resource Allocation Service (XRAS), a comprehensive allocations environment for managing the submission, review and awarding of resource allocations, has been essential to managing the resource allocation needs of the national cyberinfrastructure for the past decade. As a software-as-a-service platform, XRAS supports not only the needs of its primary stakeholder program but also the processes of six other national and regional resource providers. Today, XRAS supports a core workload of 500 allocation requests each quarter, and the development roadmap for the system is focused on expanding the types of resources XRAS supports, enabling resources to be integrated in novel ways, and increasing the number of resource federations that XRAS can support.


Authors: Joseph White, Aaron Weeden, Robert Deleon, Thomas Furlani and Matthew D. Jones 

ACCESS is a program established and funded by the National Science Foundation to help researchers and educators use the NSF national advanced computing systems and services. Here the authors present an analysis of the usage of ACCESS-allocated cyberinfrastructure over the first 16 months of the ACCESS program, September 2022 through December 2023. For historical context, they include analyses of ACCESS and XSEDE, its NSF-funded predecessor, from January 2014 through December 2023. The analyses include batch compute resource usage, cloud resource usage, science gateways, allocations and users.


This session is presented by ACCESS Support co-PI Alana Romanella. In order to effectively democratize access to cyberinfrastructure for a diverse set of individuals, a clear understanding of their needs is critical. Many research computing and data centers are eager to engage individuals from many communities and institutions, with varying levels of success. Often, assumptions are made about why a potential user might want to use cyberinfrastructure, and awareness campaigns are tailored around these assumptions. This session will feature short presentations from panelists representing a variety of communities in the ecosystem, discussing driving factors for engagement in cyberinfrastructure. A discussion will follow to understand audience perspectives and invite possible solutions.


ACCESS Support is hosting a three-hour hackathon for its Computational Science Support Network (CSSN), staffed by ACCESS members, Campus Champions and CSSN members. Members of the public, including students, faculty and staff participating at PEARC24, are welcome to join.


Authors: Joshua Martin, Catherine Feldman, Alan Calder, Tony Curtis, Eva Siegmann, David Carlson, Raul Gonzalez, Daniel Wood, Robert Harrison and Firat Coskun

Astrophysical simulations are computation- and memory-intensive, and thus energy-intensive, requiring new hardware advances for progress. Stony Brook University recently expanded its computing cluster “Seawulf” with an addition of 94 new nodes featuring Intel Sapphire Rapids Xeon Max series CPUs. The authors present a performance and power efficiency study of this hardware performed with FLASH: a multi-scale, multi-physics, adaptive mesh-based software instrument. They extend this study to compare performance to that of Stony Brook’s Ookami testbed, which features ARM-based A64FX processors, and Seawulf’s AMD EPYC Milan and Intel Skylake nodes.


The National Science Foundation’s Office of Advanced Cyberinfrastructure (OAC) has defined a vision and investment plans for cyberinfrastructure (CI) that address the evolving needs of the science and engineering research and education community nationwide. The panelists will include OAC leadership presenting current OAC strategy initiatives and program directors who lead OAC’s program areas. Presentation topics will include an overview of recent funding opportunities in program areas such as: 

  • Advanced Computing Systems
  • Learning and Workforce Development
  • Data and Software
  • Networking and Cybersecurity

Panelists will also talk about current initiatives such as the National Artificial Intelligence Research Resource (NAIRR) Pilot and the National Discovery Cloud for Climate (NDC-C) and highlight programs created to enhance the accessibility, inclusivity and sustainability of research CI towards further democratization of the national CI ecosystem.


Authors: Alan Chalker, Robert Deleon, David Hudak, Douglas Johnson, Julie Ma, Jeff Ohrstrom, Hazel Randquist, Travis Ravert, Joseph White, Matt Walton, Emily Moffat Sadeghi and Lee Liming 

First introduced in 2013, Open OnDemand is an innovative, open-source, web-based portal that removes the complexities of research computing (RC) system environments from the end user and, in so doing, reduces “time to science” for researchers by facilitating their access to RC resources. This paper describes advances to the Open OnDemand platform since it was publicly released to the research computing world in 2017, the community that has developed around it and plans to leverage these to build an ecosystem to ensure future sustainability of this popular platform.


The goal of this BoF is to provide a forum for the Open OnDemand (OOD) community to exchange experiences and best practices, as well as to engage with the project development team.


ACCESS Monitoring and Measurement Service (MMS) is an NSF-funded project that supports the comprehensive management of the NSF ACCESS program and its associated resources, as well as HPC and CI systems in general. It does so primarily through the ACCESS XDMoD and Open XDMoD tools, which track operational, performance and usage data for ACCESS and local HPC systems, respectively. This BoF program will introduce new users to XDMoD, inform experienced users of new developments in XDMoD and provide a forum for knowledge exchange between XDMoD users.
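For readers who want to explore the same kind of usage data themselves, the XDMoD team also provides a Python data analytics framework (the xdmod-data package); the sketch below is a minimal, illustrative query against ACCESS XDMoD, assuming an API token is set in the XDMOD_API_TOKEN environment variable and that the date range and metric name are placeholders.

```python
# Hedged sketch: querying aggregate usage data from ACCESS XDMoD with the
# xdmod-data Python package (the XDMoD Data Analytics Framework). The date
# range and metric name are illustrative; an XDMoD API token is assumed to be
# available in the XDMOD_API_TOKEN environment variable.
from xdmod_data.warehouse import DataWarehouse

dw = DataWarehouse("https://xdmod.access-ci.org")
with dw:
    active_users = dw.get_data(
        duration=("2023-01-01", "2023-12-31"),
        realm="Jobs",
        metric="Number of Users: Active",
    )

print(active_users.head())
```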


Hosted by the National Science Foundation’s Office of Advanced Cyberinfrastructure (OAC), this interactive discussion will cover recent initiatives and broader programmatic goals covering:

  • The National Artificial Intelligence Research Resource pilot The National Discovery Cloud for Climate
  • Learning & Workforce Development
  • Broadening participationInstitutional recognition of Cyberinfrastructure Professionals (CIPs)
  • Networks and community-building efforts 

Wednesday, July 24

Author: Nikolay A. Simakov

HPC resources are used for compute-demanding calculations in various fields of science and engineering. Users’ projects are often too big to be executed in one chunk, so they have to split them into a number of batch jobs. This work aims to study users’ interaction with HPC queuing systems and how different strategies can improve the users’ experience (for example, shorter wait times) and overall system utilization. It will examine how users’ knowledge of available resources can affect individual job wait time, the overall time to complete a project and resource utilization. The model parameters were extracted from historical jobs on XSEDE and ACCESS resources and follow a typical Molecular Dynamics simulation project. The model is implemented in the Julia language using the Agents.jl framework.
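The paper’s agent-based model is not reproduced here, but the toy discrete-event sketch below (written in Python with simpy, not the authors’ Julia/Agents.jl code) illustrates the underlying idea: how a user splits the same amount of work into batch jobs changes when the project finishes on a busy shared resource. All parameters are arbitrary.

```python
# Toy illustration (not the paper's model): a discrete-event sketch, using
# simpy, of how splitting the same project into different batch-job sizes
# changes time-to-completion on a shared cluster. All parameters are arbitrary.
import random
import simpy

NODES = 4
PROJECT_NODE_HOURS = 96          # total work in the user's project
BACKGROUND_JOBS = 40             # competing jobs from other users

def run_job(env, cluster, hours):
    with cluster.request() as slot:
        yield slot                 # wait in the queue for a node
        yield env.timeout(hours)   # occupy the node

def background_load(env, cluster):
    for _ in range(BACKGROUND_JOBS):
        env.process(run_job(env, cluster, random.uniform(1, 6)))
        yield env.timeout(random.expovariate(1.0))

def project(env, cluster, chunk_hours, done):
    remaining = PROJECT_NODE_HOURS
    while remaining > 0:
        hours = min(chunk_hours, remaining)
        # submit the next chunk only after the previous one finishes
        yield env.process(run_job(env, cluster, hours))
        remaining -= hours
    done.append(env.now)

def simulate(chunk_hours, seed=0):
    random.seed(seed)
    env = simpy.Environment()
    cluster = simpy.Resource(env, capacity=NODES)
    done = []
    env.process(background_load(env, cluster))
    env.process(project(env, cluster, chunk_hours, done))
    env.run()
    return done[0]

for chunk in (4, 12, 48):
    print(f"chunk size {chunk:>2} node-hours -> project done at t={simulate(chunk):.1f} h")
```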


Campus Champions is dedicated to fostering a vibrant community of research computing and data professionals committed to enabling research across diverse institutions. Its mission is to facilitate seamless access to and utilization of local, regional and national resources and technologies, fostering collaboration, knowledge-sharing and support. CC envisions a community where every campus has a Campus Champion, acting as a force multiplier for research computing and data management, and where no researcher is without guidance.


Globus Compute implements a hybrid Function as a Service (FaaS) model in which users use a single cloud-hosted service to manage the execution of Python functions on user-owned and managed Globus Compute endpoints deployed on arbitrary compute resources. The multi-user endpoint is designed to provide the security interfaces necessary for deployment on large, shared HPC clusters by, for example, restricting user endpoint configurations, enforcing various authorization policies and via customizable identity-username mapping.
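A rough sketch of that hybrid model, using the Globus Compute SDK’s Client interface: a function is registered once with the cloud-hosted service and then dispatched to whichever endpoints the user is authorized to use. The endpoint UUIDs are placeholders and result retrieval is simplified.

```python
# Hedged sketch of the "register once, run anywhere" side of the hybrid FaaS
# model: a function is registered with the cloud-hosted Globus Compute service
# and then invoked on whichever endpoint UUID is supplied.
import time
from globus_compute_sdk import Client

def hostname() -> str:
    import socket
    return socket.gethostname()

gc = Client()
fn_id = gc.register_function(hostname)   # stored by the cloud service

LAPTOP_EP = "ENDPOINT-UUID-1"            # placeholder endpoint UUIDs
CLUSTER_EP = "ENDPOINT-UUID-2"

# The same registered function can be dispatched to any endpoint the user is
# authorized to use, including multi-user endpoints on shared clusters.
task_ids = [
    gc.run(endpoint_id=ep, function_id=fn_id) for ep in (LAPTOP_EP, CLUSTER_EP)
]

time.sleep(10)  # crude wait; production code would poll task status
for tid in task_ids:
    print(gc.get_result(tid))
```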


Data portals are web applications that facilitate data discovery, access and sharing. They’re essential for meeting the FAIR data principles, advancing open science, fostering interdisciplinary collaborations and enhancing the reproducibility of research findings. This presentation will showcase a novel zero-code and infrastructure approach to simplifying and accelerating the creation and customization of data portals.


Authors: Rodrigo Ristow Hadlich, Gaurav Verma, Tony Curtis, Eva Siegmann and Dimitris Assanis

Decarbonization of transportation requires new deep-learning models to enable improved engine control. Research and development must also be done in a computationally efficient manner, so it is of interest to understand how HPC resources can be used to train machine learning models and to compare both the power consumption and temporal performance of the new A64FX architecture against traditional x86 architectures. The paper details the development of a Multilayer Perceptron (MLP) model, using the Fujitsu A64FX processor, for predicting in-cylinder pressure time histories of internal combustion engines, a critical performance parameter for developing pathways to decarbonized engine controls.
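The paper’s network is not reproduced here; the sketch below is only a generic PyTorch multilayer perceptron of the kind described, with hypothetical layer sizes, input features and stand-in data.

```python
# Generic sketch of an MLP regressor of the kind described: engine operating
# features in, a predicted in-cylinder pressure value out. Layer sizes, input
# features and training details are hypothetical, not the paper's model.
import torch
from torch import nn

class PressureMLP(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),          # predicted pressure at one crank angle
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PressureMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on random stand-in data (real data would be engine records).
features = torch.randn(256, 8)
pressure = torch.randn(256, 1)
loss = loss_fn(model(features), pressure)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```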


Authors: Elizabeth Brooks, Sheri Sanders and Michael Pfrender

This session is an introduction to freeCount. With the freeCount analysis framework, students and researchers are guided through the iterative steps of count data assessment, processing and analysis in a visual environment. The framework takes advantage of the reactive features of R Shiny to deliver a set of modular and interactive tools and tutorials for the structured analysis and visualization of count data.


In this Birds of a Feather session, the ACCESS Resource Providers (RPs) will give a brief overview of the available resources and their unique characteristics. The presentation portion of this BoF will highlight the variety of available resources, followed by a discussion with the community, allowing the audience to directly interact with the RP representatives.


Funded by the NSF’s West Big Data Innovation Hub and the Pala Band of Mission Indians, this poster illustrates how a team from the Pala Band of Mission Indians and the San Diego Supercomputer Center at UC San Diego worked to create a Cupeño Science Corner at the reservation’s youth center. The poster outlines activities ranging from an aquaponics station to an environmental sensor station. PEARC24 participants who attend will learn how the Cupeño Science Corner was created, the interest it generated among local youth and the importance of including Native languages when encouraging reservation youth to pursue STEM fields.

Thursday, July 25

This BoF will bring together researchers interested in learning more about the advanced network capabilities available to them on the key national network infrastructure that interconnects the national supercomputing centers and research facilities. Attendees will hear about ways these features can be used to enhance their own research and improve their end-to-end data transfer experience, and will exchange thoughts and experiences with other researchers facing similar big data challenges.
