Name | Region | Skills | Interests |
---|---|---|---|
Andrew Sherman | ACCESS CSSN, Campus Champions, CAREERS | ||
Michael Blackmon | Campus Champions, ACCESS CSSN | ||
Chris Carothers | CAREERS | ||
Christopher Bl… | Campus Champions | ||
Cody Stevens | Campus Champions, CCMNet | ||
Daniel Howard | ACCESS CSSN, Campus Champions, CCMNet, RMACC | ||
Edwin Posada | Campus Champions | ||
Fan Chen | ACCESS CSSN | ||
Gaurav Khanna | Campus Champions, CAREERS, Northeast, CCMNet | ||
Gil Speyer | ACCESS CSSN, RMACC, Campus Champions | ||
Yu-Chieh Chi | Campus Champions | ||
Jason Wells | ACCESS CSSN, Campus Champions | ||
Katia Bulekova | ACCESS CSSN, Campus Champions, CAREERS, CCMNet, Northeast | ||
Kenneth Bundy | CAREERS | ||
Lonnie Crosby | Campus Champions, ACCESS CSSN | ||
Mohsen Ahmadkhani | CCMNet, ACCESS CSSN | ||
Michael Puerrer | Campus Champions, Northeast | ||
David Reddy | Campus Champions | ||
Ron Rahaman | Campus Champions | ||
Grant Scott | Great Plains | ||
Xiaoqin Huang | ACCESS CSSN | ||
Shaohao Chen | Northeast | ||
Swabir Silayi | ACCESS CSSN, CCMNet, Campus Champions | ||
Shawn Sivy | Campus Champions, CAREERS | ||
Tyler Burkett | Kentucky | ||
Name | Description | Tags | Join |
---|---|---|---|
Jetstream-2 | Jetstream2 is a transformative update to the NSF’s science and engineering cloud infrastructure and provides 8 petaFLOPS of supercomputing power to simplify data analysis, boost discovery, and… | | Login to join |
Title | Date |
---|---|
ACES: GPU Programming (CUDA) | 4/22/25 |
Title | Category | Tags | Skill Level |
---|---|---|---|
Benchmarking with a cross-platform open-source flow solver, PyFR | Tool | finite-element-analysis, benchmarking, parallelization, github, fluid-dynamics, openmpi, c++, cuda, mpi | Intermediate |
Cornell Virtual Workshop | Learning | jetstream, stampede2, cloud-computing, data-analysis, performance-tuning, parallelization, file-transfer, globus, slurm, training, cuda, matlab, python, r, mpi | Beginner, Intermediate, Advanced |
Examples of Thrust code for GPU Parallelization | Learning | parallelization, gpu, cuda | Intermediate, Advanced |
Sea levels are rising at 3.7 mm/year, and the rate is increasing. The primary contributor is enhanced polar ice discharge driven by climate change, yet the dynamic response of the ice sheets to climate change remains a fundamental uncertainty in future projections. Computational cost limits the simulation times over which models can run, and therefore how far the uncertainty in future sea level rise predictions can be narrowed. The project's overarching goal is to leverage GPU hardware capabilities to significantly reduce that computational cost. Solving the time-independent stress balance equations to predict ice velocity, or flow, is the most computationally expensive part of ice-sheet simulations in terms of both memory and execution time. The PI has developed a preliminary GPU implementation of ice-sheet flow for real-world glaciers. This project aims to investigate the GPU implementation further, identify bottlenecks, and implement changes that justify it on price-to-performance metrics relative to a "standard" CPU implementation; in addition, it aims to develop a performance-portable, hardware- (or architecture-) agnostic implementation.
I aim to run a Bayesian Nonparametric Ensemble (BNE) machine learning model implemented in MATLAB. I previously tested the model successfully on Columbia's HPC GPU cluster using SLURM, and I have since enabled MATLAB parallel computing and optimized my script for parallel execution.
I want to leverage ACCESS Accelerate allocations to run this model at scale.
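A SLURM-based GPU run of a MATLAB model like the one described above might look like the following batch-script sketch. The partition defaults, resource sizes, module name, and the driver script `run_bne.m` are all placeholders, not values taken from this project.

```shell
#!/bin/bash
# Hedged sketch of a Slurm batch script for a single-GPU MATLAB job.
# Resource requests below are illustrative; adjust to the target cluster.
#SBATCH --job-name=bne-train
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G
#SBATCH --time=24:00:00

module load matlab   # module name is site-specific and may differ

# -batch runs MATLAB non-interactively and propagates the script's exit status;
# run_bne.m is a hypothetical driver that starts a parpool and runs the model.
matlab -batch "run_bne"
```

Submitted with `sbatch run_bne.sbatch`, this reserves one GPU and eight CPU cores, which MATLAB's Parallel Computing Toolbox can then use for a local parallel pool.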
The BNE framework is an innovative ensemble modeling approach designed for high-resolution air pollution exposure prediction and spatiotemporal uncertainty characterization. This work requires significant computational resources due to the complexity and scale of the task. Specifically, the model predicts daily air pollutant concentrations (PM2.5 and NO2) at a 1 km grid resolution across the United States, spanning the years 2010–2018. Each daily prediction dataset is approximately 6 GB, resulting in substantial storage and processing demands.
To ensure efficient training, validation, and execution of the ensemble models at a national scale, I need access to GPU clusters with the following resources:
In addition to MATLAB, I also require Python and R installed on the system. I use Python notebooks to analyze output data and run R packages through a conda environment in Jupyter Notebook. These tools are essential for post-processing and visualization of model predictions, as well as for running complementary statistical analyses.
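Setting up such a Python-and-R post-processing environment for Jupyter could be sketched as follows, assuming conda with the conda-forge channel is available; the environment name `bne-post` and package versions are hypothetical.

```shell
# Create one conda environment holding Python, R, and Jupyter together
# (package set is illustrative; add analysis packages as needed).
conda create -n bne-post -c conda-forge -y \
    python=3.11 jupyterlab r-base r-irkernel

# Requires a conda-initialized shell (e.g. after `conda init bash`).
conda activate bne-post

# Register the R kernel so R notebooks appear alongside Python in Jupyter.
R -e 'IRkernel::installspec(name = "bne-post-r", displayname = "R (bne-post)")'
```

After this, `jupyter lab` started from the activated environment offers both Python and R kernels, so output analysis and the complementary R statistics can live in the same notebook workflow.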
To finalize the GPU system configuration based on my requirements and initial runs, I would appreciate guidance from an expert. Since I already have approval for the ACCESS Accelerate allocation, this support will help ensure a smooth setup and efficient utilization of the allocated resources.
University of Rhode Island
Campus Champions, Northeast
research computing facilitator
North Carolina State University at Raleigh
Campus Champions
research computing facilitator, research software engineer
University of Utah
RMACC, Campus Champions
mentor, research computing facilitator