Run OpenFOAM® on Google Cloud
Fully Managed HPC Cluster
Let us help you! Simply let us know what you want to see in an HPC cluster. We will take care of provisioning Cloud Identity accounts, secure IAM policies, networking infrastructure, and your cloud-native HPC cluster. When it's ready, you'll be able to ssh to your cluster just like a traditional HPC system.
The CFD-GCP solution is an auto-scaling High Performance Computing cluster that helps you quickly run OpenFOAM® on Google Cloud Platform. This solution comes with open-source mesh generation software (gmsh), OpenFOAM®, and Paraview to enable complete CFD workflows on GCP.
Users can access the cluster via ssh and can schedule CFD workloads to run on compute nodes with SchedMD's Slurm job scheduler.
Compute nodes are provisioned on-the-fly and are removed when they sit idle. This elasticity keeps cloud compute costs low by providing only the compute resources that are needed, exactly when they are needed.
Why use Cloud-CFD?
Easy to get started
Cloud-CFD is click-to-deploy. All you need is an active Google Cloud account to get started. Cloud-CFD deploys infrastructure on Google Cloud with meshing tools (GMSH & PyGMSH), OpenFOAM®, and Paraview pre-installed on a GCC 9 and OpenMPI 4 stack.
We've provided an example batch file for quickly running the NACA0012 OpenFOAM® tutorial to give you a solid starting point for your CFD workloads. Additionally, you can use our Paraview Server Connection file to easily connect your cluster to a Paraview Client on a local workstation.
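A typical OpenFOAM® job on the cluster is submitted to Slurm with sbatch. The sketch below illustrates the general shape of such a batch file; the partition name, module name, case directory, and solver choice are assumptions for illustration, not the exact contents of the bundled NACA0012 example.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for an OpenFOAM® case.
# Partition, module, and path names below are assumptions;
# adjust them to match your cluster's configuration.
#SBATCH --partition=openfoam      # assumed name of the simulation partition
#SBATCH --ntasks=8                # number of MPI ranks for the solver
#SBATCH --time=01:00:00           # wall-clock limit

module load openfoam              # assumed module name for the OpenFOAM® stack

cd "$HOME/naca0012"               # assumed case directory

blockMesh                         # generate the mesh
decomposePar                      # split the case across MPI ranks
mpirun -np 8 simpleFoam -parallel # run the solver in parallel
reconstructPar                    # merge decomposed results for post-processing
```

Submit it with `sbatch run_case.sh` and monitor the queue with `squeue`; Slurm will trigger compute-node provisioning automatically if no idle nodes are available.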
On-premise HPC platforms are fixed-capacity systems. During periods of low utilization, organizations must continue to pay operational expenses, which reduces the value received for the cost. During peak usage, users experience longer queue times, which can lead to workflow disruptions and missed opportunities for researchers. Cloud-CFD leverages auto-scaling so that you don't pay for idle compute node time.
Software packages on Cloud-CFD are managed using a similar methodology to most HPC centers. Software stacks are installed on an NFS file system and are exposed to users through environment modules. Users can bring toolkits into and out of their paths using simple module load and module unload commands. If Cloud-CFD doesn't have quite everything you need, you can install your own packages using these traditional HPC methodologies, or create custom VM images to deploy on your compute nodes.
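The environment-modules workflow described above looks like a typical HPC session; the specific module names shown here are assumptions for illustration and may differ on your deployment.

```shell
# Typical environment-modules session (module names are assumptions).
module avail                  # list the software stacks installed on the NFS share
module load gcc/9 openmpi/4   # bring the compiler and MPI toolchain into your PATH
module list                   # confirm which modules are currently loaded
module unload openmpi/4       # remove a toolkit from your environment
```

The same `module load` commands belong in your Slurm batch scripts so that jobs run with the intended toolchain on the compute nodes.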
Out of the box, Cloud-CFD provides three Slurm partitions to facilitate pre-processing/mesh generation, CFD simulation with OpenFOAM, and post-processing with Paraview. Cloud-CFD also comes with an easy-to-use CLI (cluster-services) to customize your cluster's compute nodes, external storage mounts, and basic Slurm accounting. In contrast to most on-premise HPC systems, Google Cloud gives you access to a heterogeneous array of CPU and GPU platforms. The cluster-services CLI exposes this heterogeneity directly to Slurm and the HPC cluster, allowing you to easily customize the compute instances available to your cluster and organize those machines into Slurm partitions.
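As a sketch of that workflow, a partition change with cluster-services generally follows a dump-edit-apply pattern. The subcommand names below are assumptions based on typical usage of the tool and may differ between releases; consult the CLI's own help output for the exact syntax.

```shell
# Sketch of customizing Slurm partitions with cluster-services.
# Subcommand and flag names are assumptions; verify with
# `cluster-services --help` on your deployment.
cluster-services list partitions > partitions.yaml   # dump the current partition config
# ... edit partitions.yaml to change machine types, e.g. add a GPU instance ...
cluster-services update partitions                   # apply the edited configuration
```

After the update, the new machine types appear as Slurm partitions and can be targeted with `sbatch --partition=<name>`.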
In addition to this compute flexibility, Cloud-CFD ships with the drivers needed to mount Lustre and NFS filesystems as well as Google Cloud Storage buckets.
Since version 2.5.0 of Fluid Numerics' cluster-services, Cloud-CFD can deploy auto-scaling compute nodes to multiple VPC subnetworks. This, in turn, gives you the ability to leverage Google datacenters worldwide through the familiar Slurm interface.
As a government contractor, we understand the need to be able to attribute expenses to specific cost codes. Cloud-CFD allows you to deploy compute nodes from a single cluster to any Google Cloud project within your organization. This allows you to seamlessly parse your bills from Google and attribute cloud expenses to specific cost centers.