Fluid-Slurm-GCP is now integrated with your RCC

Simple Linux Utility for Resource Management (SLURM) for a Research Computing Cloud on Google Cloud

Quick Start

Looking to experiment with operating your own Research Computing cluster? Our Google Cloud Marketplace is a great place to get started with a click-to-deploy solution. Within 30 minutes, you can be running HPC and HTC applications using all of Google's datacenters worldwide.

Learn More

RCC Cluster with Terraform

Want to build out more complex infrastructure with a cloud-native Research Computing cluster for HPC or HTC workloads and manage your resources using infrastructure-as-code? Use our terraform modules and examples to deploy and manage your RCC with other infrastructure components.

Learn more

Fully Managed RCC Cluster

Let us help you! Simply let us know what you want to see in a RCC cluster. We will take care of provisioning Cloud Identity accounts, secure IAM policies, networking infrastructure, and your cloud-native RCC cluster. When ready, you'll be able to ssh to your cluster like a traditional Research Computing system.

Learn more

The Basics

The Fluid Numerics Research Computing Cloud cluster (with tools like fluid-slurm-gcp, fluid-ci/cb and fluid-run) is an elastic Research Computing Cluster powered by Google Cloud Platform.

Users can access the cluster via ssh and can schedule jobs to run on compute nodes with the SLURM job scheduler.

Compute nodes are provisioned on-the-fly and are removed when they are idle. This elasticity keeps compute costs low on the cloud by providing only the compute resources that are needed exactly when they are needed.
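As a concrete illustration of the workflow above, the snippet below writes a minimal Slurm batch script. The partition and module names are placeholders for whatever your deployment defines, not values from the image:

```shell
# Write a minimal Slurm batch script (partition/module names are placeholders).
cat > hello.sbatch <<'EOF'
#!/bin/bash
#SBATCH --partition=partition-1   # partition name is deployment-specific
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

module load gcc/10.2.0            # example module from the image's software stack
echo "Hello from $(hostname)"
EOF

# Submit with: sbatch hello.sbatch
# Slurm provisions a compute node on-the-fly if none is idle, runs the job,
# and the node is removed again once it has sat idle.
```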

HPC packages are made available through environment modules. We can provide software stacks built with open-source compilers.

What We Do

At Fluid Numerics, we provide services and support for HPC systems on Google Cloud Platform that run the Slurm Workload Manager. We have developed a functional Slurm-based template that gives our internal and external teams a resource that scales automatically to meet the demands of differing scientific workloads, instead of maintaining on-premises resources that might otherwise sit idle. Our offering extends to consultation on procurements of large-scale accelerated, HPC, and scientific computing systems. We work with teams at major manufacturers and platform providers to stay current with bleeding-edge and emerging advancements in the industry.

Accelerated Scientific High Performance Computing on Google Cloud Platform

Fluid Numerics is proud to offer services and support to clients developing and using High Performance Computing systems on GCP. We are continuously growing our systems to accommodate new industries and workflows. Our heterogeneous cloud cluster scales to fit your computational needs using the Slurm Workload Manager, the SchedMD product maintained with the open-source community. Teams at Fluid Numerics are encouraged to approach projects with research and science in mind. We have adopted Infrastructure as Code (IaC) principles, allowing rapid flexibility in our administration and management processes. Documentation is a top priority as we engage in new spaces within and outside of our own organization.

We invite you to take some time to gain experience with the Fluid Numerics Research Computing Cloud: use our documentation to get started, and contact us directly if you need assistance.

Learn More about Operating FLUID-SLURM-GCP

In addition to direct support, we also provide a full suite of interactive training modules in the form of Codelabs and help documentation. Our cluster-services configuration component helps streamline system administration and enables rapid deployment and scaling.

Get a cluster going today with a Codelab:


Fluid-Slurm-GCP deprecation and migration to RCC


  • We recommend the following paths to an updated and supported release:

    • fluid-slurm-gcp-centos-*-v3*: replace with rcc-centos-7-v300-256bf0b

    • fluid-slurm-gcp-ubuntu-*-v3*: replace with rcc-ubuntu-2004-v300-1104600

    • fluid-slurm-gcp-ohpc-*: replace with rcc-centos-7-v300-256bf0b

  • If you need assistance migrating from legacy Fluid-Slurm-GCP to an RCC image, please reach out to support@fluidnumerics.com
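In practice, the migration amounts to pointing whatever provisions your nodes at the new image name. The sketch below is a hedged illustration, not the exact migration procedure; the image-project placeholder is an assumption about your deployment:

```shell
# Hypothetical sketch of the image swap described above.
OLD_IMAGE="fluid-slurm-gcp-centos-v3"      # legacy image (illustrative name)
NEW_IMAGE="rcc-centos-7-v300-256bf0b"      # replacement from the migration table

echo "Replacing ${OLD_IMAGE} with ${NEW_IMAGE}"

# A node created from the new image might then look like (not run here;
# the image project is deployment-specific):
#   gcloud compute instances create login-0 \
#     --image="${NEW_IMAGE}" --image-project=<your-image-project>
```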

Fluid-Slurm-GCP v3.0.0 with Compute Node Image Upgrades and Core Software Stack


  • Update the fluidnumerics/slurm-gcp fork with the image-based schedmd/slurm-gcp.

  • The cluster-config schema has been rebased on schedmd/slurm-gcp. This was done to smooth the transition between the open-source solution and the supported, licensed fluid-slurm-gcp. The cluster-services CLI has been updated to be consistent with this schema.

  • Add slurm_qos options to cluster-config, and add cluster-services support for building QOS settings aligned with Slurm accounts.

  • Update spack version (to v0.16.2)

  • Install GCC 7.5.0, GCC 8.5.0, GCC 9.4.0, GCC 10.2.0, and the Intel OneAPI Compilers v2021.2.0

  • Install OpenMPI 4.0.5 for each compiler

  • Update Singularity version (to v3.7.4)

  • Add support for GVNIC

  • Add the HPC VM Image Library with the applications listed below, tested and readily available.

    • WRF v4.2

    • Gromacs v2021.2

    • OpenFOAM (org) v8

    • Paraview 5.9.1

    • FEOTS v2

    • SELF v1.0.0
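The slurm_qos support added in this release could look something like the following cluster-config fragment. The field names and layout here are illustrative assumptions only; consult the cluster-config help page for the authoritative schema:

```
# Hypothetical cluster-config fragment -- field names are assumptions,
# not the published schema.
slurm_qos:
  - name: high-priority
    priority: 100

slurm_accounts:
  - name: research-group-a
    users:
      - alice
      - bob
    qos:
      - high-priority
```

Defining QOS alongside Slurm accounts in one file lets cluster-services keep the scheduler's accounting database aligned with the declared configuration.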

Fluid-Slurm-GCP v2.6.0 with Compute Node Image Upgrades and Core Software Stack


  • [Resolve: ROCm Spack builds difficult to use with 3rd-party apps]: Move the ROCm install to /opt/rocm via yum repositories

  • [Improve sysctl.conf for large MPI jobs]: Increase net.core.somaxconn, net.ipv4.tcp_max_syn_backlog, and fs.file-max
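The sysctl.conf change above amounts to entries along these lines; the specific values are illustrative, not the ones shipped in the image:

```
# /etc/sysctl.conf tuning for large MPI jobs (illustrative values only)
net.core.somaxconn = 65535            # larger accept backlog for listening sockets
net.ipv4.tcp_max_syn_backlog = 65535  # more half-open connections during job launch
fs.file-max = 1048576                 # raise the system-wide open-file limit
```

Raising these limits helps many-rank MPI jobs that open large numbers of TCP connections and file descriptors at startup.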


Fluid-Slurm-GCP v2.5.0 with Ubuntu upgrade and REST API features

9/15/20 - v2.5.0 updates Ubuntu to the latest release and enables Slurm REST API connectivity:

  • Ubuntu 19.04 to Ubuntu 20.04

  • CentOS Kernel upgrade

  • Nvidia GPU Drivers upgrade

  • Build and enable Slurm REST API support

Fluid-Slurm-GCP v2.4.0 on GCP Marketplace, Terraform, and Fully Managed Services!

7/3/20 - Fluid Numerics has released v2.4.0 of the fluid-slurm-gcp images. This release comes with the following updates and upgrades:

  • Slurm 19.05 to Slurm 20.02

  • Add support for easy CloudSQL integration

  • GSuite SMTP Email Relay Integration support for email notification on job completion

  • Terraform modules and examples now publicly available!

  • (bugfix) Enabled storage.full auth-scope for GCSFuse

Fluid-Slurm-GCP-Ubuntu is on GCP Marketplace

3/11/20 - Fluid Numerics has released another flavor of fluid-slurm-gcp on GCP Marketplace that is based on the Ubuntu operating system!

In addition to the flexible multi-project/region/zone support of "classic" fluid-slurm-gcp, the fluid-slurm-gcp-ubuntu solution includes:

  • Ubuntu 19.10 Operating System

  • zfs-utils for ZFS filesystem management (but no Lustre kernels)

  • apt package manager

  • Environment modules, Spack, and Singularity (same as the classic fluid-slurm-gcp)

Slurm+OpenHPC is on GCP Marketplace

3/3/20 - We now have another flavor of fluid-slurm-gcp on GCP Marketplace with pre-installed OpenHPC packages.

In addition to the flexible multi-project/region/zone support of "classic" fluid-slurm-gcp, the fluid-slurm-gcp+openhpc solution includes:

  • lmod Environment modules

  • GCC 8.2.0 compilers

  • MPI-ready w/ MPICH installed (MVAPICH & OpenMPI available soon)

  • Serial and Parallel IO Libraries (HDF5, NetCDF, Adios)

  • HPC Profilers/Performance Tuning Toolkits (Score-P, Tau, Scalasca)

  • Scientific libraries for HPC (MFEM, PETSc, Trilinos, and much more!)

Get Started Now with Multi-Project SLURM on GCP Marketplace

Testing of our Multi-Project enhancement of the FLUID-SLURM-GCP marketplace solution has wrapped up, and teams from around the world are already spinning up clusters to run their HPC applications.

2/11/20 Release Notes:

  • Multi-project support - naturally integrates with Fluid Numerics' fluid-slurm-gcp terraform scripts

  • Multi-region partition configuration (Globally scalable)

  • Multi-zone partition configuration (High availability configuration)

  • Multi-machine partitions - add as many compute machines to a partition as you need

  • "Name your machines" - choose the name of the machines in your partition, rather than {deployment}-compute-{id}

  • Slurm accounting exposure in cluster-services - manage users and the partitions they have access to with cluster-services

Documentation for the new cluster-config schema can be found on our cluster-config help page.


Fluid-Slurm-GCP Timeline

Started from github.com/SchedMD/slurm-gcp (Apache 2)

December 2018 - Multi-partition

July 2019 - cluster-services introduced

November 2019 - Image-based / Marketplace launch

March 2020 - Cluster-config v2: multi-regional, multi-project, Ubuntu and OpenHPC

June 2020 - Terraform modules, Managed Services

September 2020 - v2.5.0