Fluid Numerics Cloud

Quick Start

Looking to experiment with operating your own HPC cluster? Our Google Cloud Marketplace solution is a great place to get started with a click-to-deploy offering. Within 30 minutes, you can be running HPC and HTC applications in any of Google's data centers worldwide.

Learn more

HPC Cluster with Terraform

Want to build out more complex infrastructure with a cloud-native HPC cluster and manage your resources using infrastructure-as-code? Use our terraform modules and examples to deploy and manage your fluid-slurm-gcp cluster with other infrastructure components.
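The workflow with the Terraform modules follows the standard infrastructure-as-code cycle. The repository URL and paths below are illustrative placeholders, not confirmed locations; check the published modules for the actual layout:

```shell
# Fetch the fluid-slurm-gcp Terraform examples.
# The repository URL and directory layout here are assumptions --
# substitute the actual location of the published modules.
git clone https://github.com/FluidNumerics/fluid-slurm-gcp.git
cd fluid-slurm-gcp/tf/examples

terraform init                          # download providers and modules
terraform plan -var-file=fluid.tfvars   # preview the cluster resources
terraform apply -var-file=fluid.tfvars  # create or update the cluster
```

Because the cluster is described declaratively, the same `plan`/`apply` cycle manages the fluid-slurm-gcp deployment alongside any other infrastructure components in the configuration.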

Learn more

Fully Managed HPC Cluster

Let us help you! Simply tell us what you want to see in an HPC cluster. We will take care of provisioning Cloud Identity accounts, secure IAM policies, networking infrastructure, and your cloud-native HPC cluster. When it's ready, you'll be able to ssh into your cluster just as you would a traditional HPC system.

Learn more

The Basics

The Fluid Numerics Cloud cluster (fluid-slurm-gcp) is an elastic High Performance Computing (HPC) cluster powered by Google Cloud Platform.

Users access the cluster via ssh and schedule jobs to run on compute nodes with SchedMD's Slurm job scheduler.
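For example, a job can be submitted with Slurm's `sbatch` command once you have logged in. The partition name, task count, and program below are placeholders; substitute whatever your cluster-config defines:

```shell
#!/bin/bash
# example-job.sh -- a minimal Slurm batch script.
# The partition name "compute" is a placeholder; list the real
# partitions on your cluster with `sinfo`.
#SBATCH --job-name=hello-hpc
#SBATCH --partition=compute
#SBATCH --ntasks=4
#SBATCH --time=00:10:00

srun hostname   # run one task per allocated CPU, printing each node's name
```

Submit with `sbatch example-job.sh` and watch the queue with `squeue`.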

Compute nodes are provisioned on the fly and removed when they are idle. This elasticity keeps cloud compute costs low by providing compute resources only when they are actually needed.
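As a rough illustration of why elasticity matters, compare a statically provisioned cluster billed around the clock with an elastic one billed only for the hours jobs actually run. The node count, busy hours, and hourly rate below are made-up numbers for the arithmetic, not GCP pricing:

```shell
#!/bin/bash
# Toy cost comparison: static vs. elastic provisioning.
# All figures are hypothetical -- not actual GCP pricing.
rate_cents_per_node_hour=150   # assumed cost per node-hour, in cents
nodes=10
hours_in_month=720             # ~30 days, always on
busy_hours=120                 # hours the nodes actually run jobs

static_cost=$(( nodes * hours_in_month * rate_cents_per_node_hour ))
elastic_cost=$(( nodes * busy_hours * rate_cents_per_node_hour ))

echo "static:  \$$(( static_cost / 100 ))"    # prints: static:  $10800
echo "elastic: \$$(( elastic_cost / 100 ))"   # prints: elastic: $1800
```

With these assumed numbers the elastic cluster costs one sixth as much, because idle node-hours are never billed.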

HPC packages are made available through environment modules. We can provide software stacks built with open-source compilers.
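A typical environment-modules session on the login node looks like the following. The specific module names are placeholders; run `module avail` to see what is actually installed on your cluster:

```shell
# Discover and load software through environment modules.
# Module names below (gcc, openmpi) are placeholders -- the
# available stack depends on your cluster's configuration.
module avail            # list every package the cluster provides
module load gcc         # load an open-source compiler toolchain
module load openmpi     # load an MPI implementation built against it
module list             # confirm what is currently loaded
```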

What We Do

At Fluid Numerics, we provide service and support for HPC systems on Google Cloud Platform that run the Slurm Workload Manager. We have developed a functional Slurm-based template that gives our internal and external teams a resource that scales automatically to meet the demands of varying scientific workloads, rather than maintaining on-premises resources that might otherwise sit idle. Our offering also extends to consultation on procurements of large-scale accelerated, HPC, and scientific computing systems. We work with teams at major manufacturers and platform providers to stay abreast of bleeding-edge and emerging advancements and keep pace with the current state of the industry.

Accelerated Scientific High Performance Computing on Google Cloud Platform

Fluid Numerics is proud to offer services and support to clients who need assistance developing and utilizing HPC cloud systems on GCP. We are continuously growing our systems to accommodate new industries and workflows. Our heterogeneous cloud cluster scales to fit your computational needs using the Slurm Workload Manager, a SchedMD product developed and maintained with the open-source community. Teams at Fluid Numerics are encouraged to approach projects with research and science in mind. We have adopted Infrastructure as Code (IaC) principles, allowing rapid flexibility in our administration and management processes. Documentation is a top priority as we engage new spaces within and outside of our own organization.

We invite you to take some time and check out what the Fluid Numerics Cloud is all about by reading some more or contacting us directly.

Learn More about Operating FLUID-SLURM-GCP

In addition to direct support, we provide a full suite of interactive training modules in the form of Codelabs, along with help documentation. Our cluster-services configuration component streamlines system administration and enables rapid deployment and scaling.
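Day-to-day administration is driven from the cluster's configuration. A session might look like the sketch below; the subcommand names are assumptions based on typical cluster-services usage, so verify them against the help documentation for your installed release:

```shell
# Hypothetical cluster-services session -- subcommands are assumptions;
# check `cluster-services --help` on your deployment for the real ones.
sudo cluster-services list all                     # show the active cluster-config
sudo cluster-services update partitions --preview  # dry-run a partition change
sudo cluster-services update partitions            # apply the change
```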

Get a cluster going today with a Codelab:

Fluid-Slurm-GCP v2.5.0 with Ubuntu upgrade and REST API features

9/15/20 - v2.5.0 updates Ubuntu to the latest release and enables Slurm REST API connectivity:

  • Ubuntu 19.04 to Ubuntu 20.04

  • CentOS kernel upgrade

  • NVIDIA GPU driver upgrade

  • Build and enable Slurm REST API support
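With the REST API enabled, jobs and cluster state can be queried over HTTP via slurmrestd. The sketch below assumes JWT authentication is configured; the port and the API version string are assumptions that must match your Slurm build:

```shell
# Query the Slurm REST API. The port (6820) and endpoint version
# (v0.0.35) are assumptions -- confirm them for your slurmrestd build.
export $(scontrol token)   # sets SLURM_JWT when JWT auth is configured
curl -s \
  -H "X-SLURM-USER-NAME: $USER" \
  -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
  http://localhost:6820/slurm/v0.0.35/jobs
```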

Fluid-Slurm-GCP v2.4.0 on GCP Marketplace, Terraform, and Fully Managed Services!

7/3/20 - Fluid Numerics has released v2.4.0 of the fluid-slurm-gcp images. This release comes with the following updates and upgrades:

  • Slurm 19.05 to Slurm 20.02

  • Add support for easy CloudSQL integration

  • G Suite SMTP email relay integration for email notifications on job completion

  • Terraform modules and examples now publicly available!

  • (bugfix) Enabled storage.full auth-scope for GCSFuse

Fluid-Slurm-GCP-Ubuntu is on GCP Marketplace

3/11/20 - Fluid Numerics has released another flavor of fluid-slurm-gcp on GCP Marketplace that is based on the Ubuntu operating system!

In addition to the flexible multi-project/region/zone support of "classic" fluid-slurm-gcp, the fluid-slurm-gcp-ubuntu solution includes:

  • Ubuntu 19.10 Operating System

  • zfs-utils for ZFS filesystem management (but no Lustre kernels)

  • apt package manager

  • Environment modules, Spack, and Singularity (same as the classic fluid-slurm-gcp)

Slurm+OpenHPC is on GCP Marketplace

3/3/20 - We now have another flavor of fluid-slurm-gcp on GCP Marketplace with pre-installed OpenHPC packages.

In addition to the flexible multi-project/region/zone support of "classic" fluid-slurm-gcp, the fluid-slurm-gcp+openhpc solution includes:

  • Lmod environment modules

  • GCC 8.2.0 compilers

  • MPI-ready w/ MPICH installed (MVAPICH & OpenMPI available soon)

  • Serial and Parallel IO Libraries (HDF5, NetCDF, Adios)

  • HPC Profilers/Performance Tuning Toolkits (Score-P, Tau, Scalasca)

  • Scientific libraries for HPC (MFEM, PETSc, Trilinos, and much more!)

Get Started Now with Multi-Project SLURM on GCP Marketplace

Testing of our multi-project enhancement of the FLUID-SLURM-GCP marketplace solution is complete, and teams from around the world are already spinning up clusters to run their HPC applications.

2/11/20 Release Notes:

  • Multi-project support - naturally integrates with Fluid Numerics fluid-slurm-gcp terraform scripts

  • Multi-region partition configuration (Globally scalable)

  • Multi-zone partition configuration (High availability configuration)

  • Multi-machine partitions - add as many compute machines to a partition as you need

  • "Name your machines" - choose the name of the machines in your partition, rather than {deployment}-compute-{id}

  • Slurm accounting exposure in cluster-services - manage users and the partitions they have access to with cluster-services

Documentation for the new cluster-config schema can be found on our cluster-config help page.