Fluid Numerics Cloud

The Basics

The Fluid Numerics Cloud cluster is an elastic High Performance Computing (HPC) cluster powered by Google Cloud Platform.

Users can access the cluster via SSH and schedule jobs to run on compute nodes with SchedMD's Slurm job scheduler.
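A typical session might look like the following sketch; the hostname, username, and partition name are illustrative, not the cluster's actual values.

    # Connect to the cluster's login node (hostname and username
    # are illustrative)
    ssh jane@login.example-cluster.net

    # Write a minimal Slurm batch script; the partition name is an
    # assumption and will vary by cluster configuration
    cat > hello.sh << 'EOF'
    #!/bin/bash
    #SBATCH --ntasks=1
    #SBATCH --partition=partition-1
    hostname
    EOF

    # Submit the job and check its place in the queue
    sbatch hello.sh
    squeue -u $USER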

Compute nodes are provisioned on the fly and removed when they sit idle. This elasticity keeps cloud compute costs low by providing compute resources only when they are needed.
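You can see this behavior directly in Slurm: nodes that are currently powered down carry a "~" suffix on their state. A quick illustration (partition and node names below are made up):

    # List partitions and node states; "idle~" means the node is
    # defined but powered down, and will be provisioned on demand
    sinfo
    # PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
    # part-1*      up   infinite      2  idle~ demo-compute-[1-2]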

HPC packages are made available through environment modules. We can provide software stacks built with open-source compilers.
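The workflow is the standard one for environment modules; a quick sketch (the package name and version are illustrative):

    # Discover what is installed
    module avail

    # Load a compiler into the environment, then confirm
    # (the version shown is illustrative)
    module load gcc/9.2.0
    module list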

What We Do

At Fluid Numerics, we provide service and support for HPC systems on Google Cloud Platform running the Slurm Workload Manager. We have developed a functional Slurm-based template that gives our internal and external teams a resource that scales automatically to meet the demands of differing scientific workloads, instead of maintaining on-premises resources that might otherwise sit idle. Our product offering extends to consultation on procurements of large-scale accelerated, HPC, and scientific computing systems. We work with teams at major manufacturers and platform providers to stay abreast of bleeding-edge and emerging advancements and remain current with the state of the industry.

Accelerated Scientific High Performance Computing on Google Cloud Platform

Fluid Numerics is proud to offer services and support to clients who need assistance developing and using HPC cloud systems on GCP. We are continuously growing our systems to accommodate new industries and workflows. Our heterogeneous cloud cluster scales to fit your computational needs using the Slurm Workload Manager, a SchedMD product developed and maintained with the open-source community. Teams at Fluid Numerics are encouraged to approach projects with research and science in mind. We have adopted Infrastructure as Code (IaC) principles, allowing rapid, flexible administration and management. Documentation is a top priority as we engage in new spaces within and outside of our own organization.

We invite you to take some time to see what the Fluid Numerics Cloud is all about by reading on or contacting us directly.

Learn More about Operating FLUID-SLURM-GCP

In addition to direct support, we provide a full suite of interactive training modules in the form of Codelabs and help documentation. Our cluster-services configuration component streamlines system administration and enables rapid deployment and scaling.
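As a rough sketch of how cluster-services fits into day-to-day administration, the workflow looks like the following; the subcommands shown are assumptions meant to convey the idea, so consult our help documentation for the exact interface.

    # Review the cluster's current configuration (subcommand is an
    # assumption; see the cluster-services help docs)
    cluster-services list all

    # Preview a change to the partition definitions, then apply it
    cluster-services update partitions --preview
    cluster-services update partitions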

Get a cluster going today with a Codelab.

Slurm-GCP-Ubuntu is on GCP Marketplace

3/11/20 - Fluid Numerics has released another flavor of fluid-slurm-gcp on GCP Marketplace that is based on the Ubuntu operating system!

In addition to the flexible multi-project/region/zone support of "classic" fluid-slurm-gcp, the fluid-slurm-gcp-ubuntu solution includes:

  • Ubuntu 19.10 Operating System
  • zfs-utils for ZFS filesystem management (but no Lustre kernels; see the ZFS example after this list)
  • apt package manager
  • Environment modules, Spack, and Singularity (same as the classic fluid-slurm-gcp)
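For instance, with zfs-utils in place you might pool a node's local SSDs into a scratch filesystem. The device paths, pool name, and mountpoint below are illustrative and will depend on your machine configuration:

    # Create a ZFS pool from two local NVMe devices (paths are
    # illustrative)
    sudo zpool create scratch /dev/nvme0n1 /dev/nvme0n2

    # Carve out a dataset mounted at /mnt/scratch and verify
    sudo zfs create -o mountpoint=/mnt/scratch scratch/work
    zpool status scratch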

Slurm+OpenHPC is on GCP Marketplace

3/3/20 - We now have another flavor of fluid-slurm-gcp on GCP Marketplace with pre-installed OpenHPC packages.

In addition to the flexible multi-project/region/zone support of "classic" fluid-slurm-gcp, the fluid-slurm-gcp+openhpc solution includes:

  • Lmod environment modules
  • GCC 8.2.0 compilers
  • MPI-ready, with MPICH installed (MVAPICH and OpenMPI available soon; see the build-and-run example after this list)
  • Serial and parallel I/O libraries (HDF5, NetCDF, ADIOS)
  • HPC profilers and performance-tuning toolkits (Score-P, TAU, Scalasca)
  • Scientific libraries for HPC (MFEM, PETSc, Trilinos, and much more!)
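As a quick sketch of using this stack, the following compiles and runs an MPI hello-world. The module names follow common OpenHPC conventions (gnu8, mpich) but are assumptions here, so check module avail on your deployment:

    # Load the GCC + MPICH toolchain (module names are assumptions
    # based on OpenHPC conventions)
    module load gnu8 mpich

    # A minimal MPI program
    cat > hello.c << 'EOF'
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("Hello from rank %d\n", rank);
        MPI_Finalize();
        return 0;
    }
    EOF

    # Compile with the MPICH wrapper and launch 4 tasks via Slurm
    mpicc hello.c -o hello
    srun -n4 ./hello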

Get Started Now with Multi-Project SLURM on GCP Marketplace

Testing of our multi-project enhancement of the FLUID-SLURM-GCP Marketplace solution has wrapped up, and teams from around the world are already spinning up clusters to run their HPC applications.

2/11/20 Release Notes:

  • Multi-project support - integrates naturally with Fluid Numerics' fluid-slurm-gcp Terraform scripts
  • Multi-region partition configuration (Globally scalable)
  • Multi-zone partition configuration (High availability configuration)
  • Multi-machine partitions - add as many compute machines to a partition as you need
  • "Name your machines" - choose the name of the machines in your partition, rather than {deployment}-compute-{id}
  • Slurm accounting exposure in cluster-services - manage users and the partitions they have access to with cluster-services
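Once accounts are in place, standard Slurm accounting tools can confirm who has access to what. A hedged example using stock sacctmgr (output will vary by deployment):

    # List user-to-partition associations recorded in Slurm
    # accounting
    sacctmgr show associations format=Account,User,Partition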

Documentation for the new cluster-config schema can be found on our cluster-config help page.