Welcome to Fluid Numerics

Research Focused. Team Oriented. User Centered. Globally Scaled.

2019-nCoV & SARS-CoV-2: Fluid Response

We are available to respond rapidly to the needs of research teams conducting metagenomic and viral genomics studies related to COVID-19. We are developing and deploying resources that enable rapid deployment of a Slurm cluster that can connect to public datasets and process them at global scale. Google has also announced that resources are available to the biomedical community for research.

Cyberinfrastructure Stimulus Package

This package is structured to assist with team collaboration in a time of need and beyond. We are offering G Suite at special rates to help everyone.

Free Live Remote Training & Tutorials

Fluid Numerics is creating live training sessions and tutorials for Fluid-Slurm-GCP, HIP-Fortran, and other specialty topics that will be broadcast over Google Meet. Browse our current listing and register for any of the training or tutorial sessions.

Fluid-Slurm-GCP Version 2.3.0 is Released!

4/1/2020

  • (feature upgrade) GCP Marketplace solutions now come with read-write access scopes to GCS storage
  • (bugfix) Resolved an issue where compute nodes with hyperthreading disabled were given an incorrect core-count configuration in slurm.conf
  • python/2.7.1 and python/3.8.0 are now available under /apps and through environment modules (see the example below)
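
For example, the new Python module can be loaded on a login node as shown below. The module names come from this release; the exact module listing on your cluster may differ.

    $ module avail python
    $ module load python/3.8.0
    $ python3 --version
    Python 3.8.0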

The cluster-services CLI has been updated with the Version 2.3.0 release of fluid-slurm-gcp. Updates include

  • Updated help documentation
  • The default_partition item has been added to the cluster-config schema, allowing users to specify a default Slurm partition.
  • A --preview flag for all update commands lets you preview changes to your cluster before applying them (see the workflow sketch after this list)
  • The cluster-services add user --name flag has been removed. Individual users can be added to the default Slurm account using cluster-services add user <name>
  • Users can now obtain template cluster-config blocks using cluster-services sample all/mounts/partitions/slurm_accounts
  • User provided cluster-configs are now validated against /apps/cls/etc/cluster-config.schema.json
  • Added cluster-services logging to /apps/cls/log/cluster-services.log
  • Fixed an incorrect core-count bug with the partitions[].machines[].enable-hyperthreading flag
  • Removed the add/remove mounts/partitions options; mounts and partitions are now updated using update all, update mounts, and/or update partitions calls.
  • The add/remove user calls only add or remove a user in the default Slurm account; they are strictly convenience calls.
  • The cluster-config schema now specifies compute, controller, and login images in compute_image, controller_image, and login_image rather than in the partitions.machines, controller, and login list-objects.
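
Taken together, a typical configuration update with the updated CLI might look like the sketch below. Only commands and flags named in these release notes are used; the username is hypothetical, and the exact way your cluster-config file is edited and supplied may differ on your deployment.

    # Print a template partitions block to adapt into your cluster-config
    $ cluster-services sample partitions

    # After editing your cluster-config, preview the partition changes
    $ cluster-services update partitions --preview

    # Apply the changes; the config is validated against
    # /apps/cls/etc/cluster-config.schema.json and actions are logged to
    # /apps/cls/log/cluster-services.log
    $ cluster-services update partitions

    # Convenience call: add a user (hypothetical name) to the default Slurm account
    $ cluster-services add user jdoe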

Slurm-GCP-Ubuntu is on GCP Marketplace

3/11/20 - Fluid Numerics has released another flavor of fluid-slurm-gcp on GCP Marketplace that is based on the Ubuntu operating system!

In addition to the flexible multi-project/region/zone capabilities of "classic" fluid-slurm-gcp, the fluid-slurm-gcp-ubuntu solution includes

  • Ubuntu 19.10 Operating System
  • zfs-utils for ZFS filesystem management (but no Lustre kernels)
  • apt package manager
  • Environment modules, Spack, and Singularity (same as the classic fluid-slurm-gcp; a quick sanity check is sketched below)
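
After deploying, you can roughly confirm the stack from a login node with the standard commands below; this is a sketch using each tool's usual version and listing commands, not a transcript from our images.

    $ lsb_release -ds                       # expect an Ubuntu 19.10 release string
    $ apt list --installed 2>/dev/null | grep zfs
    $ module avail                          # environment modules
    $ spack --version
    $ singularity --version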

Slurm+OpenHPC is on GCP Marketplace

3/3/20 - We now have another flavor of fluid-slurm-gcp on GCP Marketplace with pre-installed OpenHPC packages.

In addition to the flexible multi-project/region/zone capabilities of "classic" fluid-slurm-gcp, the fluid-slurm-gcp+openhpc solution includes the following; a minimal MPI example is sketched after the list.

  • Lmod environment modules
  • GCC 8.2.0 compilers
  • MPI-ready with MPICH installed (MVAPICH & OpenMPI available soon)
  • Serial and parallel I/O libraries (HDF5, NetCDF, ADIOS)
  • HPC profilers and performance-tuning toolkits (Score-P, TAU, Scalasca)
  • Scientific libraries for HPC (MFEM, PETSc, Trilinos, and much more!)
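
As a minimal illustration, the sketch below compiles and submits an MPI "hello world" on this image. The gnu8 and mpich module names follow common OpenHPC conventions and are assumptions here, as are the file name and task count.

    $ cat hello_mpi.c
    #include <mpi.h>
    #include <stdio.h>
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("Hello from rank %d\n", rank);
        MPI_Finalize();
        return 0;
    }

    $ module load gnu8 mpich      # OpenHPC-style module names (assumed)
    $ mpicc hello_mpi.c -o hello
    $ sbatch --ntasks=4 --wrap "srun ./hello"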

Multi-Project Slurm is on GCP Marketplace

2/11/20 - We have concluded testing of our multi-project enhancement to our fluid-slurm-gcp marketplace solution.

These updates have been pushed to GCP Marketplace for fluid-slurm-gcp. They include

  • Multi-project support - naturally integrates with Fluid Numerics' fluid-slurm-gcp Terraform scripts
  • Multi-region partition configuration (Globally scalable)
  • Multi-zone partition configuration (High availability configuration)
  • Multi-machine partitions - add as many compute machines to a partition as you need
  • "Name your machines" - choose the name of the machines in your partition, rather than {deployment}-compute-{id}
  • Slurm accounting exposure in cluster-services - manage users and the partitions they have access to with cluster-services

Documentation for the new cluster-config schema can be found on our cluster-config help page.
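
For orientation only, a cluster-config exercising these features might look like the YAML sketch below. The default_partition, compute_image, and partitions[].machines[] items appear in our release notes; the remaining field names and values are illustrative assumptions, so defer to the cluster-config help page for the authoritative schema.

    # Hypothetical cluster-config sketch; fields not named in our release
    # notes are assumed, and region/zone settings are omitted.
    compute_image: "projects/my-project/global/images/my-compute-image"
    default_partition: "batch"
    partitions:
      - name: "batch"
        machines:
          - name: "noether"                  # "Name your machines"
            enable-hyperthreading: false     # corrected core-count handling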

We are a Google Cloud Partner

Please get in touch regarding HPC or scientific computing on Google Cloud Platform. Our team can help you migrate to the cloud or leverage the cloud to extend your resources to fit growing and changing needs.

Fluid Numerics: The Journal

Every day we have conversations in the office, and within ourselves, that we would like to share with the world. The conventional workflow for engaging with the scientific community is to complete your study, write a paper, find a journal, and submit the publication for peer review, after which it may ultimately be released to the community at large. Through our experience with this workflow, we have found a need to maintain public exposure and encourage a public forum before, if ever, releasing work to a scientific journal.

The opportunity to make progress in science will be greatly accelerated if we are able to collaborate and iterate continuously. We don't want to send you periodicals; we want to engage your imagination and possibly inspire action or aspiration. Please subscribe to Fluid Numerics: The Journal to receive updates on our team and our efforts within our domains.