A cool new Auto-Scaling feature is now available in preview on HPCBOX. The Intelligent AutoScaler, built into the HPCBOX platform, automatically starts the required number of Compute, GPU, and CUDA workers suited to a particular user job. It can also identify idle workers and power them off when no user jobs are waiting to be executed on the HPCBOX cluster.
The HPCBOX AutoScaler is designed to require almost zero configuration from the administrator (no special host groups, scale sets, or similar constructs to set up) and to be cloud vendor agnostic: when HPCBOX becomes available on other cloud platforms such as AWS or GCP, autoscaling should work the same way it does on Microsoft Azure, the current preferred platform for HPCBOX.
HPCBOX has two modes of operation. A cluster can be used for Personal Supercomputing, meaning it is dedicated to a single user, or it can run in a Multi-User configuration, a more traditional setup in which multiple users share a cluster and run different kinds of applications: distributed parallel, GPU accelerated, and visualization workloads on workers that have an OpenGL-capable GPU.
Dedicated single-user clusters do not generally require any kind of autoscaling functionality because a user operates such a cluster like their PC/laptop and has complete control over its operation. Multi-User setups, however, can involve considerably more complexity.
Let us walk through a use case to understand how the Intelligent AutoScaler in HPCBOX optimizes resource usage and budgets on the cluster.
The following picture represents an HPCBOX cluster that combines reserved resources (resources with a usage commitment, called Reserved Instances on Azure) and pay-as-you-go resources. To optimize budget spend in such a configuration, one would keep the reserved instances always powered on to provide baseline capacity for the cluster and automate the use of pay-as-you-go resources to minimize waste. We also assume the compute workers on this cluster span different hardware configurations; for example, on Azure, a combination of HB120rs_V2 and HB120-16rs_v3 (AMD EPYC “Rome” and “Milan” hardware, respectively), modeled in the sketch after this list:
Reserved Instances -> 3 HB120rs_V2
Pay-as-you-go -> 1 HB120rs_V2 and 2 HB120-16rs_v3
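To make the example concrete, here is a minimal sketch in Python of how such a mixed fleet could be modeled. All names and fields here are hypothetical, for illustration only; HPCBOX does not expose this as an API.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str        # worker hostname (illustrative)
    sku: str         # Azure VM size
    cores: int       # cores available to jobs
    reserved: bool   # True = Reserved Instance (baseline), False = pay-as-you-go
    powered_on: bool # current power state
    busy: bool       # currently running a job

# The example fleet: three reserved HB120rs_V2 workers that stay on,
# plus pay-as-you-go workers the AutoScaler powers on and off on demand.
fleet = [
    Worker("rome-r1",  "HB120rs_V2",     120, True,  True,  False),
    Worker("rome-r2",  "HB120rs_V2",     120, True,  True,  False),
    Worker("rome-r3",  "HB120rs_V2",     120, True,  True,  False),
    Worker("rome-p1",  "HB120rs_V2",     120, False, False, False),
    Worker("milan-p1", "HB120-16rs_v3",  16,  False, False, False),
    Worker("milan-p2", "HB120-16rs_v3",  16,  False, False, False),
]
```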
When a job which can be satisfied by the baseline resources comes into the system, the AutoScaler does nothing and lets the job get scheduled on the available compute workers.
When new jobs come into the system, the AutoScaler springs into action: it matches each job to the most suitable hardware, intelligently calculates the number of workers required to satisfy the job, and powers them on with no admin/user interaction. For example, in the image below, two jobs have entered the system, each suited to a different hardware configuration: the AMD EPYC “Rome” powered HB120rs_V2 and the AMD EPYC “Milan” powered 16-core HB120-16rs_v3.
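Building on the fleet model sketched above, the scale-up decision could look roughly like the following. The job attributes (`preferred_sku`, `cores_needed`) are assumptions for illustration, not the actual HPCBOX implementation.

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    preferred_sku: str   # hardware the job is best suited to
    cores_needed: int

def scale_up(fleet, job):
    """Power on the fewest powered-off workers of the job's preferred SKU
    needed to cover its core requirement; return the workers started."""
    # Cores already available on powered-on, idle workers of that SKU.
    idle_cores = sum(w.cores for w in fleet
                     if w.sku == job.preferred_sku and w.powered_on and not w.busy)
    shortfall = job.cores_needed - idle_cores
    started = []
    for w in fleet:
        if shortfall <= 0:
            break
        if w.sku == job.preferred_sku and not w.powered_on:
            w.powered_on = True   # in reality, a cloud API call to start the VM
            started.append(w)
            shortfall -= w.cores
    return started

# e.g. a "Milan"-suited job needing 32 cores powers on both HB120-16rs_v3 workers:
started = scale_up(fleet, Job("job-42", "HB120-16rs_v3", 32))
```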
When jobs exit the system, the AutoScaler automatically identifies the idle workers and powers them off while maintaining the baseline configuration of the cluster.
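The corresponding scale-down step can be sketched the same way: power off anything idle that is not part of the reserved baseline. Again, this is a hypothetical illustration, not HPCBOX internals.

```python
def scale_down(fleet):
    """Power off idle pay-as-you-go workers; reserved (baseline) workers
    stay on, preserving the cluster's baseline capacity."""
    stopped = []
    for w in fleet:
        if w.powered_on and not w.busy and not w.reserved:
            w.powered_on = False   # in reality, a cloud API call to deallocate the VM
            stopped.append(w)
    return stopped
```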
Visibility into the AutoScaler's activity is important for users to know when their jobs might start. HPCBOX includes a new AutoScaler event monitoring stream that is automatically updated with every action the AutoScaler is currently taking and will take in upcoming iterations.
The HPCBOX AutoScaler is now available in preview, and we would be very pleased to run a demo, POC, or pilot with you to optimize your cloud spend on HPC resources while making sure your jobs are always matched to the most suitable hardware. Schedule a meeting here.
Founder and CTO, Drizti Inc
All third-party product and company names are trademarks or registered trademarks of their respective holders. Use of them does not imply any affiliation or endorsement by them.
This is one of the most exciting posts I’ve written in my 15+ years in the High Performance Computing (HPC) industry.
AMD EPYC Zen 3 “Milan” is integrated, tested and available on the HPCBOX platform on Microsoft Azure, on LAUNCH DAY!
This is something that seldom happens in the HPC world: very few users ever get to start production on day one of the release of an impressive new processor generation.
I will not be doing any technical comparison between Zen 3 and Zen 2, nor will I present any application-specific benchmark numbers in this post. I am sure there will be many posts from AMD, Microsoft, and other ISVs publicly sharing the performance boost they see for their codes with EPYC Zen 3 “Milan”. This post is about HPCBOX and how we delivered this upgrade experience for our users on launch day, without ever getting our hands on the physical hardware or being physically present in a datacenter (actually, datacenters in multiple regions)! Awesome work and support from the Azure team!
Drizti was a launch partner for the new HBv3 instance size on Microsoft Azure. These instances are powered by the new EPYC 7xx3 “Milan” CPUs and HDR InfiniBand, and sport very impressive dual NVMe drives that give a big performance boost to applications that use local scratch, especially when the drives are striped.
Read more about HBv3 here
Unbelievably quick CPU generation upgrade
It all started with Drizti getting access to 1000+ cores of HBv3 instances for functional and compatibility testing, to make sure we would be ready for GA availability of HBv3 on launch day of the AMD EPYC “Milan” CPUs. We went through testing and added the necessary support within the HPCBOX platform to take advantage of the cool new features on these instances: automated NVMe striping, MPI compatibility, the workflow component of HPCBOX, auto-scaling/shutdown/start, and all the other functions offered by the HPCBOX platform. In addition, we ran performance tests with different ISV and open-source codes, mainly to make sure the HPCBOX workflow engine could correctly handle the new instances and automatically optimize application pipelines to take advantage of the new hardware.
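As an illustration of what automated NVMe striping involves (the HPCBOX automation itself is internal to the platform), the dual local NVMe drives on an HBv3 instance can be combined into one striped scratch volume roughly like this. Device names and the mount point are assumptions; they vary by instance and image.

```python
import subprocess

def stripe_local_nvme(devices=("/dev/nvme0n1", "/dev/nvme1n1"),
                      array="/dev/md0", mount_point="/mnt/scratch"):
    """Combine local NVMe drives into a single RAID 0 (striped) scratch volume.
    Must run as root on Linux with mdadm installed."""
    # Build a RAID 0 array across the drives (--run skips the confirmation prompt).
    subprocess.run(["mdadm", "--create", array, "--run", "--level=0",
                    f"--raid-devices={len(devices)}", *devices], check=True)
    # Create a filesystem on the array and mount it as local scratch.
    subprocess.run(["mkfs.xfs", array], check=True)
    subprocess.run(["mkdir", "-p", mount_point], check=True)
    subprocess.run(["mount", array, mount_point], check=True)
```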
Some of the applications we tested were ANSYS CFX, ANSYS Fluent, and OpenFOAM. At a high level, we can share that we are seeing impressive performance benefits from EPYC “Milan”, particularly for large runs. For applications that are local-scratch bound, we also expect users to get a good boost from the striped NVMe drives in HBv3.
We also tested how easy it would be to upgrade HPC hardware for our users and were really impressed: we could switch our users from HBv2 to HBv3, i.e. from “Rome” to “Milan” and to upgraded local scratch, in under 30 minutes! This is mainly due to the design of the HPCBOX platform: it is self-contained, which gives us the ability to fine-tune platform capabilities quickly without depending on external services being upgraded first.
Agile and Impressive!