Massively Parallel (MP) describes problem spaces in which many tasks work in lock step on a single algorithm and are tightly coupled to one another, so the primary bottleneck is the speed and latency of communication between the tasks.
MP Clusters tend to have a high-speed, low-latency network backbone (we currently install InfiniBand). On these clusters we run SLURM, a combined resource manager and scheduler: it manages jobs and compute resources, and schedules them across time and the available hardware according to resource characteristics such as available memory, CPU cores, and/or GPU cores.
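As a sketch, a minimal SLURM batch script might look like the following (the job name, partition, and application binary `my_mpi_app` are hypothetical placeholders, not names from our clusters):

```shell
#!/bin/bash
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --nodes=2                 # request two compute nodes
#SBATCH --ntasks-per-node=16      # 16 tasks (MPI ranks) per node
#SBATCH --mem=8G                  # memory per node
#SBATCH --time=01:00:00           # wall-clock limit

# Launch the (hypothetical) MPI application under SLURM's process manager.
srun ./my_mpi_app
```

Submitted with `sbatch`, the job waits in the queue until SLURM can satisfy the requested node, memory, and time constraints.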
OpenMPI is a high-performance inter-process messaging platform. We currently support Fermi Scientific Linux1, which is essentially a debranded variant of Red Hat Enterprise Linux. Some members have also used OpenMP for jobs that constrain all of their tasks to a single compute node.
Our MP Clusters come with their own internal storage, which is accessible only via the internal network. SEAS Home Directories are not mounted locally, but you can use rsync or scp to transfer files for backup, visualization, or reporting purposes.
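For example, results can be pulled from a cluster's internal storage back to another machine like this (the hostname and paths here are hypothetical placeholders, not actual cluster names):

```shell
# Pull a results directory from the cluster; run from the destination
# machine. "cluster.example.edu" and the paths are placeholders.
rsync -avz user@cluster.example.edu:/scratch/user/results/ ~/results/

# Or copy a single file with scp.
scp user@cluster.example.edu:/scratch/user/results/summary.csv ~/
```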
SEAS HPC Clusters
CETS also maintains some resources which are shifting into retirement phases, as more efficient and modern shared systems take their place.
1 Fermi Scientific Linux 7 is supported until RHEL 7's end of life in 2024. We are actively monitoring the status of the Rocky Linux Project with respect to forthcoming installations of a RHEL 8 variant.