Massively Parallel (MP) describes problem spaces in which many tasks work in lock step on a single algorithm and are tightly coupled to one another, so that the primary bottleneck is the bandwidth and latency of communication between the tasks.
MP Clusters tend to have a high-speed, low-latency network backbone, such as Myrinet or InfiniBand (or, nowadays, 40Gb or 100Gb Ethernet). On these systems we typically run a resource manager such as Torque/PBS or SLURM to manage jobs and compute resources, and a scheduler such as Maui or SLURM to handle complex scheduling across time and across possibly heterogeneous compute resources.
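As a rough illustration of how work is submitted under such a resource manager, a SLURM batch script might look like the following. This is a minimal sketch, not a site-specific template: the job name, node counts, time limit, and program name (`my_mpi_program`) are all placeholder assumptions.

```shell
#!/bin/bash
#SBATCH --job-name=mpi-demo       # hypothetical job name shown in the queue
#SBATCH --nodes=4                 # number of compute nodes requested
#SBATCH --ntasks-per-node=8       # MPI tasks to launch on each node
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=mpi-demo-%j.out  # output file; %j expands to the job ID

# Launch the program across all allocated tasks.
srun ./my_mpi_program
```

The script is submitted with `sbatch`, after which the scheduler decides when and where the allocated tasks actually run.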
OpenMPI, our most popular messaging platform, handles the high-performance inter-task communication, and our OS of choice is Fermi Scientific Linux, which is essentially a variant of Red Hat Enterprise Linux. Some members use OpenMP for jobs that constrain all of their tasks to a single compute node.
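To make the messaging model concrete, here is a minimal MPI program in C, in which each task reports its rank; it is a generic sketch of the standard MPI API rather than any site-specific code, and it needs an MPI toolchain (e.g. `mpicc` and `mpirun` from OpenMPI) to build and run.

```c
/* Minimal MPI example: every task prints its rank.
   Build: mpicc hello_mpi.c -o hello_mpi
   Run:   mpirun -np 4 ./hello_mpi */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this task's ID within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of tasks */

    printf("Task %d of %d reporting in\n", rank, size);

    MPI_Finalize();                       /* shut the runtime down cleanly */
    return 0;
}
```

Each task runs the same binary; the rank returned by `MPI_Comm_rank` is what lets tasks divide the work and address messages to one another.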
Our MP Clusters come with their own internal storage, which is accessible only via the internal network. SEAS Home Directories are not mounted locally.
SEAS HPC Clusters
CETS also maintains some resources that are being phased into retirement as more efficient, modern shared systems take their place.