Welcome to HPC and Research computing at the School of Engineering and Applied Sciences at the University of Pennsylvania!

CETS provides a wide range of service offerings under different circumstances. We aim to simplify and standardize our offerings around the most practical and efficient services and products available at any given time, provided they meet the needs of our researchers. In general, our HPC researchers’ problem domains fall into two technical areas: Massively Parallel (MP) or Embarrassingly Parallel (EP).

Our research groups utilize faculty-purchased, CETS-managed MP Clusters, EP Grids, or specialized compute resources. Scroll down to see our existing installations.

The former Liniac Project, née Eniac2K, has been folded into CETS organizationally.

Usage Policies for these resources include, but are not limited to, those found at the University of Pennsylvania Acceptable Use of Electronic Resources Policy site. The School of Engineering and Applied Sciences, and the faculty who own certain resources, may impose further restrictions or limitations on the acceptable use of some of these resources.

MP Clusters

Massively Parallel (MP) describes problem spaces in which many tasks work in lockstep to solve an algorithm, with each task tightly coupled to the others, such that the primary bottleneck is the speed and latency of communication between the tasks.

MP Clusters tend to have a high-speed, low-latency network backbone (we currently install InfiniBand). On these clusters we run SLURM, a combined resource manager and scheduler that manages jobs and compute resources, scheduling them across time according to availability and resource characteristics such as memory, CPU cores, and/or GPU cores.
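As a sketch, a minimal SLURM batch script for an MP job might look like the following. The job name, resource requests, and program name are illustrative placeholders, not site defaults; the `#SBATCH` directives themselves are standard SLURM options.

```shell
#!/bin/bash
#SBATCH --job-name=mp-example      # job name shown in squeue
#SBATCH --nodes=2                  # request two compute nodes
#SBATCH --ntasks-per-node=16       # MPI ranks to launch per node
#SBATCH --mem=32G                  # memory per node
#SBATCH --time=01:00:00            # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out         # output file named job-name + job ID

# srun launches the program across all allocated nodes and tasks.
srun ./my_mpi_program
```

The script would be submitted with `sbatch job.sh`, and its progress monitored with `squeue` and `sacct`.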

For inter-process messaging we support OpenMPI, a high-performance implementation of the Message Passing Interface (MPI). Our clusters currently run Fermi Scientific Linux [1], which is essentially a debranded variant of Red Hat Enterprise Linux. Some members instead use OpenMP for jobs that constrain all of their tasks to a single compute node.
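As a hedged illustration of the two approaches, the commands below use the standard OpenMPI wrapper compiler and launcher, and GCC's OpenMP flag; the source file names and process/thread counts are hypothetical.

```shell
# MPI: compile with the OpenMPI wrapper compiler, then launch 32 ranks.
# Under a SLURM allocation, mpirun discovers the assigned nodes itself.
mpicc -O2 -o hello_mpi hello_mpi.c
mpirun -np 32 ./hello_mpi

# OpenMP: a single-node, shared-memory alternative. Thread count is
# controlled by the OMP_NUM_THREADS environment variable.
gcc -fopenmp -O2 -o hello_omp hello_omp.c
OMP_NUM_THREADS=16 ./hello_omp
```

The key design distinction: MPI ranks are separate processes that can span nodes and communicate over the network, while OpenMP threads share one node's memory.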

Our MP Clusters come with their own internal storage, which is accessible only via the internal network. SEAS Home Directories are not mounted locally, but you can use rsync/scp to transfer files for backup, visualization, or reporting purposes.
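For example, results on a cluster's internal storage can be pulled back with rsync or scp; the hostname, username, and paths below are illustrative placeholders, not actual cluster addresses.

```shell
# Mirror a results directory from the cluster to a local backup folder.
# -a preserves permissions and timestamps, -v is verbose, -z compresses.
rsync -avz \
    user@cluster.example.seas.upenn.edu:/scratch/user/results/ \
    ~/results-backup/

# A single file can also be fetched with scp.
scp user@cluster.example.seas.upenn.edu:/scratch/user/plot.png .
```

Because rsync transfers only changed files on subsequent runs, it is well suited to periodic backups of ongoing job output.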

How to use MP Clusters (SLURM)

SEAS HPC Clusters

CETS also maintains some resources which are shifting into retirement phases, as more efficient and modern shared systems take their place.

[1] Fermi Scientific Linux 7 is supported until RHEL 7’s end of life in 2024. We are actively monitoring the status of the Rocky Linux Project with respect to forthcoming installations of a RHEL 8 variant.

EP Grids

Embarrassingly Parallel (EP) problem spaces are typified by discrete subparts that can each be solved efficiently without cross-communication, possibly coordinated by some pre-processing and post-processing steps, but consisting mainly of many independent tasks doing their own work.

EP Grids tend to be simpler and less expensive, and given the problem domains, we have typically targeted lower memory and faster local disks. However, since these are shared resources, higher-memory configurations, and larger local disks as well, are increasing in popularity. In this space we have stuck with descendants of Sun Grid Engine (SGE), moving first to Open Grid Scheduler (OGS) and now to Son of Grid Engine (SoGE). SoGE, née OGS, née SGE, handles job and resource management as well as scheduling.
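As a sketch of how an EP workload maps onto Grid Engine, the array-job script below runs many independent copies of a program, one per input file. The job name, memory request, and program name are illustrative placeholders, and resource names such as `h_vmem` can vary per installation; the `#$` directive syntax and `SGE_TASK_ID` variable are standard Grid Engine features.

```shell
#!/bin/bash
#$ -N ep-example       # job name
#$ -cwd                # run from the submission directory
#$ -l h_vmem=4G        # per-slot memory limit
#$ -t 1-100            # array job: 100 independent tasks

# Each array task receives its own index in SGE_TASK_ID and processes
# its own input file, with no communication between tasks.
./process_input "input.${SGE_TASK_ID}"
```

The script would be submitted with `qsub job.sh`; the scheduler then farms the 100 tasks out across the grid as slots become free, which is exactly the independent-task pattern EP problems call for.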

Our EP Grids mount SEAS Home Directories, and typically do not contain their own separate storage. In our standard configuration, all compute nodes are directly externally accessible.

How to use EP Grids

How to use Linux Servers