Home

Welcome to HPC and research computing at the School of Engineering and Applied Sciences at the University of Pennsylvania!

CETS provides a broad range of research computing services. We aim to simplify and standardize our offerings around the most practical and efficient services and products available at any given time, provided they meet the needs of our researchers. In general, our HPC researchers’ problem domains fall into two technical areas: Massively Parallel (MP) and Embarrassingly Parallel (EP).

Our research groups utilize faculty-purchased, CETS-managed MP Clusters, EP Grids, or specialized compute resources. Scroll down to see our existing installations.

The former Liniac Project (née Eniac2K) has been folded into CETS organizationally.

Usage Policies for these resources include, but are not limited to, those found at the University of Pennsylvania Acceptable Use of Electronic Resources Policy site. The School of Engineering and Applied Sciences, and the faculty who own certain resources, may impose further restrictions or limitations on the acceptable use of some of these resources.

MP Clusters

Massively Parallel (MP) describes problem spaces in which many tasks work in lockstep to solve a problem, with each task so tightly coupled to the others that the primary bottleneck is the speed and latency of communication between tasks.

MP Clusters tend to have a high-speed, low-latency network backbone such as Myrinet or InfiniBand (or, nowadays, 40Gb or 100Gb Ethernet). On these systems we typically run a resource manager such as Torque/PBS or SLURM to manage jobs and compute resources, and a scheduler such as Maui or SLURM to handle complex scheduling across time and possibly heterogeneous compute resources.

OpenMPI is our most popular messaging platform, handling the high-performance inter-task communication, and Fermi Scientific Linux, essentially a variant of Red Hat Enterprise Linux, is our OS of choice. Some groups use OpenMP for jobs that constrain all of their tasks to a single compute node.
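
To make this concrete, here is a minimal hybrid MPI + OpenMP "hello world" in C. It is an illustrative sketch rather than a site-specific template; the compile and run commands in the comments (mpicc -fopenmp, mpirun) are typical of an OpenMPI toolchain but are assumptions, not a prescription for any particular cluster.

    /* Minimal hybrid MPI + OpenMP hello world (illustrative sketch).
     * Assumed build:  mpicc -fopenmp -o hello hello.c
     * Assumed run:    mpirun -np 4 ./hello   (or via the cluster's scheduler)
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this task's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of tasks */
        MPI_Get_processor_name(host, &namelen); /* which compute node we are on */

        /* Each MPI rank spawns an OpenMP thread team confined to its own node. */
        #pragma omp parallel
        printf("rank %d of %d on %s, thread %d of %d\n",
               rank, size, host,
               omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }

Each MPI rank handles the inter-node, message-passing side of the work, while its OpenMP threads stay on a single compute node, which is exactly the division of labor described above.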

Our MP Clusters come with their own internal storage, which is accessible only via the cluster's internal network. SEAS Home Directories are not mounted locally.

How to use MP Clusters (SLURM)

How to use MP Clusters (Torque/Maui)

SEAS HPC Clusters

CETS also maintains some resources that are being phased into retirement as more efficient, modern shared systems take their place.

EP Grids

Embarrassingly Parallel (EP) problem spaces are typified by discrete subparts that can each be solved efficiently without cross-communication: aside from some pre-processing and post-processing steps to coordinate, the workload consists of many independent tasks, each doing its own thing.

EP Grids tend to be simpler and less expensive, and given the problem domains, we have typically aimed for nodes with modest memory and fast local disks. However, since these are shared resources, higher-memory nodes and larger local disks are becoming increasingly popular. In this space we have stuck with descendants of Sun Grid Engine: first Open Grid Scheduler, and now Son of Grid Engine. SoGE (née OGS, née SGE) handles job and resource management as well as scheduling.
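
As a concrete illustration of the EP pattern, here is a minimal C sketch of an array-job worker: submitted as a grid engine array job, each task reads the SGE_TASK_ID environment variable set by the scheduler and processes its own slice of the input with no inter-task communication. The input/output file naming scheme below is a made-up example, not a site convention.

    /* Illustrative embarrassingly parallel worker (sketch, not a site recipe).
     * Submitted as a grid engine array job, each task gets a distinct
     * SGE_TASK_ID and works on its own chunk independently.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Set by the grid engine for array-job tasks. */
        const char *id = getenv("SGE_TASK_ID");
        if (id == NULL) {
            fprintf(stderr, "SGE_TASK_ID not set; run as an array job or export it by hand\n");
            return 1;
        }

        long task = strtol(id, NULL, 10);

        /* Hypothetical naming scheme: input.<N> -> output.<N>. */
        char in_name[64], out_name[64];
        snprintf(in_name, sizeof in_name, "input.%ld", task);
        snprintf(out_name, sizeof out_name, "output.%ld", task);

        printf("task %ld: would read %s and write %s\n", task, in_name, out_name);
        return 0;
    }

Because no task needs to talk to any other, the scheduler is free to place these tasks on whichever nodes are available, which is what makes EP workloads a good fit for the simpler grid hardware described above.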

Our EP Grids mount SEAS Home Directories and typically do not have their own separate storage. In our standard configuration, all compute nodes are directly accessible from outside the grid.

How to use EP Grids

How to use Linux Servers

SEAS HPC Grids