Discovery, Polaris and Andes: High Performance Computing (HPC) quick comparison

Dartmouth College provides robust High Performance Computing (HPC) resources designed to support researchers in running compute-intensive applications efficiently. Our HPC cluster comprises three primary systems: Polaris, Andes, and Discovery. Each system is tailored to address specific computational needs, offering substantial resources in terms of memory and scratch space to facilitate high-demand research tasks.

With over 3,000 CPU cores, 120,000 GPU cores, more than 12 TB of memory, and approximately 3.2 PB of storage across all systems, our HPC resources empower researchers to run compute-intensive, large-memory programs quickly and efficiently, while keeping their data secure and accessible.

Overview of HPC Systems:

Andes:

  • Configuration : 128 CPU cores, 64-bit platform with 1.5 TB of memory.
  • Purpose : Designed for running statistical packages and scientific applications that demand large amounts of memory and scratch space.
  • Scratch Space : 5 TB of fast local scratch space available.
  • Authentication : Access via NetID, with home and lab shared directories on DartFS.
  • Account Availability : Available for all faculty, staff, and graduate students, plus sponsored accounts upon request.
  • Access Method : Remote login via SSH.
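
A remote login session can be opened from any terminal with an SSH client. The hostname below is an illustrative assumption, not a confirmed address; check the Research Computing documentation for the actual system hostname.

```shell
# Log in to Andes with your Dartmouth NetID.
# "andes.dartmouth.edu" is a placeholder hostname, not a verified address.
ssh your_netid@andes.dartmouth.edu
```

After authenticating with your NetID credentials, you will land in your DartFS home directory.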

Polaris: 

  • Configuration : 192 CPU cores, 1.5 TB of memory.
  • Purpose : Primarily used for computationally intensive programs such as Mathematica, MATLAB, and statistical applications requiring extensive shared memory.
  • Scratch Space : 5 TB of fast local scratch space available.
  • Authentication : Access via NetID, with home and lab shared directories on DartFS.
  • Account Availability : Open to all faculty, staff, and students, with sponsored accounts available upon request.
  • Access Method : Remote login via SSH.

Discovery:

  • Job Management : Uses the Slurm scheduler and a batch-scripting language for job queuing, ensuring efficient resource allocation.
  • Monitoring Features : Provides tools for users to monitor resource load and utilization.
  • Resource Distribution : Designed for optimal CPU usage and equitable distribution through job scheduling.
  • Cost Model : Information on the community model costs can be found on the relevant webpage here.
  • Tutorials and Documentation : Comprehensive resources on job scheduling, Discovery operations, and the Slurm scheduler are available here.
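
On a Slurm-managed system such as Discovery, work is submitted as a batch script rather than run interactively. The following is a minimal sketch; the job name, resource requests, and script name are illustrative placeholders, not actual Discovery partition names or limits.

```shell
#!/bin/bash
# Minimal Slurm batch script (all values below are illustrative
# placeholders, not verified Discovery settings).
#SBATCH --job-name=my_analysis
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

# Run the workload once resources are allocated.
python my_script.py
```

Submit the script with `sbatch my_job.sh`, monitor your queued and running jobs with `squeue -u $USER`, and view overall node availability with `sinfo`.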

Need an HPC account? While on the Dartmouth network, you can request one here.

If you require software that is not currently available, please feel free to submit a request. You can email us at Research.Computing@dartmouth.edu for any additional application software needs.