Discovery, Polaris and Andes: High Performance Computing (HPC) quick comparison


Our High Performance Computing (HPC) resources allow researchers to run compute-intensive, large-memory programs quickly and efficiently, and to store data securely and accessibly.

  • Polaris
    • easy to log in at the command line and use
    • used for running computationally intensive programs such as Matlab, Stata, Mathematica, and statistical applications, or programs that require large amounts of shared memory. It has ~5 TB of local scratch space available as of January 2024
    • more about requesting an account to get started with Polaris 
  • Andes
    • easy to log in at the command line and use
    • used for running statistical packages and scientific applications that need large amounts of memory and scratch space; ~5 TB of fast scratch space is available as of January 2024
    • more about requesting an account to get started with Andes 
  • Discovery
    • uses a 'scheduler' program to submit jobs to a queue rather than running them interactively (for example, .m Matlab programs can be run, but not from within the Matlab GUI)
    • has tools to view and monitor the load on its computational resources
    • job submissions to Discovery are channeled through the 'Slurm' scheduler, which uses a batch-scripting language to manage and allocate computational resources efficiently. This ensures an equitable distribution of resources and optimal CPU usage. For guidance on scheduling jobs, please refer to the Scheduling Jobs to Run - Slurm Overview
    • see this page on the cost of the community model
    • tutorials and details about Discovery and the Slurm scheduler, and the overall service description
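To illustrate the batch-scripting approach described above, here is a minimal sketch of a Slurm job script. The job name, resource requests, and the commented-out program invocation are placeholders, not Discovery-specific values; actual partition names, account settings, and module names would need to be taken from the Discovery documentation linked above.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch (placeholder values; adapt to
# Discovery's actual partitions, accounts, and installed software).
#SBATCH --job-name=example-job      # name shown in the queue
#SBATCH --nodes=1                   # run on a single node
#SBATCH --ntasks=1                  # one task (process)
#SBATCH --cpus-per-task=4           # CPU cores for the task
#SBATCH --mem=8G                    # memory request
#SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
#SBATCH --output=example-%j.out     # %j expands to the Slurm job ID

# The commands below run on the allocated compute node.
echo "Running on $(hostname)"

# A non-interactive Matlab run might look like this
# (hypothetical module name shown for illustration):
# module load matlab
# matlab -batch "my_analysis"
```

A script like this would be submitted with `sbatch example.sh`, monitored with `squeue -u $USER`, and cancelled with `scancel <jobid>`; these are standard Slurm commands.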

Details

Article ID: 156507
Created
Thu 1/18/24 9:57 AM
Modified
Thu 2/22/24 4:00 PM