
Slurm partition overview

Discovery Slurm Partitions Overview:
Partition: standard | Time Limit: 30 days | Description: Default partition for general use | Nodes: k24-k55, m01-m20
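
Partition limits and node states can also be checked on the cluster itself with Slurm's sinfo command; a minimal sketch, using the standard partition named above (the output format string is generic Slurm usage, not quoted from the article):

sinfo -p standard                # nodes and their state in the standard partition
sinfo -o "%P %l %D %N"           # partition, time limit, node count, and node list for all partitions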

Slurm Coordinator Role for Managing Users in an Account

Overview: The Slurm Coordinator role enables administrators to efficiently manage users within an account by granting permissions to add or remove users. This KBA provides step-by-step instructions on
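
For context, a minimal sketch of the kind of sacctmgr commands a coordinator can run; the user and account names below are placeholders, not values from the article:

sacctmgr add user netid123 account=lab_account           # grant netid123 access to the account
sacctmgr remove user name=netid123 account=lab_account   # revoke that access
sacctmgr show assoc account=lab_account                  # list current users associated with the account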

Slurm overview

There are many new Slurm commands available on the Discovery cluster. Common user commands in Slurm include:
sbatch: sbatch <job script> - submit a batch job to the queue
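
A short sketch of the standard Slurm user commands this kind of overview covers (generic Slurm usage, not specific to Discovery; the script name and job ID are illustrative):

sbatch job.sh          # submit a batch job script to the queue
squeue -u $USER        # list your queued and running jobs
scancel 12345          # cancel a job by job ID
sinfo                  # show partition and node status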

Create a container image for HPC

fibonacci.tar
Transfer the image to Discovery:
scp fibonacci.tar netid@discovery:
Build an Apptainer image from the Docker image:
ssh netid@discovery
apptainer build fibonacci docker
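
A hedged sketch of this workflow end to end, using standard Docker and Apptainer syntax; the fibonacci:latest tag, the .sif output name, and the docker-archive:// URI are assumptions based on common usage, not quoted from the article:

docker save -o fibonacci.tar fibonacci:latest                  # export the local Docker image to a tar archive
scp fibonacci.tar netid@discovery:                             # copy the archive to your home directory on Discovery
ssh netid@discovery
apptainer build fibonacci.sif docker-archive://fibonacci.tar   # build a SIF image from the Docker archive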

Tools for Researchers

, data storage, Caligari Web and database servers
Accounts for Researchers - For general research computing, data storage, and Discovery cluster accounts
Software Resources - Applications in

When do I need to use the VPN client?

solution has enough capacity by only using it when you need to. Requires VPN:
DartFS - Network Attached Storage used in labs, HPC, and web applications
Discovery (HPC)
MyFiles/OurFiles (File

Leaving Dartmouth - Research Computing resources

When you leave Dartmouth you will no longer be able to log in to the multi-user Linux systems such as Polaris, Andes, and the Discovery cluster. You will also lose access to DartFS network

MobaXterm

settings when you ssh to a server. To run an app on Discovery that will appear on your local screen:
Click the "Session" button to start a new session
Click SSH for the session type
Enter

Graduating Students - Research Computing Resources

Discovery cluster. You will also lose access to DartFS network storage. Deadline: Approximately 60 days after graduating, any DartFS storage that is in your name will be deleted. After 60

Scheduling Jobs

The Batch System: The batch system used on Discovery is Slurm. Users log in via ssh to one of the submit nodes and submit jobs to be run on the compute nodes by writing a script file that
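
A minimal sketch of that workflow, run from a submit node; the script name, job name, and resource values are illustrative, not taken from the article:

cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=test_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:10:00
echo "Running on $(hostname)"
EOF
sbatch myjob.sh        # submit the script to the queue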

GPU Job Example

#!/bin/bash
# Name of the job
#SBATCH --job-name=gpu_job
# Number of compute nodes
#SBATCH --nodes=1
# Number of cores, in this case one
#SBATCH --ntasks-per-node=1
# Request the GPU
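
In generic Slurm, a GPU is typically requested with a gres directive such as the line below; the count of one GPU is an assumption, and Discovery may also require a specific partition or account for GPU jobs:

#SBATCH --gres=gpu:1    # request one GPU on the allocated node (count is illustrative)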

Multi-Core Job Example

Below is an example script which will submit for 4 cores on a single compute node. Feel free to copy and paste it as a job template.

#!/bin/bash
#SBATCH --job-name=multicore_job
#SBATCH
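
A hedged sketch of how a complete 4-core, single-node script typically looks in generic Slurm; the time limit, the cpus-per-task approach, and the program name are illustrative assumptions, not taken from the article:

#!/bin/bash
#SBATCH --job-name=multicore_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
srun ./my_threaded_program    # hypothetical program that uses the 4 allocated cores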

Single-Core Job Example

sample script are described in a comment line that precedes the directive. The full list of available directives is explained in the man page for the sbatch command, which is available on Discovery
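
A minimal sketch of a single-core script in that style, with each directive preceded by an explanatory comment; the job name, time limit, and echo command are illustrative, not quoted from the article:

#!/bin/bash
# Name of the job
#SBATCH --job-name=single_core_job
# Run on a single node
#SBATCH --nodes=1
# Use one task (core) on that node
#SBATCH --ntasks-per-node=1
# Wall-clock time limit
#SBATCH --time=00:10:00
echo "Hello from $(hostname)"

Submitted with sbatch, this runs the echo command on one core of a single compute node.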

Job array example

JOB ARRAYS
Job arrays are multiple jobs to be executed with identical or related parameters. Job arrays are submitted with -a <indices> or --array=<indices>. The indices specification identifies
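
A hedged sketch of a job array script using the --array form described above; the index range, the program, and the per-index input file naming are illustrative assumptions:

#!/bin/bash
#SBATCH --job-name=array_job
#SBATCH --array=1-10
#SBATCH --ntasks=1
# SLURM_ARRAY_TASK_ID holds this task's index (1 through 10 here)
./process_input input_${SLURM_ARRAY_TASK_ID}.dat    # hypothetical per-index input files

Submitting this one script with sbatch queues ten tasks, one per index.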

Research Computing Bill of Rights

Performance Computing (HPC), GIS, data visualization, statistical analysis, software development and more
Access to HPC resources (Discovery, Andes, and Polaris)
DartFS: personal and shared data storage