Investing in Discovery
Overview
The DISCOVERY Cluster is an exciting opportunity for researchers to participate in creating a world-class supercomputer devoted to furthering research at Dartmouth.
Researchers
Discovery Cluster
, large memory programs quickly and efficiently; and store data securely and accessibly.
Discovery Cluster details
Discovery is a Linux cluster that in aggregate contains 128 nodes, 6296 CPU cores, 54.7TB of memory, and more than 2.8 PB of disk space.
Node Hardware Breakdown
Cell | Vendor
Discovery Setup for Mac
Access to Discovery cluster from Macintosh ... discovery-NETID.terminal ... Windows.
To display graphical output from remote Linux software, you'll need XQuartz or FastX (see below); however, this is less important on Discovery, since it is a batch-scheduled environment.
Migrating a legacy Discovery home directory to DartFS
Step by step instructions for migrating data from a legacy Discovery home directory to a DartFS home directory. ... Background
New home directories for Research Computing servers are in DartFS. When you have a DartFS home directory, you can log in to all of our systems (Discovery, Polaris, Andes, etc.) using
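Logging in to these systems is typically done over SSH. A minimal sketch, with the caveat that both the NETID and the hostname below are placeholders rather than confirmed values (check https://rc.dartmouth.edu/ for the actual login hostnames):

```shell
# NETID and the hostname are placeholders; with a DartFS home directory
# the same credentials work across Discovery, Polaris, and Andes.
ssh NETID@discovery.dartmouth.edu
```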
Software Resources for Researchers
Details
The following software is available to researchers:

Bioinformatics

Software | Availability | OS | Comments
mrbayes | Discovery | Linux |
Jupyter on a compute node
The following steps demonstrate how to:
Create a job submission script to run a Jupyter notebook on the Discovery cluster
Submit the job to the scheduler
Create an SSH tunnel to Discovery and browse to
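The steps above can be sketched as a Slurm job script plus an SSH tunnel from your workstation. This is only an illustration: the resource requests, the port number 8888, and the file name are assumptions, not Discovery's documented configuration.

```shell
#!/bin/bash
# jupyter.sbatch -- hypothetical job script; resource values below are
# illustrative and will differ on Discovery.
#SBATCH --job-name=jupyter
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --time=02:00:00
#SBATCH --output=jupyter-%j.out

# Start the notebook on the compute node without opening a browser.
# The port chosen here (8888) must match the SSH tunnel you open later.
jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
```

After `sbatch jupyter.sbatch`, note the compute node name from `squeue -u $USER`, then from your workstation open a tunnel along the lines of `ssh -L 8888:NODENAME:8888 NETID@discovery...` (hostname elided; NODENAME and NETID are placeholders) and browse to http://localhost:8888.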
Sample python lab (Walltime example)
In this lab we will create a simple Python script, called invert_matrix.py, which we will submit to the cluster. In addition, we will explore what happens when a job runs out of walltime.
For the
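A job script for this lab might look like the sketch below; the deliberately short `--time` limit is the point of the exercise, since it lets you watch the scheduler terminate the job when walltime is exhausted. The resource values are assumptions, not the lab's actual settings.

```shell
#!/bin/bash
# invert_matrix.sbatch -- hypothetical job script illustrating walltime.
#SBATCH --job-name=invert_matrix
#SBATCH --time=00:01:00          # one minute of walltime, intentionally short
#SBATCH --output=invert_matrix-%j.out

python invert_matrix.py
```

When the limit is exceeded, Slurm kills the job and records its state as TIMEOUT, which you can see afterwards with `sacct -j <jobid>`.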
Sample MPI lab (Hello World)
In this lab we will use Open MPI to compile a very basic "Hello World!" program, which we will then submit to run across multiple compute nodes.
Once you have logged into the Discovery cluster, the
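The compile-and-submit workflow can be sketched as follows. `mpicc` and `mpirun` are the standard Open MPI tools; the source file name, node counts, and walltime are placeholders, not the lab's actual values.

```shell
# Compile hello.c with the Open MPI compiler wrapper (hello.c is a
# placeholder name for the lab's "Hello World!" source file).
mpicc -o hello hello.c

# Hypothetical job script requesting two nodes.
cat > hello.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=mpi_hello
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:05:00

mpirun ./hello      # launches 8 ranks across the 2 allocated nodes
EOF

sbatch hello.sbatch
```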
Sample R lab (Hello world)
In this lab we will create a basic R script to print "Hello World!". Then we will use the scheduler to submit the job via sbatch.
The first step of this process is to either move your R script
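Once the R script is in place on the cluster, submission follows the same sbatch pattern. A minimal sketch, where the file name hello.R and the resource values are placeholders:

```shell
#!/bin/bash
# hello_r.sbatch -- hypothetical job script for the R lab.
#SBATCH --job-name=hello_r
#SBATCH --time=00:05:00
#SBATCH --output=hello_r-%j.out

Rscript hello.R     # runs the script; its output lands in the .out file
```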
Software for Research
To use software on our high-performance computers (Andes, Polaris or Discovery), please visit https://rc.dartmouth.edu/ and click "Request an account" to get started, and for instructions on
Slurm overview
There are many new Slurm commands available on the Discovery cluster.
Common user commands in Slurm include:
sbatch <job script> - submit a batch job to the queue
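Beyond sbatch, a few other standard Slurm commands cover most day-to-day use. These are all real Slurm CLI tools; the job ID 12345 is a placeholder.

```shell
sbatch job.sh            # submit a batch job script to the queue
squeue -u $USER          # list your pending and running jobs
scancel 12345            # cancel job 12345
sinfo                    # show partitions and node states
sacct -j 12345           # accounting info for a completed job
```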
HPC Scratch Space
Many of these volumes are arrays of disks, where the failure of a single disk means the loss of all data in the volume.
Characteristics of specific scratch volumes
discovery:/dartfs-hpc/scratch
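A common pattern for scratch space is to stage data in, compute, and copy results back before the job ends. The sketch below uses the scratch path from the article; the per-user subdirectory, file names, and the compute step are all placeholders.

```shell
#!/bin/bash
# scratch_demo.sbatch -- hypothetical job showing scratch staging.
#SBATCH --job-name=scratch_demo
#SBATCH --time=01:00:00

# Per-user subdirectory is an assumption about the site's layout.
SCRATCH=/dartfs-hpc/scratch/$USER
mkdir -p "$SCRATCH"
cp ~/input.dat "$SCRATCH"/            # stage input onto scratch
cd "$SCRATCH"
./analyze input.dat > results.out     # placeholder compute step
cp results.out ~/                     # copy results home: scratch is not
                                      # backed up and may be purged
```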
Tools for Researchers
, data storage, Caligari Web and database servers
Accounts for Researchers - For general research computing, data storage, and Discovery cluster accounts
Software Resources - Applications in