Search 56 Results

Tools for Researchers

... data storage, Caligari Web and database servers ... Accounts for Researchers - For general research computing, data storage, and Discovery cluster accounts ... Software Resources - Applications in

HPC Scratch Space

... many of these volumes are arrays of disks where failure of one disk means the loss of all data in the volume. Characteristics of specific scratch volumes ... discovery:/dartfs-hpc/scratch

Slurm overview

There are many new Slurm commands available on the Discovery cluster. Common user commands in Slurm include sbatch: running sbatch <job script> submits a batch job to the queue
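For concreteness, a minimal batch script that sbatch could submit might look like the following sketch (job name, resource requests, and output pattern are illustrative choices, not site defaults):

```shell
#!/bin/bash
# hello.sh -- a minimal, illustrative Slurm batch script
#SBATCH --job-name=hello        # name shown by squeue
#SBATCH --nodes=1               # run on a single node
#SBATCH --ntasks=1              # a single task
#SBATCH --time=00:05:00         # walltime limit, HH:MM:SS
#SBATCH --output=hello_%j.out   # %j expands to the job ID

echo "Hello from $(hostname)"
```

Submitting it with sbatch hello.sh returns a job ID, and squeue -u $USER then shows the job while it waits or runs.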

Cluster details

Discovery is a Linux cluster composed of: 59 16-core (2x) Intel nodes (944 cores), 17 40-core (2x) Intel nodes (680 cores), 31 64-core (2x) AMD EPYC nodes (1984 cores

Software for Research Knowledge Base

To use software on our high-performance computers (Andes, Polaris, or Discovery), please visit https://rc.dartmouth.edu/ and click "Request an account" to get started, and for instructions on

Sample R lab (Hello world)

In this lab we will create a basic R script to print "Hello World!". Then we will use the scheduler to submit the job via sbatch. The first step of this process is to either move your R script
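A job script for this lab might look like the following sketch (filenames and resource requests are assumptions; the article's own script may differ):

```shell
#!/bin/bash
# hello_r.sh -- illustrative Slurm wrapper for the R lab, not the article's exact script
#SBATCH --job-name=hello_R
#SBATCH --ntasks=1
#SBATCH --time=00:05:00
#SBATCH --output=hello_R_%j.out

# hello.R is assumed to contain a single line: cat("Hello World!\n")
Rscript hello.R
```

Once the script is on the cluster, sbatch hello_r.sh queues it and the printed output lands in hello_R_<jobid>.out.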

Sample MPI lab (Hello World)

In this lab we will use OpenMPI to compile a very basic "Hello World!" program, which we will then submit to run across multiple compute nodes. Once you have logged into the Discovery cluster, the
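A multi-node submission for this lab could be sketched as below; the module name, source filename, and rank counts are assumptions rather than the article's exact values:

```shell
#!/bin/bash
# mpi_hello.sh -- illustrative multi-node MPI job
#SBATCH --job-name=mpi_hello
#SBATCH --nodes=2                # spread ranks across two compute nodes
#SBATCH --ntasks-per-node=4      # four MPI ranks per node, eight in total
#SBATCH --time=00:10:00
#SBATCH --output=mpi_hello_%j.out

module load openmpi                 # exact module name varies by site
mpicc -o hello_mpi hello_mpi.c      # hello_mpi.c assumed to print its MPI rank
mpirun ./hello_mpi                  # mpirun picks up the rank count from Slurm
```

Each rank typically prints its own line, so the output file should contain one "Hello" per task.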

Sample python lab (Walltime example)

In this lab we will create a simple Python script called invert_matrix.py, which we will submit to the cluster. In addition, we will explore what it is like for a job to run out of walltime. For the
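A submission script for a walltime demonstration of this kind might be sketched as follows (the one-minute limit and filenames are illustrative assumptions):

```shell
#!/bin/bash
# invert.sh -- illustrative script for deliberately exceeding walltime
#SBATCH --job-name=invert_matrix
#SBATCH --ntasks=1
#SBATCH --time=00:01:00          # deliberately short walltime for the demo
#SBATCH --output=invert_%j.out

# invert_matrix.py is assumed to run longer than one minute; when the limit
# is reached, Slurm terminates the job and records its state as TIMEOUT.
python invert_matrix.py
```

Afterwards, sacct -j <jobid> should show the job's state as TIMEOUT rather than COMPLETED.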

Jupyter on a compute node

The following steps demonstrate how to: create a job submission script to run a Jupyter notebook on the Discovery cluster; submit the job to the scheduler; and create an SSH tunnel to Discovery and browse to
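The first of those steps might be sketched like this; the port number and walltime are arbitrary examples, not the article's values:

```shell
#!/bin/bash
# jupyter.sh -- illustrative: start a Jupyter notebook server on a compute node
#SBATCH --job-name=jupyter
#SBATCH --ntasks=1
#SBATCH --time=02:00:00
#SBATCH --output=jupyter_%j.out

# Bind to the compute node's hostname so an SSH tunnel can reach it;
# port 8888 is an arbitrary example.
jupyter notebook --no-browser --ip="$(hostname)" --port=8888
```

From your own machine, a tunnel along the lines of ssh -N -L 8888:<compute-node>:8888 NETID@<login-node> then lets you browse to http://localhost:8888 (placeholders here, since the real hostnames come from the article).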

Software Resources for Researchers

Details: The following bioinformatics software is available to researchers (Software / Availability / OS / Comments): mrbayes / Discovery / Linux

Migrating a legacy Discovery home directory to DartFS

Step by step instructions for migrating data from a legacy Discovery home directory to a DartFS home directory. ... Background: New home directories for Research Computing servers are in DartFS. When you have a DartFS home directory you can log in to all of our systems (Discovery, Polaris, Andes, etc.) using

Discovery Setup for Mac

Access to the Discovery cluster from Macintosh ... discovery-NETID.terminal ... Windows ... To display graphical output from remote Linux software, you'll need XQuartz or FastX (see below), but this is less important with Discovery since it is a batch-scheduled environment
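Connecting from the macOS Terminal comes down to a single ssh command; the hostname below is a placeholder, since the article's saved .terminal file encodes the real one:

```shell
# Replace NETID and the hostname with your NetID and the Discovery login
# node named in the article (both are placeholders here).
ssh NETID@discovery.example.edu

# For the occasional graphical program, enable X11 forwarding via XQuartz:
ssh -Y NETID@discovery.example.edu
```

Saving such a connection as a .terminal file is what the discovery-NETID.terminal download referenced above provides.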

Discovery Cluster

... large-memory programs quickly and efficiently; and store data securely and accessibly. Click here for more information about the cluster. Click here for information on how to access the Discovery cluster

Investing in Discovery

Overview The DISCOVERY Cluster is an exciting opportunity for researchers to participate in creating a world-class supercomputer devoted to furthering research at Dartmouth. Researchers