Discovery Cluster details

Discovery is a Linux cluster that in aggregate contains 128 nodes, 6296 CPU cores, 54.7TB of memory, and more than 2.8 PB of disk space.

Node Hardware Breakdown

Cell  Vendor  CPU                             Cores  RAM    GPU          Scratch  Nodes
a     Dell    AMD EPYC 75F3 (2.95GHz)         64     1TB    Ampere A100  5.9TB    a01-a05
g     HPE     Intel Xeon E5-2640v3 (2.6GHz)   16     128GB  Tesla K80    820GB    g01-g11
p     Dell    Intel Xeon Gold 6248 (2.50GHz)  40     565GB  Tesla V100   1.5TB    p01-p04
q     HPE     AMD EPYC 7532 (2.4GHz)          64     512GB  None         820GB    q01-q10
m     HPE     Intel Xeon E5-2643v4 (3.2GHz)   16     128GB  None         820GB    m01-m20
n     HPE     Intel Xeon Gold 6148 (2.40GHz)  40     384GB  None         820GB    n01-n13
r     EXXACT  AMD EPYC 7543 (2.80GHz)         64     512GB  None         290GB    r01-r21
s     Dell    AMD EPYC 7543 (2.80GHz)         64     512GB  None         718GB    s01-s44
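
The aggregate figures quoted at the top of this page (128 nodes, 6,296 CPU cores, 54.7TB of memory) can be reproduced from the rows above. The short Python sketch below does that arithmetic; every per-cell figure is taken from the table, and the only assumption is the unit handling (1TB of RAM per node in cell a counted as 1024GB, with the total reported in decimal TB).

```python
# Per-cell figures from the hardware table above:
# (number of nodes, cores per node, RAM per node in GB).
CELLS = {
    "a": (5, 64, 1024),   # a01-a05, 1TB RAM per node
    "g": (11, 16, 128),   # g01-g11
    "p": (4, 40, 565),    # p01-p04
    "q": (10, 64, 512),   # q01-q10
    "m": (20, 16, 128),   # m01-m20
    "n": (13, 40, 384),   # n01-n13
    "r": (21, 64, 512),   # r01-r21
    "s": (44, 64, 512),   # s01-s44
}

nodes = sum(n for n, _, _ in CELLS.values())
cores = sum(n * c for n, c, _ in CELLS.values())
ram_gb = sum(n * r for n, _, r in CELLS.values())

print(nodes)                      # 128 nodes
print(cores)                      # 6296 CPU cores
print(f"{ram_gb / 1000:.1f}TB")   # 54.7TB of memory
```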

Specialized Compute Nodes

Discovery offers researchers the ability to have specialized head nodes available inside the cluster for dedicated compute. These nodes can come equipped with up to 64 compute cores and 1.5TB of memory.

Operating System:

  • RHEL 8 is used on Discovery, its supporting head nodes, and its compute nodes.

Node Names:

  • Compute nodes for queued jobs are managed via the scheduler.
  • GPU compute nodes are only available to members via the gpuq queue (a short sketch after this list shows which cells contain GPU nodes).
  • The interactive node is named x01 and is available for testing your programs interactively before submitting them to the queue to be run on the main cluster.
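
The gpuq restriction above maps onto the GPU cells in the hardware table (a, g, and p). As a small illustration, the sketch below checks whether a node sits in a GPU cell, assuming only that the leading letter of a node name identifies its cell, as shown in the Nodes column.

```python
# Cells whose nodes carry GPUs, per the hardware table above:
# a (Ampere A100), g (Tesla K80), p (Tesla V100).
GPU_CELLS = {"a", "g", "p"}

def has_gpu(node_name: str) -> bool:
    """True if the named compute node (e.g. 'a03' or 'm12') is in a GPU cell."""
    return node_name[0] in GPU_CELLS

print(has_gpu("a03"))   # True  -> reachable only through the gpuq queue
print(has_gpu("m12"))   # False -> CPU-only cell
```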

Node Interconnects

  • All of the compute nodes are connected via 10Gb Ethernet. The cluster itself is connected to Dartmouth’s Science DMZ, facilitating faster data transfer and stronger security.