Discovery is a Linux cluster composed of:

- 59 16-core (2x) Intel nodes (944 cores)
- 17 40-core (2x) Intel nodes (680 cores)
- 31 64-core (2x) AMD EPYC nodes (1984 cores)

In aggregate, the cluster has 3492 cores, 16.5TB of memory, and more than 2.8 PB of disk space.
Node Hardware Breakdown

| Cell | Vendor | CPU | Cores | RAM | Disk | Scratch | Nodes |
|------|--------|-----|-------|-----|------|---------|-------|
| g | HPE | Intel Xeon E5-2640 v3 (2.6GHz), K80 x24 | 16 | 128GB | 1TB | 820GB | g01-g11 |
| p | DELL | Intel Xeon Gold 6248 (2.50GHz) | 40 | 565GB | 1.7TB | 1.5TB | p01-p04 |
| q | HPE | AMD EPYC 7532 (2.4GHz) | 64 | 512GB | 1TB | 820GB | q01-q10 |
| j | EXXACT | AMD EPYC 7543 (2.80GHz) | 64 | 512GB | 512GB | 290GB | r01-r21 |
| k | HPE | Intel Xeon E5-2640 v3 (2.6GHz) | 16 | 64GB | 1TB | 820GB | k25-k58 |
| m | HPE | Intel Xeon E5-2643 v4 (3.2GHz) | 16 | 128GB | 1TB | 820GB | m01-m20 |
| n | HPE | Intel Xeon Gold 6148 (2.40GHz) | 40 | 384GB | 1TB | 820GB | n01-n13 |
Specialized Compute Nodes:

Discovery offers researchers the ability to have specialized head nodes available inside the cluster for dedicated compute. These nodes can come equipped with up to 64 compute cores and 1.5TB of memory.
Operating System:

- CentOS 7 is used on Discovery, its supporting head nodes, and compute nodes.
Node Names:

- The compute nodes for queued jobs are managed via the scheduler.
- GPU compute nodes are only available to members via the gpuq queue.
- The interactive node is named x01 and is available for testing your programs interactively prior to submitting them to the queue to be run on the main cluster.
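The queue workflow above can be sketched as a batch submission script. This is a minimal, illustrative example only: the page does not name the scheduler, so a Slurm-style script is assumed here, and the job name, resource values, and commands are placeholders; only the gpuq queue name comes from this page.

```shell
#!/bin/bash
# Minimal batch-job sketch, assuming a Slurm scheduler (an assumption;
# the scheduler is not named on this page). Values are placeholders.
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpuq        # GPU nodes are reachable only via the gpuq queue
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1            # request one GPU
#SBATCH --time=01:00:00

hostname                        # report which compute node ran the job
nvidia-smi                      # confirm the GPU is visible to the job
```

A script like this would typically be submitted with `sbatch job.sh`; the interactive x01 node can be used to test the program itself before queueing.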
Node Interconnects

- All of the compute nodes are connected via 10Gb Ethernet. The cluster itself is connected to Dartmouth's Science DMZ, facilitating faster data transfer and stronger security.