
Cluster  Processor                                    Nodes   Cores   DWF Performance   asqtad Performance
                                                                        (MFlops/node)        (MFlops/node)
-------  --------------------------------------------  ----  ------   ---------------   ------------------
qcd      2.8 GHz Single CPU Single Core P4E             127     127             1,400                1,017
pion     3.2 GHz Single CPU Single Core Pentium 640     486     486             1,729                1,594
kaon     2.0 GHz Dual CPU Dual Core Opteron             600   2,400             4,703                3,832
jpsi     2.1 GHz Dual CPU Quad Core Opteron             856   6,848            10,061                9,563
ds       2.0 GHz Quad CPU Eight Core Opteron            420  13,440            51,520               50,547
bc       2.8 GHz Quad CPU Eight Core Opteron            224   7,168            57,408               56,224
pi0      2.6 GHz Dual CPU Eight Core Intel              314   5,024            78,310               61,490
LQ1      2.5 GHz Dual CPU 20 Core Intel                 183   7,320           370,000              280,000

The table above shows the measured performance of the DWF and asqtad inverters on all of the Fermilab LQCD clusters. For qcd and pion, the asqtad numbers come from 64-node runs with a 14^4 local lattice per node, and the DWF numbers from 64-node runs with Ls=16, averaging the performance of 32x8x8x8 and 32x8x8x12 local-lattice runs. The kaon figures use 128-process (32-node) runs with 4 processes per node; the jpsi figures use 128-process (16-node) runs with 8 processes per node; and the ds and bc figures use 128-process (4-node) runs with 32 processes per node. In every case, one process runs per core.
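To put these per-node rates in context, the short Python sketch below converts them into aggregate sustained performance per cluster (per-node MFlops times node count). The numbers are copied from the table above; the variable names and the rounding to TFlops are illustrative only.

    # Aggregate sustained performance implied by the per-node figures above.
    # name: (nodes, DWF MFlops/node, asqtad MFlops/node), from the table.
    clusters = {
        "qcd":  (127,   1400,   1017),
        "pion": (486,   1729,   1594),
        "kaon": (600,   4703,   3832),
        "jpsi": (856,  10061,   9563),
        "ds":   (420,  51520,  50547),
        "bc":   (224,  57408,  56224),
        "pi0":  (314,  78310,  61490),
        "LQ1":  (183, 370000, 280000),
    }

    for name, (nodes, dwf, asqtad) in clusters.items():
        # MFlops/node x node count = aggregate MFlops; 1e6 MFlops = 1 TFlops.
        print(f"{name:>4}: DWF {nodes * dwf / 1e6:6.2f} TFlops, "
              f"asqtad {nodes * asqtad / 1e6:6.2f} TFlops")

For example, kaon's 600 nodes at 4,703 MFlops/node amount to roughly 2.8 TFlops of sustained DWF performance, while LQ1's 183 nodes at 370,000 MFlops/node reach about 68 TFlops.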

Details

qcd: 120-node cluster (decommissioned April 2010) with single-socket 2.8 GHz Pentium 4 processors and a Myrinet fabric.

pion: 486-node cluster (decommissioned April 2010) with single-socket 3.2 GHz Pentium 640 processors and an SDR Infiniband fabric.

kaon: 600-node cluster (decommissioned August 2013) with dual-socket dual-core Opteron 270 (2.0 GHz) processors and a DDR Mellanox Infiniband fabric.

jpsi: 856-node cluster (decommissioned May 19, 2014) with dual-socket quad-core Opteron 2352 (2.1 GHz) processors and a DDR Mellanox Infiniband fabric.

ds: 420-node cluster (224 nodes decommissioned August 2016, 196 nodes decommissioned April 2020) with quad-socket eight-core Opteron 6128 (2.0 GHz) processors and a QDR Mellanox Infiniband fabric.

dsg: 76-node cluster (decommissioned April 2020) with dual-socket four-core Intel Xeon E5630 processors, two NVIDIA Tesla M2050 GPUs per node, and a QDR Mellanox Infiniband fabric.

bc: 224-node cluster (decommissioned April 2020) with quad-socket eight-core Opteron 6320 (2.8 GHz) processors and a QDR Mellanox Infiniband fabric.

pi0: 314-node cluster (decommissioned April 2020) with dual-socket eight-core Intel E5-2650v2 "Ivy Bridge" (2.6 GHz) processors and a QDR Mellanox Infiniband fabric.

pi0g: 32-node cluster (decommissioned April 2020) with dual-socket eight-core Intel E5-2650v2 "Ivy Bridge" (2.6 GHz) processors, four NVIDIA Tesla K40m GPUs per node, and a QDR Mellanox Infiniband fabric.

LQ1: 183-node cluster with dual-socket 20-core Intel 6248 "Cascade Lake" (2.5 GHz) processors and an EDR Omni-Path fabric.
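As a cross-check, each Cores entry in the table above is simply nodes × sockets × cores per socket from these hardware descriptions. A minimal sketch, with the socket and core counts transcribed from the list above for the multi-core CPU clusters:

    # Cores column = nodes x sockets x cores per socket, per the list above.
    specs = {
        "kaon": (600, 2, 2),   # dual-socket, dual-core
        "jpsi": (856, 2, 4),   # dual-socket, quad-core
        "ds":   (420, 4, 8),   # quad-socket, eight-core
        "bc":   (224, 4, 8),   # quad-socket, eight-core
        "pi0":  (314, 2, 8),   # dual-socket, eight-core
        "LQ1":  (183, 2, 20),  # dual-socket, 20-core
    }
    for name, (nodes, sockets, cores) in specs.items():
        print(f"{name:>4}: {nodes * sockets * cores:,} cores")

The results (2,400; 6,848; 13,440; 7,168; 5,024; 7,320) match the Cores column in the table.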
