USQCD Collaboration, April 4-5, 2008
Call for Proposals

Date: April 25, 2008 12:18:21 PM CDT
To:   USQCD Collaboration Members
From: USQCD Scientific Program Committee -- Andreas Kronfeld (Chair),
        Tom Blum, Chris Dawson, Colin Morningstar, Frithjof Karsch,
        John Negele, Junko Shigemitsu

Dear Colleagues,

As this year's allocation process proceeded, the Scientific Program
Committee, in consultation with the Executive Committee, found it
prudent to consider USQCD's INCITE and ESP (Early Science Program)
resources at the same time as USQCD's dedicated hardware.  This
introduces some uncertainty, because the USQCD allocation year runs
from July 1, 2008, until June 30, 2009, whereas we will not know how
much time USQCD will receive for CY 2009 until late 2008 (at the
earliest).

The computers in question are:

    Cray XT4 "Jaguar", installed at ORNL
        CY 2008 INCITE allocation: 7,100,000 core-hours
        2.6 GHz dual-core Opteron, being upgraded to quad-core
        1 XT4 core-hour = 0.58 6n-equivalent node-hour

    IBM BlueGene/P "Intrepid", installed at ANL
        CY 2008 INCITE allocation: 19,600,000 core-hours
            (on the 8-rack partition formerly known as Endeavour)
        2008-2009 ESP allocation: 25,000,000 core-hours, or more (?)
            (on the 32-rack partition)
        850 MHz quad-core PowerPC, 1024 nodes/rack
        1 BG/P core-hour = 0.27 6n-equivalent node-hour

We estimated the available resources on the leadership-class machines
as follows.  Unfortunately, each machine requires a somewhat different
treatment.  For the XT4 we take half of the CY08 INCITE allocation
plus half of the CY09 INCITE allocation as the resource available
during USQCD's allocation year.  The CY09 part must be estimated, and
we (conservatively) assume it will be the same as the CY08 award.
Thus, we allocate 7.1 M XT4 core-hours.  We use the average of DWF and
clover inverter performance on the XT4 and the 6n cluster to obtain
7.1 M XT4 core-hours = 4.188 M 6n node-hours.  (A benchmark for asqtad
on the XT4 is not in hand.)

For the 8-rack BG/P, the INCITE award has only recently become
available---the nominal start of January 1 was delayed until March 31.
The SPC has not yet advised the Executive Committee how to allocate
this resource among projects, so we have included the whole CY08 BG/P
INCITE award plus an estimate of the first 6 months' worth of CY09.
Here we assume the same monthly rate; since the CY08 award covers the
nine months from the end of March through December, six months of CY09
corresponds to 6/9 of the CY08 award.  The CY08 award is 19.6 M BG/P
core-hours; multiplying by 15/9, we arrive at 32.67 M BG/P core-hours.
Using the average of DWF and asqtad inverters, we take as equivalent
32.67 M BG/P core-hours = 8.82 M 6n node-hours.

The ESP award on the 32-rack BG/P has also only recently become
available, and again the SPC has not yet advised the Executive
Committee how it should be allocated among USQCD projects.  The ESP is
likely to run from now until sometime before the end of our allocation
year.  Based on e-mail exchanges between Bob Sugar and ANL management,
we expect 50-70% of USQCD's request of 50 M BG/P core-hours.  For
allocation purposes we took the lower estimate, namely 25 M BG/P
core-hours = 6.75 M 6n-equivalent node-hours.

Taking the QCDOC, clusters, the XT4, and both BlueGenes into account,
we have decided to allocate 57.88 M 6n-equivalent node-hours at this
time.
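As a cross-check, the conversions above can be reproduced from the quoted
figures in a few lines of Python (a sketch for illustration, not official
SPC tooling; the conversion factors 0.58 and 0.27 are the rounded values
from the machine list, so the XT4 result matches the quoted 4.188 M only
up to that rounding):

```python
# Sketch (not official USQCD/SPC tooling): recompute the 6n-equivalent
# conversions from the figures quoted above.  All quantities in millions.

XT4_FACTOR = 0.58   # 6n node-hours per XT4 core-hour (rounded, as listed)
BGP_FACTOR = 0.27   # 6n node-hours per BG/P core-hour (as listed)

# XT4: half of CY08 plus half of the (assumed-equal) CY09 award,
# i.e. one full CY08 award.
xt4 = 0.5 * 7.1 + 0.5 * 7.1                  # 7.1 M core-hours

# 8-rack BG/P INCITE: the full CY08 award (covering 9 months) plus
# 6 months of CY09 at the same monthly rate, i.e. 15/9 of CY08.
bgp_incite = 19.6 * 15 / 9                   # 32.67 M core-hours

# 32-rack ESP: lower end of the expected 50-70% of the 50 M request.
esp = 0.5 * 50.0                             # 25 M core-hours

print(round(bgp_incite, 2))                  # 32.67
print(round(bgp_incite * BGP_FACTOR, 2))     # 8.82
print(round(esp * BGP_FACTOR, 2))            # 6.75
print(round(xt4 * XT4_FACTOR, 3))            # 4.118 with the rounded
                                             # factor (4.188 M quoted)
```

The BG/P numbers come out exactly as quoted; the XT4 line shows the
sensitivity of the last digit to rounding in the conversion factor.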

We note that the way we decided to treat the leadership-class machines
is not made explicit in the Call for Proposals, but it is consistent
with the stated policies, and consistent with our treatment of the
INCITE awards from CY07 and CY08.  We also note that our estimates
for the CY09 INCITE awards are probably low.  We anticipate that we
will have difficulty balancing projects among the platforms, unless
more projects are able to run on the leadership-class machines.

The PIs of several projects have worked with the SPC over the past few
days to help ensure that we can deploy everyone in a reasonable way.
The SPC thanks them for their flexibility.
