All Hands' Meeting 2009
USQCD Collaboration Meeting
Fermi National Accelerator Laboratory
May 14–15, 2009
Call for Proposals

Date: 	February 5, 2009 11:13:28 AM CST
To: All members of the USQCD Collaboration.
From: The Scientific Program Committee.
Re: Call for proposals for supercomputer time administered by USQCD

Dear Colleagues,

This message is a Call for Proposals for awards of time on the USQCD
computer resources dedicated to lattice QCD (and other lattice field
theories).  These are the DOE QCDOC at BNL, clusters at Fermilab and
JLab, and awards to USQCD from the INCITE program.

We expect to distribute about 88 M 6n-equivalent node-hours in this
call for proposals.  This includes about 2 M 6n-equivalent node-hours
which we expect to charge for disk and tape usage.

Further remarks on the nature of the INCITE award are given below
in section (iv).

All members of the USQCD Collaboration are eligible to submit proposals.
Those interested in joining the Collaboration should contact Paul
Mackenzie (mackenzie@fnal.gov).

Let us begin with some important dates:

       February  5: this Call for Proposals
       March    20: proposals due
       April    26: reports to proponents sent out
       May   14-15: All Hands' Meeting at Fermilab
       June      1: allocations announced
       July      1: new allocations start

Proponents are invited to make an oral presentation of their proposals
at the All Hands' Meeting.  These presentations are not mandatory, but
are recommended for large projects and in those cases where the report
raises serious issues.

The web site for the All Hands' Meeting is

           http://www.usqcd.org/meetings/allHands2009/

The requests can be of three types:

   A) requests for large amounts of supercomputer time---more than
   500,000 6n-equivalent node-hours---to support calculations of
   benefit for the whole USQCD Collaboration;

   B) requests for medium amounts of supercomputer time---500,000
   6n-equivalent node-hours or less---to support calculations that
   are scientifically sound but not necessarily of such broad benefit;

   C) requests for exploratory calculations, such as those needed to
   develop and/or benchmark code, acquire expertise on the use of the
   machines, or to perform investigations of limited scope.

Requests of Type A and B must be made in writing to the Scientific
Program Committee and are subject to the policies spelled out below.
These proposals must also specify the amount of disk and tape
storage needed.  Projects will be charged for new disks and tapes.
How this will be implemented is discussed in section (iii).

Requests of Type C should be made in an e-mail message to

   Bob Mawhinney (rdm@phys.columbia.edu) for QCDOCs at BNL,

   Paul Mackenzie (mackenzie@fnal.gov) for clusters at FNAL,

   Chip Watson (Chip.Watson@jlab.org) for clusters at JLAB.

Type C requests for QCDOC can be for access to either a 64-node, single
motherboard partition or a larger partition; the total request should
be less than ten thousand node-hours.  Type C requests for clusters
should not exceed 20,000 6n-equivalent node-hours.  The requests will
be honored up to a total not exceeding 5% of the available time on
USQCD hardware.  If the demand exceeds such limits, the Scientific
Program Committee will reconsider the procedures for access.

Collaboration members who wish to perform calculations on USQCD
hardware or on resources awarded to USQCD through the INCITE program
can present requests according to procedures specified below.
The Scientific Program Committee would like to handle requests and
awards either in QCDOC node-hours (for QCDOC) or in the equivalent
node-hours for the JLab "6n" cluster (for all other computers).
Conversion factors for all machines are given below.  When making
requests, please keep in mind that the total computing resource on
USQCD hardware is around 14 teraflop/s-yr, corresponding to about
4 teraflop/s-yr or 10.8 M 6n-equivalent node-hours on
QCDOC, and about 10 teraflop/s-yr or 35.0 M 6n-equivalent node-hours
on clusters.

The time awarded to USQCD in CY2009 on leadership-class computers at
Argonne (BG/P) and Oak Ridge (XT4) corresponds to 67M core-hours on
BG/P (16.75M node-hours) and 20M core-hours on XT4 (10M node-hours).
Half of this allocation and half of the allocation for CY2010 (see
section (iv)) will be distributed in this call.
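These figures can be cross-checked with a few lines of arithmetic.  The
sketch below is illustrative, not an official accounting; it assumes
4 cores per BG/P node and 2 cores per XT4 node (inferred from the
core-hour and node-hour pairs quoted above) and uses the 6n conversion
factors given in section (ii).

```python
# Cross-check of the CY2009 leadership-class figures (illustrative only).
# Assumed cores per node: BG/P = 4, XT4 = 2 (inferred from the text above).
bgp_node_hours = 67e6 / 4          # 16.75 M node-hours
xt4_node_hours = 20e6 / 2          # 10 M node-hours

# 6n conversion factors from section (ii): BG/P 1.08, XT4 1.16.
bgp_6n = bgp_node_hours * 1.08     # ~18.1 M 6n-equivalent node-hours
xt4_6n = xt4_node_hours * 1.16     # ~11.6 M 6n-equivalent node-hours

print(round(bgp_6n / 1e6, 1), round(xt4_6n / 1e6, 1))  # 18.1 11.6
```

These values match the BG/P and XT4 lines of the summary table below.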

To summarize:

We expect to distribute the following USQCD resources

QCDOC                  10.8 M 6n-node h
cluster                45.7 M 6n-node h (*)
BG/P                   18.1 M 6n-node h
XT4                    11.6 M 6n-node h
disk/tape storage       2.0 M 6n-node h
                      ------------------
               total   88.2 M 6n-node h (*)

(*) The e-mail version of this call incorrectly listed 35.0 M 6n-node h
(77.5 M 6n node-h) for the cluster (total) resource.

(see section (ii) for conversion factors to 6n-equivalent node hours)

                        - o -

The rest of this message deals with requests of Types A and B.
It is organized as follows:

   i)   policy directives regarding the usage of awarded resources;

   ii)  guidelines for the format of the proposals and deadline for
        submission;

   iii) procedures that will be followed to reach a consensus on the
        research programs and the allocations;

   iv)  policies for handling awards on leadership-class machines;

   v)   description of USQCD resources at BNL, Fermilab, and JLAB.


i) Policy directives.

1) This Call for Proposals is for calculations that will further the
physics goals of the USQCD Collaboration, as stated in the proposals for
funding submitted to the DOE (see http://www.usqcd.org/), and have the
potential of benefiting additional research projects by members of the
Collaboration.

2) Proposals of Type A are for investigations of very large scale,
which will require a substantial fraction of the available resources.
Proposals of Type B are for investigations of medium to large scale,
which will require a smaller amount of resources.  Proposals requesting
more than 500,000 6n-equivalent node-hours will be considered as
Type A, smaller ones as Type B.  For QCDOC, this dividing line is
6 weeks (1000 hours) on a 4096-node partition.

Proposals that request time on the leadership-class computers at Argonne
and Oak Ridge should be of Type A.

It is hoped that on USQCD hardware about 80% of the available resources
will be allocated to proposals of Type A and about 15% to proposals
of Type B, with the rest being reserved for small allocations and
contingencies.  Because our process is proposal-driven, however,
we cannot guarantee the 80-15-5 split.

3) Proposals of Type A are for investigations that benefit the whole
USQCD Collaboration.  Thus it is expected that the calculations will
either produce data, such as lattice gauge fields or quark propagators,
that can be used by the entire Collaboration, or that the calculations
produce physics results listed among the Collaboration's strategic
goals.

Accordingly, proponents planning to generate multi-purpose data must
describe in their proposal what data will be made available to the
whole Collaboration, and how soon, and specify clearly what physics
analyses they would like to perform in an "exclusive manner" on these
data (see below), and the expected time to complete them.

Similarly, proponents planning important physics analyses should
explain how the proposed work meets our strategic goals and how its
results would interest the broader physics community.

Projects generating multi-purpose data are clear candidates to use
USQCD's award(s) on leadership-class computers.  Therefore, these
proposals must provide additional information on several fronts:
they should

	demonstrate the potential to be of broad benefit, for example
	by providing a list of other projects that would use the shared
	data;

	present a roadmap for future planning, presenting, for example,
	criteria for deciding when to stop with one ensemble and start
	with another;

	discuss how they would cope with a substantial increase in
	allocated resources, from the portability of the code and
	storage needed to the availability of competent personnel
	to carry out the running.

Finally, it will be much easier to move multi-purpose projects to
the leadership-class computers  if broadly similar projects (e.g.,
with the same action, but different algorithms or parameters) are
coherent and unified.

Some projects carrying out strategic analyses are candidates for
running on the leadership-class machines.  They should provide the
same information as above.  Others are candidates for substantial
increases, should USQCD receive more resources during the allocation
year.  These proposals should include the same kind of information as
those above, except for matters of code portability, but including
any plans to move data generated on the leadership-class computers
(or similar resources) to USQCD storage.

4) Proposals of Type B are not required to share data or to work
towards stated Collaboration goals, although doing so is a plus.
Type B proposals may also be scientifically valuable even if
not closely aligned with USQCD goals.  In that case the proposal should
contain a clear discussion of the physics motivations.  If appropriate,
Type B proposals may discuss data-sharing and strategic importance
as in the case of Type A proposals.

Proponents of Type B proposals are not required to give as many
details about how they would extend their running, should USQCD
obtain additional early user time through the current INCITE program
or otherwise.  If USQCD is indeed successful in such efforts, it is
likely that Type B awards will be increased at some time during the
allocation year.

5) The data that will be made available to the whole Collaboration
will have to be released promptly.  "Promptly" should be interpreted
with common sense.  Lattice gauge fields and propagators do not
have to be released as they are produced, especially if the group
is still testing the production environment.  On the other hand,
it is not considered reasonable to delay release of, say, 444 files,
just because the last 56 will not be available for a few months.

After a period during which such data will remain for the exclusive
use of the members of the USQCD Collaboration, and possibly of members
of other collaborations under reciprocal agreements, the data will
be made available worldwide as decided by the Executive Committee.

6) The USQCD Collaboration recognizes that the production of shared
data will generally entail a substantial amount of work by the
investigators generating the data.  They should therefore be given
priority in analyzing the data, particularly for their principal
physics interests.  Thus, proponents are encouraged to outline a set
of physics analyses that they would like to carry out with these data
in an exclusive manner and the amount of time that they would like
to reserve to themselves to complete such calculations.

When using the shared data, all other members of the USQCD
collaboration agree to respect such exclusivity.  Thus, they shall
refrain from using the data to reproduce the reserved or closely
similar analyses.  In its evaluation of the proposals, the Scientific
Program Committee will pay particular attention to requests for
exclusive use of the data and will ask the proponents to revise any
request it finds too broad or otherwise excessive.  Once an accepted
proposal has been posted on the Collaboration website, all parties
should consider the request for exclusive use to have been accepted
by the Scientific Program Committee.  Any dispute that may arise
regarding the usage of such data should be directed to the Scientific
Program Committee for resolution, and all members of the Collaboration
should abide by its decisions.

7)  Usage of the USQCD software, developed under our SciDAC grants,
is recommended, but not required.  USQCD software is designed to
be efficient and portable, and its development leverages efforts
throughout the Collaboration.  If you use this software, the SPC can
be confident that your project can use USQCD resources efficiently.
Software developed outside the collaboration must be documented
to show that it performs efficiently on its target platform(s).
Information on portability is welcome, but not mandatory.

8) The investigators whose proposals have been selected by the
Scientific Program Committee for a possible award of USQCD resources
shall agree to have their proposals posted on a password protected
website, available only to our Collaboration, for consideration during
the All Hands' Meeting.

9) The investigators receiving an allocation of time following this
Call for Proposals must maintain a public web page that reasonably
documents their plans, progress, and the availability of data.
These pages should contain information that funding agencies and review
panels can use to determine whether USQCD is a well-run organization.
The public web page need not contain unpublished scientific results,
or other sensitive information.

The SPC will not accept new proposals from old projects that still
have no web page.  Please communicate the URL to mackenzie@fnal.gov.


ii) Format of the proposals and deadline for submission.

The proposals should contain a title page with title, abstract
and the listing of all participating investigators.  The body,
including bibliography and embedded figures, should not exceed 12
pages in length for requests of Type A, and 10 pages in length for
requests of Type B, with font size of 11pt or larger.  If necessary,
further figures, with captions but without text, can be appended, for
a maximum of 8 additional pages.  CVs, publication lists and similar
personal information are not requested and should not be submitted.
Title page, proposal body and optional appended figures should be
submitted as a single pdf file, in an attachment to an e-mail message
sent to karsch@bnl.gov.

The deadline for receipt of the proposals is Friday, March 20, 2009.

The last sentence of the abstract must state the total amount
of computer time for QCDOC in node-hours and/or for clusters and
leadership-class computers in 6n-equivalent node-hours (see below).
Proposals lacking this information will be returned without review
(but will be reviewed if the corrected proposal is returned quickly
and without other changes).

The body of the proposal should contain the following information,
if possible in the order below:

1) The physics goals of the calculation.

2) The computational strategy, including such details as gauge and
fermionic actions, parameters, computational methods.

3) The software used, including a description of the main algorithms
and the code base employed.  If you use USQCD software, it is not
necessary to document performance in the proposal.  If you use your own
code base, then the proposal should provide enough information to show
that it performs efficiently on its target platform(s).  Information
on portability is welcome, but not mandatory.  As feedback for the
software development team, proposals may include an explanation of
deficiencies of the USQCD software for carrying out the proposed work.

4) The amount of resources requested, for QCDOC in node-hours or for
clusters and leadership-class computers in 6n-equivalent node-hours.
Here one should also state which machine is most desirable and why,
and whether it is feasible or desirable to run some parts of the
proposed work on one machine, and other parts on another.

USQCD has clusters with several kinds of nodes, from single-processor,
single-core, to dual-processor, quad-core.  The Scientific Program
Committee will use the following table to convert:

	1 QCDOC node-hour = 0.122 6n node-hour
	1 pion  node-hour = 0.683 6n node-hour
	1 6n    node-hour = 1     6n node-hour
	1 kaon  node-hour = 1.757 6n node-hour
	1 7n    node-hour = 3.1   6n node-hour
	1 J/psi node-hour = 4.04  6n node-hour
	1 BG/P  node-hour = 1.08  6n node-hour
	1 XT4   node-hour = 1.16  6n node-hour  (*)

(*) The XT4 OS will be upgraded this month and new benchmarks will
   be run after that.  The conversion factor for the XT4 will then be
   updated and posted at http://lqcd.fnal.gov/performance.html

The above numbers are based on the average of asqtad and DWF fermion
inverters.  In the case of XT4 we used the average of asqtad and clover
inverters.  See http://lqcd.fnal.gov/performance.html for details.
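As an illustration, the conversion table can be applied mechanically.
The sketch below is not an official USQCD tool; the machine keys are
my own shorthand, and the XT4 factor may change after the OS upgrade
noted above.

```python
# 6n node-hours per native node-hour, from the conversion table above.
# (The XT4 factor is provisional; see the note above.)
TO_6N = {
    "qcdoc": 0.122,
    "pion":  0.683,
    "6n":    1.0,
    "kaon":  1.757,
    "7n":    3.1,
    "jpsi":  4.04,
    "bgp":   1.08,
    "xt4":   1.16,
}

def to_6n_node_hours(machine, node_hours):
    """Express a request on `machine` in 6n-equivalent node-hours."""
    return node_hours * TO_6N[machine.lower()]

# e.g. a 1 M J/psi node-hour request:
print(to_6n_node_hours("jpsi", 1.0e6))  # ~4.04 M 6n-equivalent node-hours
```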

The total request(s) on QCDOC and clusters should also be specified in
the last sentence of the proposal's abstract (see above).

Proposals of Type A should indicate longer-term computing needs here,
writing with an eye towards the possibility of additional computer
resources from programs such as INCITE.

In addition to CPU, proposals must specify how much mass storage
is needed.  The resources section of the proposal should state how
much existing storage is in use, and how much new storage is needed,
for disk and tape, in Tbytes.  In addition, please also restate the
storage request in 6n-equivalent node-hours, using the following
conversion factors, which reflect the current replacement costs for
disk storage and tapes:

       1 Tbyte disk = 13,470 6n-equivalent node-hour
       1 Tbyte tape =  1,347 6n-equivalent node-hour

Projects using disk storage will be charged 12% of these costs every
three months.  Projects using tape will be charged the full cost of
tape storage when a file is written; when tape files are deleted, they
will receive a 40% refund of the charge.
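A hypothetical sketch of this charging scheme (the function and
constant names are my own invention, not part of any USQCD software):

```python
# Storage costs in 6n-equivalent node-hours per TByte, from the text above.
DISK_COST_PER_TB = 13470
TAPE_COST_PER_TB = 1347

def disk_charge(tbytes, quarters_held):
    """Disk is charged at 12% of replacement cost every three months."""
    return tbytes * DISK_COST_PER_TB * 0.12 * quarters_held

def tape_charge(tbytes_written, tbytes_deleted=0.0):
    """Tape is charged in full when written; deleted files refund 40%."""
    return (tbytes_written - 0.40 * tbytes_deleted) * TAPE_COST_PER_TB

# e.g. 10 TBytes of disk held for a full allocation year (4 quarters):
print(disk_charge(10, 4))   # about 64,656 6n-equivalent node-hours
# 10 TBytes written to tape, 2 TBytes of which are later deleted:
print(tape_charge(10, 2))   # about 12,392 6n-equivalent node-hours
```

Note that a year of disk (48% of replacement cost) is substantially
more expensive than the same volume on tape, one reason to weigh
storing a file against re-computing it.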

Proposals should discuss whether these files will be used by one,
a few, or several project(s).  The cost for files (e.g., gauge
configurations) that are used by several projects will be borne by USQCD
and not a specific physics project.  The charge for files used by a
single project will be deducted from the computing allocation: projects
are thus encouraged to figure out whether it is more cost-effective
to store or re-compute a file.  If a few (2-3) projects share a file,
they will share the charge.

5) What data will be made available to the entire Collaboration, and
the schedule for sharing it.

6) What calculations the investigators would like to perform in an
"exclusive manner" (see above in the section on policy directives),
and for how long they would like to reserve to themselves this
exclusive right.

iii) Procedure for the awards.

The Scientific Program Committee will receive proposals until the
deadline of Friday, March 20, 2009.

Proposals not stating the total request for QCDOC and clusters in
the last sentence of the abstract will be returned without review
(but will be reviewed if the corrected proposal is returned quickly
and without other changes).

Proposals that are considered meritorious and conforming to
the goals of the Collaboration will be posted on the web at
http://www.usqcd.org/, in the Collaboration's password-protected area.
Proposals recommended for awards in previous years can be found
there too.

The Scientific Program Committee (SPC) will make a preliminary
assessment of the proposals.  On April 26, 2009, the SPC will send a
report to the proponents raising any concerns about the proposal.

The proposals will be presented and discussed at the All Hands'
Meeting, May 14-15, 2009, at Fermilab.

Proposals of Type A will be allotted somewhat more time than
proposals of Type B, because they have to devote time to planning
and logistics, not just science.  If the SPC report raised serious
issues, the proponents may submit a revised proposal any time up to
the beginning of the All Hands' Meeting, and they should explain the
changes in the oral presentation.

A Collaboration discussion will follow the presentations of the Type
A proposals.  This is the opportunity for the whole Collaboration
to comment on the directions to be taken in making awards to these
proposals.  Particularly if we are fortunate enough to obtain further
computing resources on leadership-class machines, it is very important
that USQCD Collaborators guide the SPC in setting priorities among
the Type A proposals.  We shall need to determine not only the size
of the awards, but also which projects receive priority in proposals
to INCITE or to other bodies dispensing leadership-class time.

On the second day we will have presentations on proposals of Type B,
followed by another round of discussion.  As before, it is important to
the process for Collaborators to express their views on the relative
priority of the proposed projects.  The possibility of increasing
Type B allocations, should USQCD receive further computing resources,
makes this input more important than ever.

Following the All Hands' Meeting the SPC will determine a set of
recommendations on the awards.  The quality of the initial proposal,
the proponents' response to concerns raised in the written report,
and the views of the Collaboration expressed at the All Hands' Meeting
will all influence the outcome.  The SPC will send its recommendations
to the Executive Committee shortly after the All Hands' Meeting, and
inform the proponents once the recommendations have been accepted by
the Executive Committee.  The successful proposals and the size of
their awards will be posted on the web.

The new USQCD allocations will commence July 1, 2009.

Scientific publications describing calculations carried out with these
awards should acknowledge the use of USQCD resources, by including
the following sentence in the Acknowledgments:

"Computations for this work were carried out in part on facilities of
the USQCD Collaboration, which are funded by the Office of Science of
the U.S. Department of Energy."

Projects whose sole source of computing is USQCD should omit the phrase
"in part".


iv) INCITE award CY2009 and expected award for CY2010

Since 2007, USQCD policy has been to apply as a Collaboration for time
on the "leadership-class" computers, installed at Argonne and Oak Ridge
National Laboratories, and allocated through the DOE's INCITE Program
(see http://hpc.science.doe.gov/).  This strategy has been successful.
USQCD received a three-year INCITE grant (2008-2010).

For CY2009 USQCD was awarded 67 M core-hours on the BG/P at Argonne
and 20 M core-hours on the Cray XT4 at Oak Ridge.  Allocations for
the third year of the grant will be made after the review of progress
reports to be submitted in the summer of CY 2009.

Half of the CY2009 allocation has already been awarded by the SPC to
USQCD projects for the period 01/09-06/09.  The second half of this
award will be distributed as part of this call for proposals.

In CY2010 we expect an award of similar size.  However, we are
unlikely to learn our allocation for CY2010 until the end of the
current calendar year.

It is conceivable that USQCD will also receive additional
"discretionary" or "early user" time on the leadership-class computers.
The amount of this, however, is difficult to predict at present.
As in the past we will have to deal with these additional resources
by re-balancing the allocations made as a result of this call.

The timetable for INCITE is not synchronized with the USQCD allocation
year (starting each July 1).  For this reason, our process for large
awards looks ahead to the INCITE process.  We plan to award time on
leadership-class computers on the basis of the CY2009 INCITE award,
assuming that USQCD will receive a similar allocation in CY2010.
Half of this allocation will be frozen until the end of CY2009 and
will only be released after we know about details of the INCITE award
for CY2010.

Although USQCD received a three-year award for specific projects
detailed in the INCITE proposal, it has the freedom to shift priorities
when submitting the annual progress reports.  Basing requests for
additional resources and shifts in direction on proposals that
have been vetted scientifically by nearly all of the US lattice
QCD community should carry great weight.  When USQCD is granted
time for CY2010, the Executive Committee will consult with the
Scientific Program Committee to decide how much to increase the total
allocation(s) of these project(s).

v) USQCD computing resources.

The Scientific Program Committee will allocate 7200 hours/year to
Type A and Type B proposals.  Of the 8766 hours in an average year the
facilities are supposed to provide 8000 hours of uptime.  We then reserve
400 hours (i.e., 5%) for each host laboratory's own use, and another 400
hours for Type C proposals and contingencies.
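The arithmetic behind the 7200 hours, as a minimal sketch
(illustrative only):

```python
# Hours budget described above (illustrative, not an official accounting).
hours_per_year = 8766    # average calendar year
uptime = 8000            # uptime the facilities are expected to provide
lab_reserve = 400        # 5% of uptime, for each host laboratory's own use
type_c_reserve = 400     # Type C proposals and contingencies

allocatable = uptime - lab_reserve - type_c_reserve
print(allocatable)       # 7200 hours/year for Type A and B proposals
```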

At BNL:

QCDOC supercomputer: 12,288 processors running at 400 MHz.
   The typical mode of operation is with a few large partitions, ranging
   from 1024 to 4096 nodes and possibly larger.  Requests should include
   a discussion of possible, as well as optimal, partition sizes for the
   desired physics.
	1 QCDOC node-hour = 0.122 6n node-hour

For further information see http://www.bnl.gov/lqcd/


At FNAL:

520 node cluster ("Pion")
   518 single-processor 3.2 GHz P4 nodes
   1 GB memory/node
   Infiniband network
   30 GB local scratch disk/node
   total: 7200*518*0.683 = 2,547,790 6n-equivalent node-hours
	1 pion  node-hour = 0.683 6n node-hour

600 node cluster ("Kaon")
   600 dual-core, dual-processor 2.0 GHz Opteron nodes
   (2400 total cpu cores available)
   4 GB memory/node
   Infiniband network
   88 GB local scratch disk/node
   total: 7200*600*1.757 =  7,591,110 6n-equivalent node-hours
	1 kaon  node-hour = 1.757 6n node-hour

856 node cluster ("J/Psi")
  856 quad-core, dual-socket 2.1 GHz Opteron nodes
  (6848 total cpu cores available)
  8 GB memory/node
  Infiniband network
  88 GB local scratch disk/node
  total: 7200*856*4.04 = 24,899,000 6n-equivalent node-hours
     1 J/Psi node-hour = 4.04 6n node-hour

These clusters will share about 135 TBytes of disk space in a combination
of dCache and Lustre file systems, plus another approximately 20 TBytes in
conventional NFS-mounted storage.  We will add another 67 TBytes of Lustre
(or dCache) space by the end of the allocation year.  The clusters will
have access to ~750 TBytes of tape storage.  The current maximum size
of files on tape is 400 GBytes.

For further information see http://www.usqcd.org/fnal/


At JLAB:

256 node Infiniband cluster ("6n")
 256 dual-core 3.0 GHz Pentium-D
 (512 total cpu cores available)
 1 GB memory/node
 Infiniband 4x fabric
 50 GB local scratch disk/node
 Users must run 2 processes per node, or run multi-threaded code.
 total: 7200*256*1     = 1,843,200 6n node-hours

396 node Infiniband cluster ("7n")
 396 quad-core, dual-processor 1.9 GHz Opteron (Barcelona)
 (3168 total cpu cores available)
 8 GB memory/node
 Infiniband 4x fabric
 50 GB local scratch disk/node
 Users must run 8 processes per node, or run multi-threaded code.
 total: 7200*396*3.1  = 8,838,720 6n node-hours
	1  7n   node-hour = 3.1   6n node-hour
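The per-cluster totals in this section all follow the same pattern:
7200 allocatable hours per year, times the node count, times the
conversion factor.  A minimal sketch of that pattern (the FNAL totals
quoted above appear to use slightly more precise conversion factors
than the rounded ones in section (ii), so only the JLab figures
reproduce exactly):

```python
def capacity_6n(nodes, factor, hours_per_year=7200):
    """Yearly cluster capacity in 6n-equivalent node-hours."""
    return hours_per_year * nodes * factor

print(capacity_6n(256, 1.0))   # 1,843,200 (JLab "6n")
print(capacity_6n(396, 3.1))   # 8,838,720 (JLab "7n")
```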

These clusters will share 70 TBytes of associated disk storage
divided into a cache (backed by tape) and a work area (no backup) as
users choose.  Users have access to a large tape storage facility of
500 Tbytes.  The maximum size of tape files is currently 20 GB, but
could be increased if needed.

For further information see http://lqcd.jlab.org/
