USQCD Collaboration, April 18–19, 2014

Call for Proposals

Dear Colleagues,

This message is a Call for Proposals for awards of time on the USQCD
computer resources dedicated to lattice QCD and other lattice field
theories. These are the clusters at Fermilab and JLab, the
GPU-clusters at Fermilab and JLab, the BG/Q at BNL, and awards to
USQCD from the INCITE program. The awards will be for calculations
that further the scientific goals of the collaboration, as laid out in
the recent white papers and USQCD proposals that can be found
at http://www.usqcd.org/collaboration, noting that an important reason
for funding is relevance to the DOE experimental program.

In this allocation year, we expect to distribute about

                  71 M BG/Q core-hours at BNL 
                 397 M Jpsi core-hours on clusters at FNAL and JLAB
                 8.9 M GPU-hours on GPU clusters at FNAL and JLAB

                 100 M XK7 core-hours at Oak Ridge OLCF (*)
                 240 M BG/Q core-hours at Argonne ALCF (*)

                       Percentage of available zero priority time on the
                          BG/Q (**) at ALCF

                  32 M Jpsi-equivalent core-hours which we
                       expect to charge for disk and tape usage.


(*) estimate based on earlier CY2014 allocation; available in the first 
    few months of CY2015 only.
(**) only for the second half of CY2014

Further remarks on the nature of the INCITE award and additional
requirements for projects that apply for resources on leadership class
computers are given in section (iv).


All members of the USQCD Collaboration are eligible to submit
proposals.  Those interested in joining the Collaboration should
contact Paul Mackenzie (mackenzie@fnal.gov).


Let us begin with some important dates:
=======================================

      February   7: this Call for Proposals
      March     14: Type A proposals due
      April     11: reports to proponents sent out
      April  18/19: All Hands' Meeting at JLab
      May       31: allocations announced
                    NOTE: zero-priority time at ALCF will start
                    before July 1 (see section (iv)).
      July       1: new allocations start

The Scientific Program Committee (SPC) will request presentations of
some proposals by their proponents at the All Hands' Meeting.
Proponents may in general request to make an oral presentation of
their proposals; however, the logistical constraints of the meeting
may preclude some talks.

The web site for the All Hands' Meeting is

          http://www.usqcd.org/meetings/allHands2014/

The requests can be of three types:

  A) requests for potentially large amounts of time on USQCD
     dedicated resources and/or leadership class computers, to
     support calculations of benefit for the whole USQCD
     Collaboration and/or addressing critical scientific needs.
     There is no minimum size to the request. However, small
     requests will not be considered suitable for leadership
     resources. Allocations are for one year on USQCD resources.

  B) requests for medium amounts of time on USQCD dedicated
     resources intended to support calculations in an early stage of
     development which address, or have the potential to address,
     scientific needs of the collaboration;
        --- No maximum, but encouraged to be below 2.5 M
            Jpsi-equivalent core-hours on clusters, or below 100 K
            GPU hours on GPU clusters. No suggested size for
            BNL BG/Q requests ---
     Allocations are for up to 6 months.

  C) requests for exploratory calculations, such as those needed to
     develop and/or benchmark code, acquire expertise on the use of
     the machines, or to perform investigations of limited scope.
     The amount of time used by such projects should not exceed
     100 K Jpsi core-hours on clusters or 10 K GPU-hours on the
     GPU-clusters. Requests for the BG/Q at BNL should be handled on
     a case-by-case basis.

Requests of Type A and B must be made in writing to the Scientific
Program Committee and are subject to the policies spelled out below.
These proposals must also specify the amount of disk and tape storage
needed.  Projects will be charged for new disks and tapes as well as
existing disk usage.  How this will be implemented is discussed in
section (iii).

Requests of Type B can be made at any time of the year, and will
start in the nearest month. Requests should be sent in an e-mail
message to
Robert Edwards (edwards@jlab.org).

Requests of Type C should be made in an e-mail message to

  Paul Mackenzie (mackenzie@fnal.gov) for clusters at FNAL,

  Robert Mawhinney (rdm@physics.columbia.edu) for the BG/Q at BNL,

  Chip Watson (Chip.Watson@jlab.org) for clusters at JLAB.

Type B requests will be considered up to a total not exceeding 15% of
the available time on USQCD hardware.  Type C requests will be
considered up to a total not exceeding 5% of the available time on
USQCD hardware.  If the demand exceeds such limits, the Scientific
Program Committee will reconsider the procedures for access.

Collaboration members who wish to perform calculations on USQCD
hardware or on resources awarded to USQCD through the INCITE program
can present requests according to procedures specified below. The
Scientific Program Committee would like to handle requests and awards
on leadership class computers and clusters in their respective units,
namely Blue Gene core hours or Cray core hours. Requests on the GPU
clusters will be handled in GPU hours, and requests for the BG/Q will
be handled in BG/Q core hours.  Conversion factors for clusters, GPUs,
and leadership class computers are given below.  As projects usually
are not flexible enough to switch between running on GPUs, BG/Q, and
clusters, we choose to allocate in their respective units. However, as
nominal conversion factors are available, we describe at the end of
the document the total resources available to USQCD in TFlop-years.

                       - o -

The rest of this message deals with requests of Types A and B.  It is
organized as follows:

  i)   policy directives regarding the usage of awarded resources;

  ii)  guidelines for the format of the proposals and deadline for
       submission;

  iii) procedures that will be followed to reach a consensus on the
       research programs and the allocations;

  iv)  policies for handling awards on leadership-class machines;

  v)   description of USQCD resources at Fermilab and JLAB.


i) Policy directives.

1) This Call for Proposals is for calculations that will further the
physics goals of the USQCD Collaboration, as stated in the proposals
for funding submitted to the DOE (see http://www.usqcd.org/), and have
the potential of benefiting additional research projects by members of
the Collaboration. In particular, the scientific goals are described
in the science sections of the recent SciDAC proposals and in the
recent white papers, which are posted on the same website.  It has
always been assumed that the most important reason for the funding we
receive is our relevance to experiment.  This year we were told
explicitly by our DOE managers that our funding would depend on our
success in helping DOE experiments to succeed.

2) Proposals of Type A are for investigations of very large scale,
which may require a substantial fraction of the available resources.
Proposals of Type B are for investigations in an early stage of
development; they are medium- to large-scale calculations that
require a smaller amount of resources. There is no strict lower limit
for requests within Type A proposals, and there is no upper limit on
Type B proposals. However, Type B requests for significantly more
than 2.5 M Jpsi-equivalent core-hours on clusters, or more than 100 K
GPU-hours on GPU clusters, will receive significant scrutiny.

Proposals that request time on the leadership-class computers at
Argonne and Oak Ridge should be of Type A and should demonstrate that
they (i) can efficiently make use of large partitions of leadership
class computers, and (ii) will run more efficiently on leadership
class computers than on clusters.

3) All Type A and B proposals are expected to address the scientific
needs of the USQCD Collaboration.  Proposals of Type A are for
investigations that benefit the whole USQCD Collaboration.  Thus it is
expected that the calculations will either produce data, such as
lattice gauge fields or quark propagators, that can be used by the
entire Collaboration, or that the calculations produce physics results
listed among the Collaboration's strategic goals.

Accordingly, proponents planning to generate multi-purpose data must
describe in their proposal what data will be made available to the
whole Collaboration, and how soon, and specify clearly what physics
analyses they would like to perform in an "exclusive manner" on these
data (see below), and the expected time to complete them.

Similarly, proponents planning important physics analyses should
explain how the proposed work meets our strategic goals and how its
results would interest the broader physics community.

Projects generating multi-purpose data are clear candidates to use
USQCD's award(s) on leadership-class computers.  Therefore, these
proposals must provide additional information on several fronts: they
should

  demonstrate the potential to be of broad benefit, for example by
  providing a list of other projects that would use the shared data,
  or by explaining how the strategic scientific needs of USQCD are
  addressed;

  present a roadmap for future planning, presenting, for example,
  criteria for deciding when to stop with one ensemble and start with
  another;

  discuss how they would cope with a substantial increase in allocated
  resources, from the portability of the code and storage needed to
  the availability of competent personnel to carry out the running.


Some projects carrying out strategic analyses are candidates for
running on the leadership-class machines. They should provide the same
information as above.

4) Proposals of Type B are not required to share data, although doing
so is considered a plus.  Type B proposals may also be scientifically
valuable even if not closely aligned with USQCD goals.  In that case
the proposal should contain a clear discussion of the physics
motivations.  If appropriate, Type B proposals may discuss
data-sharing and strategic importance as in the case of Type A
proposals.

5) The data that will be made available to the whole Collaboration
will have to be released promptly.  "Promptly" should be interpreted
with common sense.  Lattice gauge fields and propagators do not have
to be released as they are produced, especially if the group is still
testing the production environment.  On the other hand, it is not
considered reasonable to delay release of, say, 444 files, just
because the last 56 will not be available for a few months.

After a period during which such data will remain for the exclusive
use of the members of the USQCD Collaboration, and possibly of members
of other collaborations under reciprocal agreements, the data will be
made available worldwide as decided by the Executive Committee.

6) The USQCD Collaboration recognizes that the production of shared
data will generally entail a substantial amount of work by the
investigators generating the data.  They should therefore be given
priority in analyzing the data, particularly for their principal
physics interests.  Thus, proponents are encouraged to outline a set
of physics analyses that they would like to carry out with these data
in an exclusive manner and the amount of time that they would like to
reserve to themselves to complete such calculations.

When using the shared data, all other members of the USQCD
collaboration agree to respect such exclusivity.  Thus, they shall
refrain from using the data to reproduce the reserved or closely
similar analyses.  In its evaluation of the proposals the Scientific
Program Committee will in particular examine the requests for
exclusive use of the data, and will ask the proposers to revise a
request that is found to be too broad or otherwise excessive.
Once an accepted proposal has been posted on the Collaboration
website, it should be deemed by all parties that the request for
exclusive use has been accepted by the Scientific Program Committee.
Any dispute that may arise regarding the usage of such data will
have to be directed to the Scientific Program Committee for resolution
and all members of the Collaboration should abide by the decisions of
this Committee.

7) Usage of the USQCD software, developed under our SciDAC grants, is
recommended, but not required.  USQCD software is designed to be
efficient and portable, and its development leverages efforts
throughout the Collaboration.  If you use this software, the SPC can
be confident that your project can use USQCD resources efficiently.
Software developed outside the collaboration must be documented to
show that it performs efficiently on its target platform(s).
Information on portability is welcome, but not mandatory.

8) The investigators whose proposals have been selected by the
Scientific Program Committee for a possible award of USQCD resources
shall agree to have their proposals posted on a password protected
website, available only to our Collaboration, for consideration during
the All Hands' Meeting.

9) The investigators receiving a Type A allocation of time following
this Call for Proposals must maintain a public web page that
reasonably documents their plans, progress, and the availability of
data.  These pages should contain information that funding agencies
and review panels can use to determine whether USQCD is a well-run
organization.  The public web page need not contain unpublished
scientific results, or other sensitive information.

The SPC will not accept new proposals from old projects that still
have no web page.  Please communicate the URL to mackenzie@fnal.gov.


ii) Format of the proposals and deadline for submission.

The proposals should contain a title page with title, abstract and the
listing of all participating investigators.  The body, including
bibliography and embedded figures, should not exceed 12 pages in
length for requests of Type A, and 10 pages in length for requests of
Type B, with font size of 11pt or larger.  If necessary, further
figures, with captions but without text, can be appended, for a
maximum of 8 additional pages.  CVs, publication lists and similar
personal information are not requested and should not be submitted.
Title page, proposal body, and optional appended figures should be
submitted as a single PDF file, as an attachment to an e-mail message
sent to edwards@jlab.org.

The deadline for receipt of Type A proposals is Friday, March 14, 2014.

The last sentence of the abstract must state the total amount of
computer time requested: in Jpsi-equivalent core-hours for clusters,
in GPU-hours for GPU clusters, and in BG/Q core-hours for that
machine. Proposals lacking this information will be returned without
review (but will be reviewed if the corrected proposal is returned
quickly and without other changes).

The body of the proposal should contain the following information, if
possible in the order below:

1) The physics goals of the calculation.

2) The computational strategy, including such details as gauge and
fermionic actions, parameters, and computational methods.

3) The software used, including a description of the main algorithms
and the code base employed.  If you use USQCD software, it is not
necessary to document performance in the proposal.  If you use your
own code base, then the proposal should provide enough information to
show that it performs efficiently on its target platform(s).
Information on portability is welcome, but not mandatory.  As feedback
for the software development team, proposals may include an
explanation of deficiencies of the USQCD software for carrying out the
proposed work.

4) The amount and type of resources requested. Here one should also
state which machine is most desirable and why, and whether it is
feasible or desirable to run some parts of the proposed work on one
machine, and other parts on another.  If relevant, proposals of Type A
should indicate longer-term computing needs here.

The Scientific Program Committee will use the following table to convert:

      1 J/psi    core-hour = 1      Jpsi core-hour
      1 Ds       core-hour = 1.33   Jpsi core-hour
      1 9q       core-hour = 2.2    Jpsi core-hour
      1 10q      core-hour = 2.3    Jpsi core-hour
      1 12s      core-hour = 2.3    Jpsi core-hour
      1 XK7      core-hour = 1.0    Jpsi core-hour
      1 BG/Q     core-hour = 1.64   Jpsi core-hour
      1 C2050    GPU hour  = 82     Jpsi equivalent core-hour
      1 K20      GPU hour  = 164    Jpsi equivalent core-hour
      1 Phi      MIC hour  = 164    Jpsi equivalent core-hour
      1 Jpsi     core-hour = 1.22   GFlop/sec-hour

The above numbers are based on appropriate averages of asqtad, DWF,
and Clover fermion inverters. In the case of the XK7, performance is
based on a Clover inverter run on the GPUs at leadership scale. The
conversion of GPU-hours to Jpsi core-hours is based on the
average of application performance on user jobs across all GPU systems
at FNAL and JLab (including gamer as well as non-gamer cards).  See
http://lqcd.fnal.gov/performance.html for details.
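
For convenience, the following sketch (in Python; illustrative only,
not an official SPC tool) shows how the conversion table above can be
applied to express a mixed request as a single number of
Jpsi-equivalent core-hours.  The request in the example is made up
for illustration.

    # Conversion factors from the table above
    # (Jpsi-equivalent core-hours per unit of machine time).
    TO_JPSI = {
        "Jpsi core-hour":  1.0,
        "Ds core-hour":    1.33,
        "9q core-hour":    2.2,
        "10q core-hour":   2.3,
        "12s core-hour":   2.3,
        "XK7 core-hour":   1.0,
        "BG/Q core-hour":  1.64,
        "C2050 GPU-hour":  82.0,
        "K20 GPU-hour":    164.0,
        "Phi MIC-hour":    164.0,
    }

    def jpsi_equivalent(request):
        """Total of a request {unit: hours} in Jpsi-equivalent core-hours."""
        return sum(TO_JPSI[unit] * hours for unit, hours in request.items())

    # Hypothetical request mixing cluster, BG/Q, and GPU time:
    request = {"Ds core-hour": 2.0e6, "BG/Q core-hour": 5.0e6,
               "K20 GPU-hour": 5.0e4}
    # 2.0*1.33 + 5.0*1.64 + 0.05*164 = 2.66 + 8.20 + 8.20 = 19.06 M
    print(f"{jpsi_equivalent(request)/1e6:.2f} M Jpsi-equivalent core-hours")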

In addition to CPU time, proposals must specify how much mass storage
is needed.  The resources section of the proposal should state how
much existing storage is in use, and how much new storage is needed,
for disk and tape, in TBytes.  Please also restate the storage
request in Jpsi-equivalent core-hours, using the following conversion
factors, which reflect the current replacement costs for disk storage
and tapes:

      1 Tbyte disk =  20 K Jpsi-equivalent core-hour
      1 Tbyte tape =   3 K Jpsi-equivalent core-hour

Projects using disk storage will be charged 25% of these costs every
three months. Projects will be charged for tape usage when a file is
written at the full cost of tape storage; when tape files are deleted,
they will receive a 40% refund of the charge.

Proposals should discuss whether these files will be used by one, a
few, or several project(s).  The cost for files (e.g., gauge
configurations) that are used by several projects will be borne by
USQCD and not by a specific physics project.  The charge for files
used by a single project will be deducted from the computing
allocation: projects are thus encouraged to figure out whether it is
more cost-effective to store or re-compute a file.  If a few (2-3)
projects share a file, they will share the charge.
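
As a worked illustration of the storage-charging rules above (a
minimal sketch in Python; the function names and example numbers are
ours, not SPC policy code):

    DISK_JPSI_PER_TB = 20_000  # 1 TByte disk = 20 K Jpsi-equiv. core-hours
    TAPE_JPSI_PER_TB =  3_000  # 1 TByte tape =  3 K Jpsi-equiv. core-hours

    def disk_charge(tbytes, quarters, n_projects=1):
        """Disk: 25% of replacement cost per quarter, split if shared."""
        return 0.25 * DISK_JPSI_PER_TB * tbytes * quarters / n_projects

    def tape_charge(tb_written, tb_deleted=0.0, n_projects=1):
        """Tape: full cost when written; 40% refund on deleted files."""
        return TAPE_JPSI_PER_TB * (tb_written - 0.40 * tb_deleted) / n_projects

    # 50 TBytes of disk held for a full year (4 quarters) by one project
    # costs the full replacement cost, 50 * 20 K = 1 M core-hours:
    print(disk_charge(50, quarters=4))        # 1000000.0
    # 100 TBytes written to tape, 20 TBytes later deleted:
    print(tape_charge(100, tb_deleted=20))    # 300000 - 24000 = 276000.0
    # The same tape usage shared equally by two projects:
    print(tape_charge(100, tb_deleted=20, n_projects=2))  # 138000.0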

5) If relevant, what data will be made available to the entire
Collaboration, and the schedule for sharing it.

6) What calculations the investigators would like to perform in an
"exclusive manner" (see above in the section on policy directives),
and for how long they would like to reserve to themselves this
exclusive right.

iii) Procedure for the awards.

The Scientific Program Committee will receive proposals until the
deadline of Friday, March 14, 2014.  Proposals not stating the total
request in the last sentence of the abstract will be returned without
review (but will be reviewed if the corrected proposal is returned
quickly and without other changes).

Proposals that are considered meritorious and conforming to the goals
of the Collaboration will be posted on the web at
http://www.usqcd.org/, in the Collaboration's password-protected area.
Proposals recommended for awards in previous years can be found there
too.

The Scientific Program Committee (SPC) will make a preliminary
assessment of the proposals.  On April 11, 2014, the SPC will send a
report to the proponents raising any concerns about the proposal.

A few proposals will be presented and discussed at the All Hands'
Meeting, April 18-19, 2014, at JLAB.  Following the All Hands' Meeting
the SPC will determine a set of recommendations on the awards. The
quality of the initial proposal, the proponents' response to concerns
raised in the written report, and the views of the Collaboration
expressed at the All Hands' Meeting will all influence the outcome.
The SPC will send its recommendations to the Executive Committee after
the All Hands' Meeting, and inform the proponents once the
recommendations have been accepted by the Executive Committee.  The
successful proposals and the size of their awards will be posted on
the web.

The new USQCD allocations will commence July 1, 2014.

Scientific publications describing calculations carried out with these
awards should acknowledge the use of USQCD resources, by including the
following sentence in the Acknowledgments:

"Computations for this work were carried out in part on facilities of
the USQCD Collaboration, which are funded by the Office of Science of
the U.S. Department of Energy."

Projects whose sole source of computing is USQCD should omit the
phrase "in part".


iv) INCITE award CY2014/2015 and zero-priority time at Argonne

Since 2007, USQCD policy has been to apply as a Collaboration for time
on the "leadership-class" computers, installed at Argonne and Oak
Ridge National Laboratories, and allocated through the DOE's INCITE
Program (see http://hpc.science.doe.gov/). The first two successful
three-year INCITE grant periods ended in 12/2010 and 12/2013,
respectively. A new three-year grant proposal was successful and
received funding in CY2014.

For CY2014 USQCD was awarded 240 M BG/Q core-hours on the BG/Q at
Argonne, and 100 M XK7 core-hours on the Cray XK7 at Oak Ridge. We
anticipate receiving a similar allocation in CY2015.

In accordance with observed usage patterns, we will distribute the
entire regular INCITE time at ANL that is available on 01/15.
However, we
expect this time to be consumed quickly - in the first quarter of the
year. Thus, there is no regular INCITE time at ANL available later in
the year. Similarly, we will distribute the entire regular INCITE time
at ORNL available on 01/15 and expect it to be consumed in the first
half of the year.

In addition we expect to receive in CY2014 zero-priority time on
the BG/Q at Argonne.  Based on previous usage and availability, 
we will distribute zero-priority time starting in 2014 as soon as our
INCITE allocation has been consumed and zero-priority time becomes 
available.  This is expected in April 2014.
As the total amount of time cannot be reliably estimated
at this time, we will assign only percentages of zero-priority
usage. The SPC may readjust these percentage allocations based upon
observed usage. The Oak Ridge facility does not provide a
zero-priority queue.

The usage of the INCITE allocations should be monitored by all PIs of
INCITE projects on the USQCD web page:

        http://www.mcs.anl.gov/~osborn/usqcd-spc/2013-14-mira.html


v) USQCD computing resources.

The Scientific Program Committee will allocate 7200 hours/year to Type
A and Type B proposals.  Of the 8766 hours in an average year the
facilities are supposed to provide 8000 hours of uptime.  We then
reserve 400 hours (i.e., 5%) for each host laboratory's own use, and
another 400 hours for Type C proposals and contingencies.
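
The "total:" lines in the machine descriptions below all follow the
same arithmetic: 7200 allocatable hours per year, times nodes, times
cores (or GPUs) per node, times the appropriate conversion factor.  A
minimal sketch (in Python, for illustration only):

    ALLOC_HOURS = 7200  # allocatable hours/year, after uptime and reserves

    def yearly_capacity(nodes, units_per_node, jpsi_factor=1.0, fraction=1.0):
        """Yearly capacity: hours * nodes * cores (or GPUs) * conversion."""
        return ALLOC_HOURS * nodes * units_per_node * jpsi_factor * fraction

    # FNAL "Ds": 418 nodes, 32 cores/node, 1.33 Jpsi per Ds core-hour:
    print(yearly_capacity(418, 32, 1.33) / 1e6)   # ~128 M Jpsi-equivalent
    # JLab "9q": 320 nodes, 8 cores/node, 2.2 Jpsi per 9q core-hour:
    print(yearly_capacity(320, 8, 2.2) / 1e6)     # ~40.5 M Jpsi-equivalent
    # BNL BG/Q: 60% of a 1024-node rack with 16 cores/node, in BG/Q units:
    print(yearly_capacity(1024, 16, fraction=0.60) / 1e6)  # ~70.8 M BG/Q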


==================================
At BNL:

60% of a 1024 node BG/Q rack
  16 cores/node, up to 4 threads per core
  16 GB memory/node
  (10% of the rack is BNL time donated to USQCD;
   50% of the rack is owned by USQCD)
     total: 7200*1024*16*0.60 = 70.8 M BG/Q core-hours

The front-end has more than 100 TBytes of disk space available. There
is no tape access readily available. However, more temporary storage
on disk can be provided during active running on the BG/Q.


==================================
At FNAL:

418 node cluster ("Ds")
  Eight-core, quad-socket 2.0 GHz AMD Opteron (Magny-Cours) nodes
  32 cores per node
  64 GB memory/node
  1 Ds core-hour =  1.33 Jpsi-equivalent core-hours
      total: 7200*418*32*1.33 = 128 M Jpsi-equivalent core-hours

224 node cluster ("Bc")
  Eight-core, quad-socket 2.8 GHz AMD Opteron (Abu Dhabi) nodes
  32 cores per node
  64 GB memory/node
  1 core-hour = 1.48 JPsi-equivalent core-hours
      total: 7200*224*32*1.48 = 76.4 M Jpsi-equivalent core-hours

76 node GPU cluster ("Dsg")
  Quad-core, dual-socket Intel E5630 nodes
  48 GB memory/node
  2 GPUs NVIDIA M2050 (Fermi Tesla) per node
  (152 total GPUs available)
  GPU memory (ECC on) 2.7 GB / GPU
      total: 7200*152 =  1094 K GPU-hours

190 node cluster (FY14 capacity, name TBD, estimate)
  Eight-core, dual-socket 2.6 GHz Intel Xeon (Ivy Bridge) nodes
  16 cores per node
  64 GB memory/node
  1 core-hour = 2.96 JPsi-equivalent core-hours
  Running from about Sept 1 (10 months)
      total: 10/12*7200*190*16*2.96 = 54.0 M Jpsi-equivalent core-hours

38 node cluster (FY14 gpu, name TBD, estimate)
  Eight-core, dual-socket 2.6 GHz Intel Xeon (Ivy Bridge) nodes
  64 GB memory/node
  4 GPUs NVIDIA K20x (Kepler Tesla) per node
  (152 total GPUs available)
  GPU memory (ECC on) 4.8 GB/GPU
  Running from about Sept 1 (10 months)
      total: 10/12*7200*152 = 912 K GPU-hours
  (Possible alternative: 30 nodes each with 4 NVIDIA K40,
      11.5 GB/GPU, total = 10/12*7200*120 = 720 K GPU-hours)

These clusters will share about 1000 TBytes of disk space in Lustre
file systems. Tape access is also available.

For further information see http://www.usqcd.org/fnal/

==================================
At JLAB:

320 node cluster ("9q")
  Quad-core, dual-processor  2.4 GHz Intel Nehalem nodes
  8 cores per node
  24 GB memory/node, QDR IB fabric in partitions of up to 128 nodes
  1 9q core-hour  =  2.2 Jpsi-equivalent core-hours
      total: 7200*320*8*2.2 =   40.5 M  Jpsi-equivalent core-hours

192 node cluster ("10q")
  Quad-core, dual-processor  2.53 GHz Intel Westmere nodes
  8 cores per node
  24 GB memory/node, QDR IB fabric in partitions of 32 nodes
  1 10q core-hour =  2.3 Jpsi-equivalent core-hours
      total: 7200*192*8*2.3 =   25.4 M  Jpsi-equivalent core-hours

276 node cluster ("12s")
  Eight-core, dual-processor Intel Sandy Bridge nodes
  16 cores per node
  32 GB memory/node
  QDR network card, with full bi-sectional bandwidth network fabric
  1 12s core-hour = 2.3 Jpsi-equivalent core-hours
      total: 7200*276*16*2.3 =  73.1 M  Jpsi-equivalent core-hours

147 node GPU cluster at JLab  ("9g", "10g", "12k")
  32 nodes with 4 NVIDIA C2050/M2050 (Fermi Tesla) GPUs = 128
  18 nodes with 4 GTX-480 (Fermi gamer) GPUs (1 GTX-480 = 1.55 C2050) = 111.6
  23 nodes with 4 GTX-580 (Fermi gamer) GPUs (1 GTX-580 = 1.70 C2050) = 156.4
  32 nodes with 2 dual GTX-690 (Fermi gamer) GPUs (1 GTX-690 = 2 @ 1.80 C2050) =  230.4 
  42 nodes with 4 NVIDIA K20m (Kepler Tesla) GPUs (1 K20m = 2 C2050) = 336
  (588 total GPUs available: in C2050 units -> 962 GPUs total)
      total: 7200*962 = 6.9 M GPU hours

12 node MIC cluster at JLab  ("12m")
  12 node cluster equipped with 4 Intel Xeon Phi 5110P MICs
  (48 total MICs available)
      total: 7200*48  = 345 K MIC hours

For further information see also http://lqcd.jlab.org.  Machine
descriptions can be found at

https://wiki.jlab.org/cc/external/wiki/index.php/New_Users_Start_Here

At JLAB, the systems will have access to about 800 TBytes of disk
space. Tape access is also available.

==============================================================
Resource estimates

Based upon the performance conversions used above, the total resources
available in this call are shown below. The INCITE time is based on an
estimated allocation for CY2015 and does not include the zero-priority
time at ALCF.

BG/Q (BNL):         116 M Jpsi ->  16 TF-yr
Clusters (FNAL):    258 M Jpsi ->  36 TF-yr
Clusters (JLab):    139 M Jpsi ->  19 TF-yr
Fermi GPUs (FNAL):  164 M Jpsi ->  23 TF-yr
Fermi GPUs (JLab):  566 M Jpsi ->  79 TF-yr
Intel MIC (JLab):    56 M Jpsi ->   8 TF-yr
Total (USQCD):     1299 M Jpsi -> 181 TF-yr

OLCF (GPU):         100 M Jpsi ->  14 TF-yr
ALCF:               394 M Jpsi ->  55 TF-yr
Total (INCITE):     494 M Jpsi ->  69 TF-yr
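
The TFlop-year figures above follow from the nominal rate 1 Jpsi
core-hour = 1.22 GFlop/sec-hour, sustained over the 8766 hours of an
average year.  A minimal sketch (in Python, for illustration only):

    GFLOP_SEC_HOUR_PER_JPSI = 1.22   # from the conversion table above
    HOURS_PER_YEAR = 8766

    def tf_years(m_jpsi_core_hours):
        """Convert M Jpsi-equivalent core-hours to sustained TFlop-years."""
        gflop_hours = m_jpsi_core_hours * 1e6 * GFLOP_SEC_HOUR_PER_JPSI
        return gflop_hours / HOURS_PER_YEAR / 1e3  # GFlop-yr -> TFlop-yr

    print(round(tf_years(116)))   # BG/Q (BNL): 16
    print(round(tf_years(394)))   # ALCF:       55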
