All Hands' Meeting 2015
USQCD Collaboration Meeting
Fermi National Accelerator Laboratory
May 1–2, 2015

Call for Proposals

From: Anna Hasenfratz 
Subject: Call for Proposals
Date: February 6, 2015 at 7:28:03 AM CST
To: 

Dear Colleagues,

This message is a Call for Proposals for awards of time on the
USQCD computer resources dedicated to lattice QCD and other lattice
field theories. These are the clusters at Fermilab and JLab,
the GPU-clusters at Fermilab and JLab, the BG/Q at BNL, and awards
to USQCD from the INCITE program. The awards will be for calculations
that further the scientific goals of the collaboration, as laid out in
the recent white papers and USQCD proposals that can be found at
http://www.usqcd.org/collaboration, 
noting that an important reason for funding is relevance to the DOE
experimental program.

In this allocation year, we expect to distribute about

                 71 M BG/Q core-hours at BNL
                451 M Jpsi core-hours on clusters at FNAL and JLAB
                9.5 M GPU-hours on GPU clusters at FNAL and JLAB

                100 M XK7 core-hours at Oak Ridge OLCF (*)
                180 M BG/Q core-hours at Argonne ALCF (*)

                a percentage of the available zero-priority time
                            on the BG/Q (**) at ALCF

                 32 M Jpsi-equivalent core-hours, which we expect
                        to charge for disk and tape usage.

(*) Estimates based on the current CY2015 allocation; available in the
first few months of CY2016 only.
(**) For the USQCD allocation year July 1 2015 through June 30 2016.

There is one further USQCD resource, the USQCD allocation on Blue
Waters, which is not yet generally available as part of this Call for
Proposals.  A new three-year Blue Waters proposal is due later this
year.  The future of the USQCD Blue Waters allocation and the best
steps forward are currently under active discussion by the Executive
Committee.

Further remarks on the nature of the INCITE award and additional
requirements for projects that apply for resources on leadership
class computers are given in section (iv).

All members of the USQCD Collaboration are eligible to submit proposals.
Those interested in joining the Collaboration should contact Paul
Mackenzie (mackenzie@fnal.gov).

Let us begin with some important dates:
=======================================

      February   6: this Call for Proposals
      March     13: Type A proposals due
      April     10: reports to proponents sent out
      May      1/2: All Hands' Meeting at FNAL (ending ~5pm)
      May       31: allocations announced
                    NOTE: zero-priority time at ALCF will start
                    before July 1 (see section (iv))
      July       1: new allocations start

The Scientific Program Committee (SPC) will request some number
of presentations by the proponents of proposals at the All
Hands' Meeting.  Proponents may in general request to make an
oral presentation of their proposals; however, the logistical
constraints of the meeting may preclude some talks.
The web site for the All Hands' Meeting is
          http://www.usqcd.org/meetings/allHands2015/

The requests can be of three types:

  A) requests for potentially large amounts of time on USQCD  
     dedicated resources and/or leadership class computers, to
     support calculations of benefit for the whole USQCD
     Collaboration and/or addressing critical scientific needs.
     There is no minimum size to the request; however, small
     requests will not be considered suitable for leadership
     resources. Allocations are for one year on USQCD resources.

  B) requests for medium amounts of time on USQCD dedicated
     resources, intended to support calculations in an early stage
     of development which address, or have the potential to
     address, scientific needs of the collaboration;
        --- No maximum, but requests are encouraged to be
            2.5 M Jpsi-equivalent core-hours or less on clusters,
            or 100 K GPU-hours or less on GPU clusters.
            No suggested size for BNL BG/Q requests ---
     Allocations are for up to 6 months.

  C) requests for exploratory calculations, such as those needed
     to develop and/or benchmark code, acquire expertise on the
     use of the machines, or to perform investigations of limited
     scope.
     The amount of time used by such projects should not exceed
     100 K Jpsi core-hours on clusters or 10 K GPU-hours on the
     GPU clusters. Requests for the BG/Q at BNL will be handled on
     a case-by-case basis.

Requests of Type A and B must be made in writing to the
Scientific Program Committee and are subject to the policies spelled out
below.
These proposals must also specify the amount of disk and tape
storage needed.  Projects will be charged for new disks and tapes as
well as existing disk usage.  How this will be implemented is discussed
in section (iii).

Requests of Type B can be made at any time of the year, and will start
in the nearest month. Requests should be sent in an e-mail message to
Anna Hasenfratz (anna@eotvos.colorado.edu).

Requests of Type C should be made in an e-mail message to
  Paul Mackenzie (mackenzie@fnal.gov) for clusters at FNAL,
  Robert Mawhinney (rdm@physics.columbia.edu) for the BG/Q at BNL,
  Chip Watson (Chip.Watson@jlab.org) for clusters at JLAB.

Type B requests will be considered up to a total not exceeding 15% of the
available time on USQCD hardware.  Type C requests will be considered up
to a total not exceeding 5% of the available time on USQCD hardware.  If
the demand exceeds such limits, the Scientific Program Committee will
reconsider the procedures for access.

Collaboration members who wish to perform calculations on USQCD hardware
or on resources awarded to USQCD through the INCITE program can present
requests according to the procedures specified below. The Scientific
Program Committee would like to handle requests and awards on
leadership-class computers and clusters in their respective units,
namely Blue Gene core-hours or Cray core-hours. Requests for the BG/Q
will be handled in BG/Q core-hours, and requests on the GPU clusters
will be handled in GPU-hours.  Conversion factors for clusters, GPUs,
and leadership-class computers are given below. As projects usually are
not flexible enough to switch between running on GPUs, the BG/Q, and
clusters, we choose to allocate in these respective units. In addition,
since the various GPU clusters have quite different properties, it may
be useful if proposals asking for GPU time include a preference, if
any, for a particular USQCD GPU cluster.  However, as nominal
conversion factors are available, we describe at the end of this
document the total resources available to USQCD in TFlop-years.

                       - o -

The rest of this message deals with requests of Types A and B.  It
is organized as follows:

  i)   policy directives regarding the usage of awarded resources;

  ii)  guidelines for the format of the proposals and deadline
for submission;

  iii) procedures that will be followed to reach a consensus on
the research programs and the allocations;

  iv)  policies for handling awards on leadership-class machines;

  v)   description of USQCD resources at BNL, Fermilab, and JLAB.


i) Policy directives.

1) This Call for Proposals is for calculations that will further
the physics goals of the USQCD Collaboration, as stated in the
proposals for funding submitted to the DOE (see http://www.usqcd.org/),
and have the potential of benefiting additional research projects by
members of the Collaboration. In particular, the scientific goals are
described in the science sections of the recent SciDAC proposals and in
the recent white papers, which are posted on the same web site.  It is
important for our continued funding that we demonstrate our ongoing
role in helping DOE experiments to succeed.

2) Proposals of Type A are for investigations of very large scale, which
may require a substantial fraction of the available resources. Proposals
of Type B are for investigations in an early stage of development, of
medium to large scale, which require a smaller amount of resources.
There is no strict lower limit for requests within Type A proposals, and
there is no upper limit on Type B proposals. However, Type B requests
for significantly more than 2.5 M Jpsi-equivalent core-hours on
clusters, or more than 100 K GPU-hours on GPU clusters, will receive
significant scrutiny.

Proposals that request time on the leadership-class computers
at Argonne and Oak Ridge should be of Type A and should demonstrate
that they (i) can efficiently make use of large partitions of
leadership class computers, and (ii) will run more efficiently on
leadership class computers than on clusters.

3) All Type A and B proposals are expected to address the
scientific needs of the USQCD Collaboration.  Proposals of Type A are
for investigations that benefit the whole USQCD Collaboration.  Thus it
is expected that the calculations will either produce data, such
as lattice gauge fields or quark propagators, that can be used by
the entire Collaboration, or that the calculations produce physics
results listed among the Collaboration's strategic goals.
Accordingly, proponents planning to generate multi-purpose data
must describe in their proposal what data will be made available to
the whole Collaboration, and how soon, and specify clearly what
physics analyses they would like to perform in an "exclusive manner" on
these data (see below), and the expected time to complete them.
Similarly, proponents planning important physics analyses
should explain how the proposed work meets our strategic goals and how
its results would interest the broader physics community.

Projects generating multi-purpose data are clear candidates to
use USQCD's award(s) on leadership-class computers.  Therefore,
these proposals must provide additional information on several fronts:
they should
   - demonstrate the potential to be of broad benefit, for example
by providing a list of other projects that would use the shared data,
or by explaining how the strategic scientific needs of USQCD are
addressed;
   - present a roadmap for future planning, presenting, for
example, criteria for deciding when to stop with one ensemble and start
with another;
   - discuss how they would cope with a substantial increase in
allocated resources, from the portability of the code and the storage
needed to the availability of competent personnel to carry out the
running.

Some projects carrying out strategic analyses are also candidates for
running on the leadership-class machines. They should provide the
same information as above.

4) Proposals of Type B are not required to share data, although doing
so is a plus.  Type B proposals may also be scientifically valuable
even if not closely aligned with USQCD goals.  In that case the
proposal should contain a clear discussion of the physics motivations.
If appropriate, Type B proposals may discuss data-sharing and strategic
importance as in the case of Type A proposals.

5) The data that will be made available to the whole Collaboration will
have to be released promptly.  "Promptly" should be interpreted with
common sense.  Lattice gauge fields and propagators do not have to be
released as they are produced, especially if the group is still testing
the production environment.  On the other hand, it is not considered
reasonable to delay release of, say, 444 files, just because the last 56
will not be available for a few months.

After a period during which such data will remain for the exclusive use
of the members of the USQCD Collaboration, and possibly of members of
other collaborations under reciprocal agreements, the data will be made
available worldwide, as decided by the Executive Committee.

6) The USQCD Collaboration recognizes that the production of shared data
will generally entail a substantial amount of work by the investigators
generating the data.  They should therefore be given priority in
analyzing the data, particularly for their principal physics interests.
Thus, proponents are encouraged to outline a set of physics analyses
that they would like to carry out with these data in an exclusive
manner, and the amount of time that they would like to reserve to
themselves to complete such calculations.

When using the shared data, all other members of the USQCD
Collaboration agree to respect such exclusivity.  Thus, they
shall refrain from using the data to reproduce the reserved or
closely similar analyses.  In its evaluation of the proposals, the
Scientific Program Committee will in particular examine the requests
for exclusive use of the data, and will ask the proposers to revise
them if a request is found too broad or otherwise excessive.  Once an
accepted proposal has been posted on the Collaboration website, it
should be deemed by all parties that the request for exclusive use has
been accepted by the Scientific Program Committee.  Any dispute that
may arise regarding the usage of such data must be directed to the
Scientific Program Committee for resolution, and all members of the
Collaboration should abide by the decisions of this Committee.

7) Usage of the USQCD software, developed under our SciDAC grants,
is recommended, but not required.  USQCD software is designed to
be efficient and portable, and its development leverages
efforts throughout the Collaboration.  If you use this software, the SPC
can be confident that your project can use USQCD resources
efficiently. Software developed outside the collaboration must be
documented to show that it performs efficiently on its target
platform(s). Information on portability is welcome, but not mandatory.

8) The investigators whose proposals have been selected by the Scientific
Program Committee for a possible award of USQCD resources shall agree to
have their proposals posted on a password protected website, available
only to our Collaboration, for consideration during the All Hands'
Meeting.

9) The investigators receiving a Type A allocation of time following this
Call for Proposals must maintain a public web page that reasonably
documents their plans, progress, and the availability of data.  These
pages should contain information that funding agencies and review panels
can use to determine whether USQCD is a well-run organization.  The
public web page need not contain unpublished scientific results, or other
sensitive information.

The SPC will not accept new proposals from existing projects that still
have no web page.  Please communicate the URL to mackenzie@fnal.gov.

ii) Format of the proposals and deadline for submission.

The proposals should contain a title page with title, abstract and
the listing of all participating investigators.  The body,
including bibliography and embedded figures, should not exceed 12 pages
in length for requests of Type A, and 10 pages in length for requests
of Type B, with font size of 11pt or larger.  If necessary,
further figures, with captions but without text, can be appended, for
a maximum of 8 additional pages.  CVs, publication lists and
similar personal information are not requested and should not be
submitted. Title page, proposal body, and optional appended figures
should be submitted as a single PDF file, attached to an e-mail message
sent to anna@eotvos.colorado.edu.

The deadline for receipt of Type A proposals is Friday, March 13, 2015.

The last sentence of the abstract must state the total amount of
computer time requested: in Jpsi-equivalent core-hours for clusters, in
GPU-hours for GPU clusters, and in BG/Q core-hours for the BG/Q
machines.  Proposals lacking this information will be returned without
review (but will be reviewed if the corrected proposal is returned
quickly and without other changes).

The body of the proposal should contain the following information,
if possible in the order below:

1) The physics goals of the calculation.
2) The computational strategy, including such details as gauge
and fermionic actions, parameters, and computational methods.
3) The software used, including a description of the main algorithms and
the code base employed.  If you use USQCD software, it is not necessary
to document performance in the proposal.  If you use your own code base,
then the proposal should provide enough information to show that it
performs efficiently on its target platform(s).
Information on portability is welcome, but not mandatory.  As
feedback for the software development team, proposals may include
an explanation of deficiencies of the USQCD software for carrying out
the proposed work.

4) The amount and type of resources requested. Here one should also state
which machine is most desirable and why, and whether it is feasible or
desirable to run some parts of the proposed work on one machine, and
other parts on another.  If relevant, proposals of Type A should
indicate longer-term computing needs here.

The Scientific Program Committee will use the following table to convert:

      1 J/psi    core-hour = 1      Jpsi core-hour
      1 Ds       core-hour = 1.33   Jpsi core-hour
      1 9q       core-hour = 2.2    Jpsi core-hour
      1 10q      core-hour = 2.3    Jpsi core-hour
      1 12s      core-hour = 2.3    Jpsi core-hour
      1 XK7      core-hour = 1.0    Jpsi core-hour
      1 BG/Q     core-hour = 1.64   Jpsi core-hour
      1 C2050    GPU hour  = 82     Jpsi equivalent core-hour
      1 K20      GPU hour  = 172    Jpsi equivalent core-hour
      1 K40      GPU hour  = 224    Jpsi equivalent core-hour
      1 Phi      MIC hour  = 164    Jpsi equivalent core-hour
      1 Jpsi     core-hour = 1.22   GFlop/sec-hour

The above numbers are based on appropriate averages of asqtad, DWF,
and clover fermion inverters. In the case of the XK7, performance is
based on a clover inverter run on the GPUs at leadership scale. The
conversion of GPU-hours to Jpsi-equivalent core-hours is based on the
average of application performance on user jobs across all GPU systems
at FNAL and JLab (including gamer as well as non-gamer cards).  See
http://lqcd.fnal.gov/performance.html for details.
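
As an illustration, the conversion table can be applied mechanically.
The following Python sketch is not an official USQCD tool; the names
JPSI_FACTOR and to_jpsi are ours.  It converts a request in native
machine hours into Jpsi-equivalent core-hours using the factors above:

    # Jpsi-equivalent core-hours per native hour, from the table above.
    JPSI_FACTOR = {
        "jpsi": 1.0,  "ds": 1.33, "9q": 2.2,  "10q": 2.3, "12s": 2.3,
        "xk7": 1.0,   "bgq": 1.64,
        "c2050": 82.0, "k20": 172.0, "k40": 224.0, "phi": 164.0,
    }

    def to_jpsi(hours, machine):
        """Convert native hours on `machine` to Jpsi-equivalent core-hours."""
        return hours * JPSI_FACTOR[machine]

    # Example: 2 M BG/Q core-hours -> 3.28 M Jpsi-equivalent core-hours.
    print(to_jpsi(2.0e6, "bgq"))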

In addition to CPU time, proposals must specify how much mass storage
is needed.  The resources section of the proposal should state how
much existing storage is in use, and how much new storage is needed,
for disk and tape, in TBytes.  In addition, please also restate
the storage request in Jpsi-equivalent core-hours, using the following
conversion factors, which reflect the current replacement costs for
disk storage and tapes:

      1 Tbyte disk =  20 K Jpsi-equivalent core-hour
      1 Tbyte tape =   3 K Jpsi-equivalent core-hour

Projects using disk storage will be charged 25% of these costs
every three months. Projects will be charged for tape usage, at the
full cost of tape storage, when a file is written; when tape files are
deleted, they will receive a 40% refund of the charge.

Proposals should discuss whether these files will be used by one,
a few, or several project(s).  The cost for files (e.g., gauge
configurations) that are used by several projects will be borne by
USQCD and not by a specific physics project.  The charge for files used
by a single project will be deducted from the computing allocation:
projects are thus encouraged to figure out whether it is more
cost-effective to store or to re-compute a file.  If a few (2-3)
projects share a file, they will share the charge.
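
As a worked example, the charging rules above can be written out in a
few lines of Python (a sketch only; the function names are ours, not
part of any USQCD accounting tool):

    DISK_COST_PER_TB = 20_000  # Jpsi-equivalent core-hours per TByte of disk
    TAPE_COST_PER_TB = 3_000   # Jpsi-equivalent core-hours per TByte of tape

    def disk_charge_per_quarter(tbytes):
        """Disk is charged at 25% of replacement cost every three months."""
        return 0.25 * DISK_COST_PER_TB * tbytes

    def tape_charge(tbytes_written, tbytes_deleted=0.0):
        """Tape is charged in full when written; deletions refund 40%."""
        return TAPE_COST_PER_TB * (tbytes_written - 0.40 * tbytes_deleted)

    # Example: 10 TBytes held on disk for a full year, plus 50 TBytes
    # written to tape, of which 20 TBytes are later deleted.
    print(4 * disk_charge_per_quarter(10.0))  # 200 K Jpsi-equivalent core-hours
    print(tape_charge(50.0, 20.0))            # 126 K Jpsi-equivalent core-hours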

5) If relevant, what data will be made available to the
entire Collaboration, and the schedule for sharing it.

6) What calculations the investigators would like to perform in
an "exclusive manner" (see above in the section on policy
directives), and for how long they would like to reserve to themselves
this exclusive right.

iii) Procedure for the awards.

The Scientific Program Committee will receive proposals until
the deadline of Friday, March 13, 2015.  Proposals not stating the
total request in the last sentence of the abstract will be returned
without review (but will be reviewed if the corrected proposal is
returned quickly and without other changes).

Proposals that are considered meritorious and conforming to the goals of
the Collaboration will be posted on the web at http://www.usqcd.org/, in
the Collaboration's password-protected area.
Proposals recommended for awards in previous years can be found there too.

The Scientific Program Committee (SPC) will make a preliminary assessment
of the proposals.  On April 10, 2015, the SPC will send a report to the
proponents raising any concerns about the proposal.

A few proposals will be presented and discussed at the All Hands'
Meeting, May 1-2, 2015, at FNAL.

Following the All Hands' Meeting the SPC will determine a set
of recommendations on the awards. The quality of the initial
proposal, the proponents' response to concerns raised in the written
report, and the views of the Collaboration expressed at the All
Hands’ Meeting will all influence the outcome.  The SPC will send its
recommendations to the Executive Committee after the All Hands' Meeting,
and inform the proponents once the recommendations have been accepted by
the Executive Committee.  The successful proposals and the size of
their awards will be posted on the web.

The new USQCD allocations will commence July 1, 2015.

Scientific publications describing calculations carried out with
these awards should acknowledge the use of USQCD resources, by including
the following sentence in the Acknowledgments:

"Computations for this work were carried out in part on facilities of the
USQCD Collaboration, which are funded by the Office of Science of the
U.S. Department of Energy."

Projects whose sole source of computing is USQCD should omit the phrase
"in part".

iv) INCITE award CY2015/2016 and zero-priority time at Argonne

Since 2007, USQCD policy has been to apply as a Collaboration for time on
the "leadership-class" computers, installed at Argonne and Oak Ridge
National Laboratories, and allocated through the DOE's INCITE Program
(see http://hpc.science.doe.gov/).  The first two successful three-year
INCITE grant periods ended 12/2010 and 12/2013.  A third, three-year
grant proposal was successful and began providing computer time in
CY2014.

For CY2014 USQCD was awarded 240 M BG/Q core-hours on the BG/Q at
Argonne, and 100 M XK7 core-hours on the Cray XK7 at Oak Ridge. For
CY2015 these were reduced to 180 M BG/Q core-hours and 100 M XK7
core-hours and we anticipate receiving a similar, reduced allocation
in CY2016. 

Beginning January 1, 2015, those projects awarded INCITE time during
the 2014 allocation process should have begun to use their INCITE
allocations as rapidly as possible, in order to allow USQCD use of
zero-priority time to begin as quickly as possible. We expect this time
to be consumed quickly, in the first quarter of the year. Thus, there
is no regular INCITE time at ANL available later in the year. The
INCITE time that will be allocated as a result of this Call for
Proposals will begin January 1, 2016, and should be consumed in the
first 3-4 months of 2016.

In addition, we expect to receive in CY2015 zero-priority time on the
BG/Q at Argonne.  Based on previous usage and availability, we will
distribute zero-priority time starting in 2015, as soon as our INCITE
allocation has been consumed and zero-priority time becomes available,
according to the allocations made during last year's (2014) allocation
process.  This is expected to begin in April 2015.  These 2014
allocations for 2015 zero-priority time will complete June 30, 2015.
As part of the current 2015 allocation process, driven by this Call
for Proposals, we will allocate the remainder of the CY2015 ANL
zero-priority time, as well as that which becomes available to USQCD
in the first half of 2016.  As the total amount of time cannot be
reliably estimated, we will allocate percentages of zero-priority
usage. The SPC may readjust these percentage allocations based upon
observed usage. The Oak Ridge facility does not provide a
zero-priority queue.

The usage of the INCITE allocations should be monitored by all PIs
of INCITE projects on the USQCD web page:
        http://www.mcs.anl.gov/~osborn/usqcd-spc/2013-14-mira.html

v) USQCD computing resources.

The Scientific Program Committee will allocate 7200 hours/year to Type A
and Type B proposals.  Of the 8766 hours in an average year, the
facilities are expected to provide 8000 hours of uptime.  We then
reserve 400 hours (i.e., 5%) for each host laboratory's own use,
and another 400 hours for Type C proposals and contingencies.
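
In other words (a Python sketch of the arithmetic above):

    uptime_hours      = 8000   # expected uptime out of 8766 hours/year
    lab_reserve       = 400    # ~5% of uptime for each host laboratory
    type_c_reserve    = 400    # Type C proposals and contingencies
    allocatable_hours = uptime_hours - lab_reserve - type_c_reserve
    print(allocatable_hours)   # 7200 hours/year for Type A and B awards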

==================================
At BNL:

60% of a 1024-node BG/Q rack
  16 cores/node, up to 4 threads per core
  16 GB memory/node
  (50% of a rack owned by USQCD, plus 10% of a BNL rack with time
   donated to USQCD)
      total: 7200*1024*16*0.60 = 70.8 M BG/Q core-hours
                               = 116 M Jpsi-equivalent core-hours

There is no tape storage at BNL for USQCD activities.  We have 100+
TBytes of disk space, which should be ample for users to stage their
calculations to/from BNL, but long-term storage on tape will continue
to be done at FNAL and JLAB.

==================================
At FNAL:

400 node cluster ("Ds")
 Eight-core, quad-socket 2.0 GHz AMD Opteron (Magny-Cours) nodes
 32 cores per node
 64 GB memory/node
 1 Ds core-hour = 1.33 Jpsi-equivalent core-hours
     total: 7200*400*32*1.33 = 122.6 M Jpsi-equivalent core-hours

224 node cluster ("Bc")
 Eight-core, quad-socket 2.8 GHz AMD Opteron (Abu Dhabi) nodes
 32 cores per node
 64 GB memory/node
 1 Bc core-hour = 1.48 Jpsi-equivalent core-hours
     total: 7200*224*32*1.48 = 76.4 M Jpsi-equivalent core-hours

64 node GPU cluster ("Dsg")
 Quad-core, dual-socket Intel E5630 nodes
 48 GB memory/node
 2 GPUs NVIDIA M2050 (Fermi Tesla) per node, GPU rating 1.1
 (128 total GPUs available)
 GPU memory (ECC on) 2.7 GB / GPU
 Each M2050 gpu-hr is equivalent to 1.1 Fermi-gpu-hr
     total: 7200*128*1.1 = 1014 K GPU-hours

314 node cluster ("Pi0")
 Eight-core, dual-socket 2.6 GHz Intel Xeon (Ivy Bridge) nodes
 16 cores per node
 128 GB memory/node
 1 Pi0 core-hour = 3.14 Jpsi-equivalent core-hours
     total: 7200*314*16*3.14 = 113.6 M Jpsi-equivalent core-hours

32 node cluster ("Pi0g")
 Eight-core, dual-socket 2.6 GHz Intel Xeon (Ivy Bridge) nodes
 128 GB memory/node
 4 GPUs NVIDIA K40m (Kepler Tesla) per node, GPU rating 2.6
 (128 total GPUs available)
 GPU memory (ECC on) 11.5 GB/GPU
 Each K40 gpu-hr is equivalent to 2.6 Fermi-gpu-hr
     total: 7200*128*2.6 = 2396 K GPU-hours

These clusters will share about 1000 TBytes of disk space in Lustre file
systems. Tape access is also available.

For further information see http://www.usqcd.org/fnal/
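
The cluster totals above all follow the same pattern: hours/year x
nodes x cores/node x Jpsi conversion factor.  A minimal Python sketch
(the function name is ours, for illustration only):

    def cluster_total_jpsi(hours, nodes, cores_per_node, jpsi_factor):
        """Annual Jpsi-equivalent core-hours delivered by a cluster."""
        return hours * nodes * cores_per_node * jpsi_factor

    # "Ds":  400 nodes, 32 cores/node, 1.33 Jpsi per core-hour
    print(cluster_total_jpsi(7200, 400, 32, 1.33) / 1e6)  # ~122.6 M
    # "Pi0": 314 nodes, 16 cores/node, 3.14 Jpsi per core-hour
    print(cluster_total_jpsi(7200, 314, 16, 3.14) / 1e6)  # ~113.6 M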

==================================
At JLAB:

320 node cluster ("9q")
  Quad-core, dual-processor  2.4 GHz Intel Nehalem nodes
  8 cores per node
  24 GB memory/node, QDR IB fabric in partitions of up to 128 nodes
  1 9q core-hour  =  2.2 Jpsi-equivalent core-hours
      total: 7200*320*8*2.2 =   40.5 M  Jpsi-equivalent core-hours

192 node cluster ("10q")
  Quad-core, dual-processor  2.53 GHz Intel Westmere nodes
  8 cores per node
  24 GB memory/node, QDR IB fabric in partitions of 32 nodes
  1 10q core-hour =  2.3 Jpsi-equivalent core-hours
      total: 7200*192*8*2.3 =   25.4 M  Jpsi-equivalent core-hours

276 node cluster ("12s")
  Eight-core, dual processor Intel Sandy Bridge nodes
  16 cores per node
  32 GB memory/node
  QDR network card, with full bi-sectional bandwidth network fabric
  1 12s core-hour = 2.3 Jpsi-equivalent core-hours
      total: 7200*276*16*2.3 =  73.1 M  Jpsi-equivalent core hours 

147 node GPU cluster at JLab  ("9g", "10g", "12k")
  32 nodes with 4 NVIDIA C2050/M2050 (Fermi Tesla) GPUs = 128
  18 nodes with 4 GTX-480 (Fermi gamer) GPUs (1 GTX-480 = 1.55 C2050) = 111.6
  23 nodes with 4 GTX-580 (Fermi gamer) GPUs (1 GTX-580 = 1.70 C2050) = 156.4
  32 nodes with 2 dual GTX-690 (Fermi gamer) GPUs
     (1 GTX-690 = 2 @ 1.80 C2050) = 230.4
  42 nodes with 4 NVIDIA K20m (Kepler Tesla) GPUs (1 K20m = 2 C2050) = 336
  (588 total GPUs available; in C2050 units -> 962 GPUs total)
      total: 7200*962 = 6.9 M GPU-hours
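
The C2050-unit bookkeeping above can be reproduced directly (a Python
sketch; the variable names are ours):

    gpu_groups = [
        # (nodes, GPUs per node, C2050 rating per GPU)
        (32, 4, 1.00),  # C2050/M2050 (Fermi Tesla)
        (18, 4, 1.55),  # GTX-480 (Fermi gamer)
        (23, 4, 1.70),  # GTX-580 (Fermi gamer)
        (32, 4, 1.80),  # dual GTX-690: 2 boards x 2 GPUs, 1.80 each
        (42, 4, 2.00),  # K20m (Kepler Tesla)
    ]
    c2050_units = sum(n * g * r for n, g, r in gpu_groups)
    print(c2050_units)         # ~962 C2050-equivalent GPUs
    print(7200 * c2050_units)  # ~6.9 M GPU-hours per year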

12 node MIC cluster at JLab  ("12m")
  12 node cluster equipped with 4 Intel Xeon Phi 5110P MICs
  (48 total MICs available)
      total: 7200*48  = 345 K MIC hours

For further information see also http://lqcd.jlab.org.
Machine descriptions can be found at

https://wiki.jlab.org/cc/external/wiki/index.php/New_Users_Start_Here

At JLAB, the systems will have access to about 1 PByte of disk space.
Tape access is also available.

==============================================================
Resource estimates

Based upon the performance conversions used above, the total
resources available in this call are shown below. The INCITE time is
based on an estimated allocation for CY2016 and does not include time
from CY2015.

BG/Q (BNL):              116 M Jpsi ->  16 TF-yr
Clusters (FNAL):         312 M Jpsi ->  43 TF-yr
Clusters (JLab):         139 M Jpsi ->  19 TF-yr
Fermi GPUs (FNAL):        75 M Jpsi ->  10 TF-yr
Tesla GPUs (FNAL):       283 M Jpsi ->  39 TF-yr
Fermi GPUs (JLab):       566 M Jpsi ->  79 TF-yr
Intel MIC (JLab):         56 M Jpsi ->   8 TF-yr
Total (USQCD):          1547 M Jpsi -> 216 TF-yr


OLCF (GPU):              100 M Jpsi ->  14 TF-yr
ALCF:                    293 M Jpsi ->  41 TF-yr
Total (INCITE):          393 M Jpsi ->  55 TF-yr
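
The TF-yr column follows from 1 Jpsi core-hour = 1.22 GFlop/sec-hour
and 8766 hours per year. A Python sketch of the conversion (the
function name is ours; small differences from the table are rounding):

    def jpsi_to_tflop_years(jpsi_core_hours):
        """Sustained TFlop/sec over one year for a Jpsi-equivalent total."""
        gflop_sec_hours = 1.22 * jpsi_core_hours
        return gflop_sec_hours / 8766 / 1000  # yearly average; GF -> TF

    print(jpsi_to_tflop_years(116e6))   # ~16 TF-yr  (BNL BG/Q)
    print(jpsi_to_tflop_years(1547e6))  # ~215 TF-yr (USQCD total)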

--------------------------------------------------------------------------------

Anna Hasenfratz                               |  Professor of Physics
Phone: 303-492-6972                           |  Fax: 303-492-5119
University of Colorado, Boulder 80309-390
