[RC5] Paying for blocks

Basil A. Daoust basildaoust at home.com
Wed Feb 16 22:13:37 EST 2000


You're all correct: the current projects would not be able to pay everyone
who contributed.  But could such a plan work for a project like, say, OGR
if it received large cash grants from the government to do the work?

Wherever the income comes from, it would need to be appropriate.
Large supercomputers cost real money to use.  And if I can believe what I read
about a 1,152-processor SP frame, we are way bigger than it is.

It ran some of our work back around Christmas.  Check this out from
comp.parallel:

We can do real work, and lots of it.  
Basil

---------------------------------------------------------------------------------

NPACI'S TERAFLOPS IBM SP ACCEPTED BY SDSC AFTER TESTING ON SCIENTIFIC
APPLICATIONS, INCLUDING THOUSAND-PROCESSOR JOBS

Contact: David Hart, SDSC, dhart at sdsc.edu, 858-534-8314
Jeffrey Gluck, IBM, jgluck at us.ibm.com, 914-766-3839

SAN DIEGO, CA - The 1,152-processor IBM RS/6000 SP system at the San
Diego Supercomputer Center (SDSC) was officially accepted by SDSC
management December 30 after successfully completing a battery of
tests that demonstrated stable operation, good performance, and high
throughput. The test results show that the new machine will provide
the capability to solve problems in days that typically require
weeks, months, or years on smaller machines.

"This is a tribute to the terrific teamwork between IBM and SDSC
staff who worked throughout the holidays to install the system," said
Sid Karin, director of SDSC and the National Partnership for Advanced
Computational Infrastructure (NPACI). "IBM's high-level attention and
commitment to problem resolution will enable us to make the machine
available to researchers tackling large scientific problems early
this year."

The IBM SP computer, installed for NPACI at SDSC, has a peak speed of
one teraflops -- a trillion floating-point operations per second --
and is the most powerful computer available to the U.S. academic community for
unclassified research. The machine is ranked tenth on the list of Top
500 Supercomputer Sites (http://www.top500.org/) maintained by the
University of Tennessee and the University of Mannheim.
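
As a rough back-of-the-envelope check (mine, not SDSC's), the quoted
aggregate peak works out to roughly 868 megaflops per processor:

    # Implied per-processor peak from the figures quoted above
    # (assumption: the "one teraflops" figure is taken at face value).
    PEAK_FLOPS = 1.0e12   # aggregate peak, as quoted
    PROCESSORS = 1152

    per_processor = PEAK_FLOPS / PROCESSORS
    print(f"Implied peak per processor: {per_processor / 1e6:.0f} MFLOPS")
    # ~868 MFLOPS; the release itself does not state a per-CPU figure.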

"This is a milestone achievement in IBM's rapidly evolving
partnership with the team at San Diego Supercomputer Center," said
Michael J. Henesey, worldwide sales and marketing, IBM RS/6000
Scientific and Technical Computing. "The significance here is the
massive computational capability now available in an unclassified
environment. This gives the research community the tool needed for
breakthroughs in areas such as climate modeling, mapping and modeling
the human brain, and genomic research. At IBM, we're proud to be
delivering on this commitment and also very focused on developing the
next-generation systems that will attempt to satisfy the scientific
community's insatiable desire for insight."

The highlight of the cooperative IBM and SDSC effort was the
discovery and resolution of a problem in the mapping of memory to
cache that led to excessive variation in program run-times. Working
together, IBM and SDSC improved the cache management for large
systems using Power3 SMP High Nodes. The operating system patch will
be included in the next release of AIX.

As part of the acceptance tests, performance of four scientific
applications on both the teraflops SP and NPACI's current production
SP was compared for various numbers of processors from 1 to 128. The
applications included AMBER for molecular dynamics; GAMESS for
quantum chemistry; and PARTREE and SCF for astrophysics. In all
cases, these applications ran 1.11 to 1.92 times faster on the
teraflops SP.

One application, SCF, was also run on all 1,152 processors and showed
good scaling over a large processor range. Using a fixed problem size
per processor, the run-time on 1,152 processors was only 1.5 times
the run-time on two processors.
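
One way to read that figure, taking the two-processor run as the
baseline for this weak-scaling test:

    # Weak-scaling efficiency of the SCF run, per the numbers above.
    # Assumption: the 2-processor time is the baseline; total problem
    # size grows with processor count, as the release describes.
    t_base = 1.0            # normalized run-time on 2 processors
    t_full = 1.5 * t_base   # run-time on 1,152 processors (1.5x)

    efficiency = t_base / t_full
    print(f"Weak-scaling efficiency vs. 2 processors: {efficiency:.0%}")
    # ~67%: run-time grew only 50% while processor count (and total
    # problem size) grew by a factor of 576.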

One further test demonstrated high node throughput. Over four
consecutive days in December, the system's 144 compute nodes were in
use more than 86% of the time, peaking at 95.7% usage on Christmas
Day.

Also during acceptance testing, the system was unofficially tested as
a participant in distributed.net's RC5-64 code-breaking challenge
(http://distributed.net/rc5/). While participating in force, the
machine placed among the top five daily participants for a week. (See
accompanying release.)

NPACI's IBM SP teraflops system at SDSC is the nation's most powerful
computer system dedicated to unclassified research by qualified
academic, government, and industry scientists and engineers.
Allocations of time on the new IBM SP system will be made through
national peer review, with preference given to problems that take
advantage of the machine's unique capability. See
http://www.npaci.edu/Allocations/ for more information.

The National Partnership for Advanced Computational Infrastructure
(NPACI) unites 46 universities and research institutions to build the
computational environment for tomorrow's scientific discovery. Led by
UC San Diego and the San Diego Supercomputer Center (SDSC), NPACI is
funded by the National Science Foundation's Partnerships for Advanced
Computational Infrastructure (PACI) program and receives additional
support from the State and University of California, other government
agencies, and partner institutions. The NSF PACI program also
supports the National Computational Science Alliance. For additional
information about NPACI, see http://www.npaci.edu/, or contact David
Hart at SDSC, 858-534-8314, dhart at sdsc.edu.

# # #



FOR IMMEDIATE RELEASE
January 5, 2000

SDSC'S NEW TERAFLOPS IBM SP SUPERCOMPUTER CLIMBS TO TOP RANKS OF
RC5-64 CODE BREAKING CHALLENGERS

Contact:
David Hart, SDSC, dhart at sdsc.edu, 858-534-8314

UC San Diego - During the acceptance testing of the world's tenth
fastest computer, a 1,152-processor IBM SP at the San Diego
Supercomputer Center (SDSC), the machine demonstrated its computing
power by climbing to the top ranks of participants in
distributed.net's RC5-64 code breaking challenge
(http://distributed.net/rc5/). The IBM SP held the top position in
this monumental computing task for several days during the December
holidays.

"We decided to give the processors a workout," said Jeff Makey, a
computer scientist at SDSC, a research unit of the University of
California, San Diego, and the leading-edge site for the National
Partnership for Advanced Computational Infrastructure (NPACI). "While
we were running diagnostics and benchmark programs on the teraflops
machine, many of the individual processors weren't being used. The
RC5-64 Challenge software lets us test the rest of the system's
processors at the same time."

From December 20, when it entered the contest in force, until
December 27, when the full machine was needed for acceptance testing,
the IBM SP consistently placed among the top five participants in the
number of potential keys examined
(http://stats.distributed.net/rc5-64/psummary.php3?id=243289).
Ironically, on the machine's best day, December 24, it only placed
fifth. Four other participants chose that day to submit results they
had been accumulating for weeks or months.

Highly ranked participants are typically teams that represent the
combined efforts of many individual computer systems. Other
participants reach the top of the daily list by submitting the
results of weeks or months of computation. The teraflops IBM SP is
one of the few single computers among the high-ranking participants.

The RC5-64 challenge involves testing the 2^64 possible encryption
keys -- more than 18 billion billion possible combinations -- to find
the one that properly deciphers the encoded message. Testing every
possible 64-bit key requires an enormous amount of computing power.
Tens of thousands of participants have been working on the problem
since March 1997, donating the spare CPU cycles from computers
ranging from mainframes to personal computers.

Each of the 1,152 Power3 processors in the teraflops IBM SP peaks at
736,000 RC5 keys per second, which gives an overall peak rate of 848
million keys per second. If the teraflops system could run at this
speed for 24 hours it could test more than 73 trillion keys per day.
On its best day to date, December 24, the machine tested nearly 60
trillion keys.
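
Those figures check out, and they also put the size of the full 2^64
keyspace in perspective (the time-to-exhaust estimate below is my own
extrapolation, not part of the release):

    # Reproducing the key-rate arithmetic quoted above, plus an
    # extrapolation (assumption: sustained peak rate around the clock).
    KEYS_PER_CPU_PER_SEC = 736_000
    PROCESSORS = 1152
    SECONDS_PER_DAY = 86_400
    KEYSPACE = 2**64                      # ~18.4 billion billion keys

    rate = KEYS_PER_CPU_PER_SEC * PROCESSORS       # ~848 million keys/s
    per_day = rate * SECONDS_PER_DAY               # ~73.3 trillion keys/day
    years_to_exhaust = KEYSPACE / per_day / 365

    print(f"Aggregate rate: {rate / 1e6:.0f} million keys/s")
    print(f"Keys per day:   {per_day / 1e12:.1f} trillion")
    print(f"Years to sweep the whole keyspace alone: ~{years_to_exhaust:.0f}")
    # ~690 years for this machine by itself; on average the correct key
    # turns up after searching about half the keyspace.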

NPACI's IBM SP teraflops system at SDSC is the nation's most powerful
computer system dedicated to unclassified research by qualified
academic, government, and industry scientists and engineers.
Allocations of time on the new IBM SP system will be made through
national peer review, with preference given to problems that take
advantage of the machine's unique capability. See
http://www.npaci.edu/Allocations/ for more information.

The San Diego Supercomputer Center (http://www.sdsc.edu/) is a
research unit of the University of California, San Diego, and the
leading-edge site of the National Partnership for Advanced
Computational Infrastructure (http://www.npaci.edu/). SDSC is funded
by the National Science Foundation through NPACI and by other federal
agencies, the State and University of California, and private
organizations. For additional information about SDSC, NPACI, and the
IBM SP teraflops system, see http://www.sdsc.edu or contact David
Hart, dhart at sdsc.edu, 858-534-8314.

# # #


Articles to bigrigg+parallel at cs.cmu.edu (Administrative: bigrigg at cs.cmu.edu)
Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel

-- 
Join the ProcessTree Network: For-pay Internet distributed processing.
http://www.ProcessTree.com/?sponsor=4599
