[RC5] [RC5-Mac] the DEATH of d.net?
gindrup at okway.okstate.edu
Mon Jul 27 16:39:53 EDT 1998
There are several reasons to continue the D.Net project roughly as it is.
First, an unworthy barb.
Sun has, for the past several years, made a habit of announcing
"revolutionary" stuff that, although useful, hasn't changed the face
of the corporate desktop. Jini will have a very real problem shared
by Java -- management sees that independent, capable machines can
keep running and producing useful work when the network is down. NCs
and thin Jini clients can't do this.
Now, the useful arguments.
RC5-56 was originally chosen by D.Net because it was the smallest
untried challenge problem and because DESCHALL was ending. D.Net
got its first large influx of participation from the participants in
the DESCHALL effort. (Check out the stats for a visual
demonstration of this fact.) When RC5-56 ended last
October, v3 wasn't done. (It was promised for around Christmas, but
we won't go there.)
In order to simplify the creation of the proxy network and the
clients, it was decided to migrate the effort to RC5-64. The
proxies barely changed since they were already handing around 64-bit
values (ignoring the high-order 8 bits). RC5-64 was chosen so that
the existing user base could be maintained until v3 was done.
Then, two months later, just before Christmas 1997, RSA announced
the DES-II challenges. All efforts went to dual cores and (AFAIK)
v3 design stopped. Between DES-II-1 and DES-II-2, this didn't
appear to change.
However, this is not to say that RC5-64 was chosen for any of the
reasons you mentioned. RC5-64 was chosen so as to retain a presence
in the minds of most of the participants in RC5-56 until the next
"high visibility" contest or until v3. The DES-II contests have
been the intended high visibility contests.
So, has there been a benefit from maintaining this presence on the
clients? Yes. D.Net is vastly outperforming Moore's Law. You
state a version of this law, but don't do the obvious comparison.
D.Net, after any 6-month period, has 2.6 times the cracking power
it had at the start. Moore's Law gives transistor density
doubling (at constant cost) every 18 months. Recent examinations of
this "Law" have shown that, for "normal computing', computer
*systems* are doubling in utility (at constant cost) every nine
months. Not all the speed-up is in the processor and chipset.
Still, in 18 months D.Net will be cracking 17.6 times as many keys
per second as it is now. Deep Crack will only double. And this is all for the
same costs -- the widely distributed cost of maintaining the D.net
client versus US$250,000.
Further, the exponential growth of the D.Net keyrate is borne out
by repeated "time to complete" extrapolations. The exponential
estimate is very stable and shrinking at around 1 month/month.
You suggest abandoning D.Net and joining another effort. Let's
consider what that will do to distributed computing -- there will be
no standard client interface so each distributed client you run will
be different. Several of the distributed efforts out there have
mentioned designing a D.Net client if v3 ever actually pops into
existence. I tend to think that this level of standardization would
be a good thing. Continuing to support D.Net is at the same time
continuing to support the development of a distributed standard.
D.Net originally had tons of independent motives for joining the effort:
- thumbing your nose at (silly) government policy
- proving that RC5-56 (and 40-bit encryption) was inadequate
- working out the "nuts and bolts" of distributed computing
- getting great stats
- developing a social issue to help geeks have something to discuss
- fighting bloatware
- being a part of something larger
- trying to win a few thousand dollars
- helping a charity
- utilizing otherwise wasted idle time
Are all the reasons currently obviated? I don't think so. And now,
there are more reasons:
- developing a distributed computing standard
- shaming A. Beberg into actually finishing v3, or at least opening it
up to some people who would help get it done
- vastly outperforming Moore's Law
- bringing other distributed efforts into v3 so that all the efforts
can benefit from the D.Net supercomputer
- achieving centralized critical mass -- where a single key/stats
server just can't cope
- proving that no *specific* keylength is adequate for legislated
encryption limits
- maintaining interest in the still-running encryption policy debates
None of these reasons is currently pointless or redundant.
You mention a few applications to be run on Jini and I had to
chuckle a little. One of the reasons that encryption is well-suited
to *any* distributed computing environment is that the amount of
inter-process communication is low. Some of your examples, though,
will *never* be distributed. Multimedia is not distributable on
anything less than Gigabit Ethernet. Sure other high-speed media
will come along, but current connects will never successfully
compete. The successful examples you cite already point to areas
where distributed computing is feasible (ever) -- jobs where the
amount of processing to do requires vastly more time than that to
communicate the problem to the processor. This is where multimedia,
spreadsheets, and modelling will never enter the distributed realm
-- their processing is trivial compared to their vast communications
requirements. Unless the network runs at system bus speeds,
distributing the application will be a loss, not a win.
You say that the speed of D.Net is being challenged by Deep
Crack. Just because the two power curves intersect currently does
not mean that Deep Crack can keep up. If Deep Crack is to keep up
with D.Net, following Moore's Law (since only transistor densities
are relevant to Deep Crack), the EFF will have to fork out
US$400,000 for the next machine (architectural scaling problems will
require enough engineering to make something that keeps up with
D.Net a constant engineering challenge). The next machine after
that will cost US$630,000. This is a year from now, when RC5-64 is
expected to be over. What makes you think *anyone* will throw money
at this problem fast enough to keep up with D.Net? Unless the EFF
hardware becomes considerably more scalable, it will cost a *bundle*
to make a new version to beat us for each contest.
Deep Crack was not faster than D.Net. The D.Net peak speed had
not been reached, but by the end it was matching the average rate of
Deep Crack. Further, if a smarter algorithm is used by D.Net, the
effective keyrate of D.Net will be much higher. Remember also that
D.Net is using brute force while Deep Crack is being mildly more
subtle. Don't think that this minor speed-up is unavailable to us.
Jini does not exist. Pinning hope on this vaporware (v3) or that
vaporware (Jini) does nothing but set one up for disappointment.
D.Net is practical *right now*. You can do it *right now*. Jini is
a neat idea that may never make it into the real world.
Further, there are *many* people who need encryption to do their
daily business. RSA is an obvious candidate. Banks are obvious
candidates. These people need the ability to encrypt to do their
work. They need good encryption and they need strong encryption.
If running the D.Net client on their machine can help them obtain
better encryption, then there is a good economic motive to do so.
Next, you mention some other distributed contests to join in lieu
of D.Net. I have something to say about almost all of them. Almost
all of the other projects have open ends. You point at the one or
two year completion time for RC5-64, but what does the completion
time look like for the competition?
SETI hasn't started. But, then again, when it does, how long
will it run? People have complained about the 2.1-year max time to
complete RC5-64. What makes anyone think there will be any results
from SETI in two years even if *all* the computer hardware on the
planet were working on it? What if there's nothing to find?
Mersenne Primes (GIMPS) is even more open-ended. It's *more*
esoteric than cracking encryption for money. And the minimal
hardware requirements are (or at least were) rather steep.
Golomb Rulers required manual fetching and flushing and had just
started automatic internet fetching and flushing when I gave up
(because they couldn't keep keys in their out bin). OGR is
open-ended because the rulers can just keep getting longer.
NFSNet and ECMNet both have large initial hardware requirements.
Both can run until the Universe has died its heat death.
Finally, you mention the waste of running the client. What waste?
The *whole idea* is that this is time that would otherwise be
wasted. DCTI is not asking anyone to leave their machines on any
more than usual. They aren't asking people to do less work on their
machines so that the client can get more cycles. They are asking
people to download a client that will utilize their otherwise idle
cycles. These days, the only machines that reduce power consumption to the
processor are notebooks and *I* don't think it's all that wise to
run the client on such a machine. The machine isn't designed to do
that sort of continuous work.
But anyway. There is no more wasted electricity, unless you
choose to produce more, than if you were not running the client.
Trying to bring waste or this kind of economics into the picture
will fall flat immediately. If you *really* want to save
electricity, raise your and your employer's thermostat 5 degrees F
(about 2.8 degrees C). Turn off your printers over the weekends but
*not* overnight. Turn off your monitor during lunch and at night.
If you're a real IS/IT person, turn off your overhead light.
The power draw to maintain the client is trivial compared to the
usual waste of people.
If we are trying to establish credentials based on our contributions, here are mine:
I run Team Ivory Tower, #637. We did a few thousand blocks in
RC5-56. Our team rank has never been lower than our team number
(except for a day or two after we were created).
I'm ranked about 1000th in RC5-64. I've done around 160,000 blocks.
I was working on RC5-56 when the estimated time to completion was
I've worked on NFSNet, OGR(-21), and GIMPS. I know what I'm
talking about when I say it is *very nice* to know that there *will*
be an end to the current project and series of projects.
It would take D.Net less than 22 minutes to right now recreate
all the work I've ever done for them.
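That 22-minute figure is easy to reproduce. A sketch of the arithmetic, assuming an aggregate network keyrate of roughly 33 billion keys/second and 2^28 keys per block (both figures are my assumptions, not stated above):

```python
# Time for the whole network to redo ~160,000 blocks of RC5-64 work.
# Assumed figures (not stated in the post): aggregate keyrate of
# ~33e9 keys/sec, and 2**28 keys per block.

blocks = 160_000
keys_per_block = 2 ** 28
keyrate = 33e9                        # keys per second (assumption)

seconds = blocks * keys_per_block / keyrate
print(f"{seconds / 60:.1f} minutes")  # just under 22 minutes
```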
So yes. My contribution is insignificant. Your contribution is
insignificant. My team's contribution is insignificant. Your
team's contribution is, er, well, not insignificant, but it could be
reproduced in less than a week. :-)
The whole point of distributing the effort is to collect all
these little efforts and coordinate them into a significant total
effort. You aren't going to overcome this intrinsic property of
distributed efforts by switching to a vapor-Jini or going to another
effort (whether open-ended or not).
Remember, your contribution is *supposed* to be insignificant.
It isn't supposed to be "work". It isn't supposed to require a
change in habits. It's a way to do something (well, very little)
useful with resources that would otherwise be totally wasted.
Anyone can join, not just those with high-end machines. And, there
is a definite and known termination point. When all the keys are
gone, that's it.
-- Eric Gindrup ! gindrup at Okway.okstate.edu
______________________________ Reply Separator _________________________________
Subject: [RC5] [RC5-Mac] the DEATH of d.net?
Author: <rc5 at lists.distributed.net> at SMTP
Date: 7/21/98 10:14 AM
At 3:48 AM -0500 on 7/21/98, Patrick T Kent wrote [on rc5mac]:
> It seems to me that at the moment we are either wasting time trying to
> crack a code for which a super computer now exists for that sole purpose
> and can achieve the results in a faster time than we can do it. Or we
> are working on a project that again seems a complete waste of time in
> that it is unlikely we will ever complete it before beginning a more
> important (HOPEFULLY!!) project. And even if we continue in that, isn't
> it likely the same super computer can do a better job?
Indeed! This question has been bugging me for a little while now. It seems
to me that in the founding days of d.net, the rationale for choosing this
project was that it required resources of a supercomputer-scale nature, and
that it was to show that supercomputer-scale work could be done by those
without supercomputer-scale funds. Well, that was a year or two ago, and now
it seems we may have been overtaken by Moore's Law -- which not only
stipulates the doubling of processor power, but doubling _at the same
price_. Not only more powerful, but cheaper too. When one thinks about
scale, one can see that to upgrade all of distributed.net's processors would
take millions of dollars. It also means that in 18 months or so, Deep Crack
will either be twice as fast, or cost only $125,000. True, it is a hardware
solution that can't be used for RC5 in its present form, but since it is a
public spec (as long as you shell out the dough for their book), Deep Crack
Clones are sure to follow, and possibly hardware designed to crack RC5. If
so, distributed.net doesn't stand a chance against its branch-guessing
algorithms. Likewise, the ramp-up latency problem that allowed Deep Crack to
jump out to such a commanding lead in DES-II-2 will still exist for DES-II-3
and all further projects, even if v3 clients can manage to diminish that
time through intelligent scheduling. Deep Crack will never have this
problem, and though by the time DES-II-3 rolls around we may have, as Adam
has said, 2.6 times as much processing power, if Deep Crack (or "Deep Crack
II: Crack Deeper") gets the same sort of lead in the first day, we will
never beat it.
The other motivating idea behind distributed.net was to promote the
potential of distributed computing, that is, the sharing of processing
power over a network. Not too long ago, Sun introduced its Jini
spec/project/thingy which promises to be the actual realization of this
idea as a practical venture. One reason why code-cracking was chosen to
demonstrate d.net power was that it can be coded and maintained with little
effort (relative to creating commercially viable apps in a business cycle
framework -- no offense to the hard-working folks of d.net). Nobody needs a
code cracking client to do their daily work, but it can be built and run
relatively trouble-free by a handful of motivated volunteers. With the
advent of Jini, however, all sorts of apps will be distributed --
spreadsheets, e-mail, graphics, modeling, database, multimedia, you name
it... but probably not code cracking, because nobody needs it to get their
daily work done. This puts a two-pronged relevancy challenge to d.net: on
the one hand, our point has been proven, and on the other hand, we produce
little of value by our efforts.
Thus, it seems to me that the entire distributed.net project is in danger
of disappearing, and is perched on the horns of this dilemma -- our claim
of "speed through sharing" is being challenged on the speed end by Deep
Crack and on the sharing end by Jini. With these twin challenges, I fear
that d.net will find it harder and harder to gain new recruits, and easier
to lose current participants. I think that perhaps distributed.net needs to
rethink things on a top level, that is, instead of spending our efforts
making faster, more efficient clients to do the same work, we should be
looking for more valuable work. Think of this: the most optimistic
estimates of our RC5-64 project are measured in years. While it's true that
the winning key could be found today, not many of us expect this to happen.
If it does in fact take years to dig out the key, how much satisfaction
will you have derived from it? As Patrick has wisely pointed out, how much
has it cost us in resources -- electricity being the major "waste" -- to
find the winning key? I suppose if I were Adam Beberg, I might see things
differently, but I'm not, and despite the charities, the prize money, and
my own feelings about government encryption policies, I'm seriously
rethinking my commitment to distributed.net.
On a personal note -- my individual stats for RC5-64 have touched the
2100th rank, and on 7/21 my contributions amounted to 93,989 blocks of
keys. I check my stats nearly every day to see if I have gone up a notch or
two. I am a member of Team Evangelist, and it does give me some
satisfaction to see us at the top of the mountain. All the same, I don't
feel like my efforts, my contributions, are doing the world much good.
Unless this feeling changes, I will probably withdraw myself and my
machines from the project soon.
[PS -- I am posting this to both rc5mac and rc5. Anyone else who would like
to forward this to other d.net lists I am not on may do so with my
permission.]
Indiana University Press Journals
To unsubscribe, send 'unsubscribe rc5' to majordomo at lists.distributed.net
rc5-digest subscribers replace rc5 with rc5-digest