[Hardware] Notes... The case for an open client

bovine at distributed.net bovine at distributed.net
Mon Aug 16 05:24:35 EDT 2004

> > If as you say the key generation is in effect a purely random key,
> > then it doesn't matter in what order the key space is searched. In
> > fact, if I am to search independently of d.net, there is a darn good
> > reason to use a different search algorithm so that I don't spend the
> > next 10 years operating in d.net's hidden wake.
> Not really. D.net is currently working on the CA:00000000:00000000
> block, I believe (I don't run RC5 much so I wouldn't know exactly).
> You can see by running the client and downloading an RC5 block. How
> can it be 'hidden' if they tell you exactly which block is being
> searched?

A competing effort would indeed be better off by ensuring that the blocks it
searches do not overlap with ones that have already been checked by
distributed.net (since you can assume that if a solution were found by
d.net, it would have been detected and reported already).

The key numbers displayed by the client are an obvious indicator of our
current position in the keyspace.  If one were to choose a linearly
incrementing block selection algorithm, it would simply be a matter of
starting off at the near-opposite side of the keyspace to ensure maximum
separation and minimal chance of eventual overlap.  With a randomly
incrementing block selection algorithm, you would have to keep track of the
ranges you want to avoid and pick another block whenever you land in a range
that should be excluded.
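The random-selection case above can be sketched as a simple rejection loop. This is only an illustration under assumed parameters (a 2^72 keyspace and 2^28-key blocks, as RC5-72 and dnetc use); the function names are hypothetical, not distributed.net code.

```python
import random

# Assumed parameters: 2^72 keys total, 2^28 keys per block (illustrative).
KEYSPACE_BITS = 72
BLOCK_BITS = 28
NUM_BLOCKS = 1 << (KEYSPACE_BITS - BLOCK_BITS)

def pick_block(excluded_ranges):
    """Pick a random block index, rejecting any that falls inside an
    excluded [start, end) range already covered by another effort."""
    while True:
        block = random.randrange(NUM_BLOCKS)
        if not any(start <= block < end for start, end in excluded_ranges):
            return block

# e.g. exclude a low region of the block space already searched elsewhere
excluded = [(0, NUM_BLOCKS // 4)]
b = pick_block(excluded)
assert not any(s <= b < e for s, e in excluded)
```

The rejection loop is cheap as long as the excluded region is a small fraction of the keyspace; a linear scheme starting at the opposite end needs no bookkeeping at all.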

One thing to keep in mind is that for previous projects such as the 56-bit
DES contests, distributed.net had to employ a bitslicing algorithm, which
caused the keyspace to be incremented in a non-linear way even though the
hexadecimal block numbers displayed by the clients appeared to be linear.
Similarly, when the DES success indicator was sent by the winning dnetc
client, the block id it included had to be "translated" into the normal DES
key format using the proper bit ordering and parity bits.  This aspect would
have made it very difficult for a competing effort to successfully avoid the
"d.net wake" unless it were also using the same bitslicing algorithm.
(Fortunately RC5 is not that way.)

In the final DES challenge, distributed.net cooperated with EFF's Deep
Crack dedicated-hardware implementation so that the two efforts could
process the keyspace simultaneously.  Due to the bitslicing restriction
above, special considerations had to be made in order to assign it regions
of keyspace that it could process independently without stepping on regions
that distributed.net would be sending out to dnetc clients.  As it turned
out, the smallest division of the keyspace that could be made was 1/32nd of
it (2^50 keys), since the most significant 5 bits were the only ones that
were still "linear" in the bitslicing view.  (The DES keyspace was
effectively 2^55, not 2^56, due to an optimization that both d.net and Deep
Crack employed which allowed each key and its complement to be checked
together.)
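The arithmetic above works out as follows; the DES complementation property (if C = E_K(P) then ~C = E_{~K}(~P)) is what lets each trial cover a key and its complement. The constant names here are mine, purely for illustration.

```python
# Complement-key optimization: each trial covers a key and its bitwise
# complement, so only half of the 2^56 DES keys need testing.
EFFECTIVE_KEYSPACE = 1 << 55

# Only the top 5 bits stayed "linear" under bitslicing, so the smallest
# independently assignable unit was 1/32nd of the effective keyspace.
MACROSPACES = 1 << 5
KEYS_PER_MACROSPACE = EFFECTIVE_KEYSPACE // MACROSPACES

assert KEYS_PER_MACROSPACE == 1 << 50  # 2^50 keys per macrospace
```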

I rigged up a special "client" library for EFF to hook up to the Deep Crack
driver software, and set up a special network socket interface to the
keymaster to allow those 32 "macrospaces" to be requested and processed by
it.  The keymaster was simultaneously sending out the small 2^28 blocks to
the normal dnetc clients, using blocks from "macrospaces" that had not yet
been assigned to Deep Crack.
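A keymaster avoiding Deep Crack's macrospaces could be sketched like this. This is a guess at the layout (block id split into a 5-bit macrospace number plus an offset), not the actual keymaster code, and all names are hypothetical.

```python
# Assumed layout: 2^50-key macrospaces divided into 2^28-key blocks.
BLOCK_KEYS = 1 << 28
MACROSPACE_KEYS = 1 << 50
BLOCKS_PER_MACROSPACE = MACROSPACE_KEYS // BLOCK_KEYS  # 2^22 blocks

def macrospace_of(block_id):
    """Which of the 32 macrospaces a block id falls into (assumed layout)."""
    return block_id // BLOCKS_PER_MACROSPACE

def ok_for_dnetc(block_id, deep_crack_macrospaces):
    """A block may go to a dnetc client only if its macrospace has not
    been handed to the dedicated hardware."""
    return macrospace_of(block_id) not in deep_crack_macrospaces

hardware = {0, 1, 2}  # hypothetical macrospaces assigned to Deep Crack
assert not ok_for_dnetc(0, hardware)
assert ok_for_dnetc(BLOCKS_PER_MACROSPACE * 5, hardware)
```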

So as you can see, there are definitely opportunities for cooperation
between efforts, even if it is simply to act as a registrar and allow bulk
partitioning of keyspace to avoid collisions.

> Searching the keyspace with two algorithms is no better than sticking
> to one (Proof: you have f(x) and g(x), two algorithms to decide on
> keys to search. Define j(x) = f((x+1)/2) for odd x, j(x) = g(x/2) for
> even x. Now we have one algorithm that is equivalent to trying two
> algorithms at a time).
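The interleaving argument quoted above can be made concrete: two stand-in selection functions f and g are merged into one function j that alternates between them, so the combined "two algorithms" are just one algorithm. The f and g bodies here are arbitrary placeholders.

```python
def f(x):
    return 2 * x        # stand-in for one key-selection rule

def g(x):
    return 2 * x + 1    # stand-in for a different rule

def j(x):
    # Exactly as in the quoted proof: odd x -> f((x+1)/2), even x -> g(x/2)
    return f((x + 1) // 2) if x % 2 else g(x // 2)

# j enumerates precisely the union of what f and g enumerate
picked = [j(x) for x in range(1, 7)]   # f(1), g(1), f(2), g(2), f(3), g(3)
```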

I agree with your conclusion.

Although not a proof, I'll also mention some practical benefits of linear
searching over random, non-linear assignment methods.  One technique that
distributed.net used to employ was an occasional "random jump" to another
location in the keyspace every few hours, after which we would send out any
unsent blocks from that region until the next jump.  Once we realized that
there was no real technical benefit to the jumps, we discontinued them.

Jumping around actually just made things less efficient on the backend,
since keeping track of "fragmented" keyspace requires us to keep more disk
files around.  (In theory, if we were short on disk space we could fully
"resolve" subspaces by recycling them until they were 100% checked before
starting on additional subspaces, but in practice the price of storage
allowed us to grow when necessary for RC5-64 and save recycling until the
very end.  That probably won't be practical for RC5-72 or future OGRs.)

Also, as the remaining space got increasingly fragmented, it took more and
more table searching/seeking time to find the next suitable location to
randomly "jump" to that had not already been distributed.  Eliminating the
random jumping also makes working with other competing efforts friendlier,
since it allows each effort to easily exclude the keyspace of the other.

Now admittedly, some of these storage/search limitations are due to the
fact that distributed.net has to track blocks that are assigned but never
returned.  With a dedicated hardware solution (even one using a truly
random block selection algorithm), you can presumably offer a higher level
of service and guarantee that you won't allow too many blocks to remain
"outstanding" for too long before reissuing them, thus limiting the amount
of tracking storage you'd need.
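That reissue-after-timeout idea can be sketched with a small priority queue: expired outstanding blocks are reissued before fresh ones, so tracking storage stays bounded. This is a hedged illustration under assumed names and timeout, not how distributed.net's keymaster actually works.

```python
import heapq
import time

class BlockTracker:
    """Issue sequential blocks; reissue any block still outstanding
    past its deadline (all names and the timeout are assumptions)."""

    def __init__(self, timeout=3600.0):
        self.timeout = timeout
        self.outstanding = []   # min-heap of (deadline, block_id)
        self.next_fresh = 0
        self.done = set()

    def issue(self, now=None):
        now = time.time() if now is None else now
        # Reissue the stalest expired, still-unfinished block first.
        while self.outstanding and self.outstanding[0][0] <= now:
            _, block = heapq.heappop(self.outstanding)
            if block not in self.done:
                heapq.heappush(self.outstanding, (now + self.timeout, block))
                return block
        block = self.next_fresh
        self.next_fresh += 1
        heapq.heappush(self.outstanding, (now + self.timeout, block))
        return block

    def complete(self, block):
        self.done.add(block)
```

With a short timeout, at most a bounded window of block ids is ever outstanding, which is the storage bound described above.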

