[rc5] Random keyblocks

Richard Freeman rfreeman at netaxs.com
Tue Aug 26 14:09:56 EDT 1997


On Tue, 26 Aug 1997, Eric Gindrup wrote:

> 
>         Perhaps better would be to hash the "block-space" so that the 
>      hashed sequence of blocks would be random enough to wander all over 
>      the keyspace.  Then percentages completed could be reported with 
>      incoming blocks to the client.  So a random block could be generated 
>      from the last x% of the hashed keyspace where x is the unchecked 
>      percentage.
>         There would always be some marginal percentage that a random block 
>      was wasted, but that percentage could be controlled by limiting the 
>      interval from which the proxies are assigning blocks under the 
>      assumption that the percentage complete indicator is updated 
>      occasionally at the client.

Uh - I don't see any advantage in hashing the key order - it will only
make coordinating with other efforts more difficult (as in Cyberian
telling us "How about we swap info as to what has already been done so we
don't waste time...")...  It certainly won't make coordinating the
keyspace any easier.
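
For reference, here is roughly how I read the scheme (a sketch of my
own, not code from any real client or proxy; the block count, multiplier,
and function name are all made up): the proxy still walks block indices
sequentially, but pushes each index through a fixed bijection so that the
blocks actually handed out wander all over the keyspace, and "percent
complete" is just how far along the sequence you are.

    /* Sketch only -- hypothetical constants and names.  With a
     * power-of-two block count, multiplying the sequence index by any
     * odd constant (mod the block count) is a bijection, so every
     * block is still handed out exactly once, just in scrambled order. */
    #define NUM_BLOCKS  (1UL << 28)    /* hypothetical block count     */
    #define MULTIPLIER  0x9E3779B9UL   /* odd, so invertible mod 2^28  */

    unsigned long hashed_block(unsigned long seq_index)
    {
        /* seq_index counts 0..NUM_BLOCKS-1 in assignment order;
           the result is the block actually handed out */
        return (seq_index * MULTIPLIER) & (NUM_BLOCKS - 1UL);
    }

Even granting that something like this works, it buys us nothing for
coordination: two efforts comparing notes now have to agree on the
permutation as well as on which ranges are done.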

>      
>         Also, clients aren't expected to connect to get tardy blocks.  
>      Tardy blocks are expected to be assigned infrequently with a regular 
>      block.  So, if that client ever can't connect to get more blocks, it 
>      runs through its tardy blocks (or, equivalently, "highly useful 
>      'random' blocks") and then generates random blocks until it can report 
>      all completed work to a proxy.

Instead of buffering tardy blocks, why not just add the same number of
extra buffer slots to the main cracking engine?  It is just as easy to
download a "guaranteed-untested" block as a tardy one.  Plus, when the
person who left his computer off for a month-long vacation comes back, he
won't be wasting CPU time processing his now-worthless buffered blocks.

>         The overhead of transmitting or storing the occasional tardy block 
>      is very low.  Furthermore, if one were concerned that a fast client 
>      with a reliable connection were to collect too many tardy blocks, then 
>      perhaps the client should have (yet another) option to limit the 
>      number of tardy blocks that it will collect.  Either it can work on 
>      them when the tardy limit is reached, or it can refuse to accept new 
>      tardy blocks.  A refused tardy block would just become tardy again and 
>      be reassigned to someone else.
>         The idea is that tardy blocks are assigned when the connection to 
>      the proxy is working and are only worked on when that connection is 
>      bad.  This increases the utility of the offline work over that of 
>      purely random block generation.
>      

Again, this is the whole reason that we buffer keys in the first place -
so that there is something to work on when the connection is bad.  The
only reason you would generate random keys at all is that you cannot make
the buffer infinitely large (for obvious reasons)...  Instead of having
clients keep track of 200 active keys and 50 tardy ones, it is at least
as efficient, and possibly more efficient, to have them store 250 active
ones...  This number can be set to any reasonable value, and it is a lot
easier to change in the code than adding some secondary method of
key distribution...
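
To make that concrete, here is a rough sketch (hypothetical names and
sizes, not the actual client source) of what "just make the buffer
bigger" means - the 250 is one constant, not a second distribution
mechanism:

    /* Sketch only -- BUFFER_SIZE, keyblock_t, etc. are made-up names.
     * One flat buffer replaces "200 active + 50 tardy". */
    #include <stddef.h>

    #define BUFFER_SIZE 250   /* instead of 200 active + 50 tardy */

    typedef struct { unsigned long lo, hi; } keyblock_t;

    static keyblock_t buffer[BUFFER_SIZE];
    static size_t     buffered = 0;

    /* fill while the connection is up */
    int buffer_add(keyblock_t blk)
    {
        if (buffered >= BUFFER_SIZE)
            return 0;
        buffer[buffered++] = blk;
        return 1;
    }

    /* drain while the connection is down (or any time) */
    int buffer_take(keyblock_t *out)
    {
        if (buffered == 0)
            return 0;
        *out = buffer[--buffered];
        return 1;
    }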

>         Finally, the only reason I can see for supporting purely random 
>      block generation is concern that clients are improperly reporting 
>      completion of blocks.  Perhaps a rogue client or some such is 
>      misreporting block completions.  The random blocks would thus serve as 
>      a check that reported blocks actually don't contain the desired key.  
>      Simplicity of coding is also an issue in random block generation, but 
>      the simplest design is just to exit.

You have to admit that random keys get more work done than the "just
exit" alternative.  Plus, when the connection resumes, the "just exit"
alternative will be very costly, since nothing is left running to notice
that the network is back...  Even if you were to handle tardy keys,
imagine a prolonged period of bad connection: eventually you run out of
tardy keys, and then you either waste time or generate random keys.
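
Continuing the sketch above (again, the names are hypothetical), the
fallback order I am arguing for is just: take a buffered,
guaranteed-untested block if there is one, otherwise generate a random
block rather than exiting:

    /* Sketch only, building on the buffer sketch above. */
    #include <stdlib.h>

    typedef struct { unsigned long lo, hi; } keyblock_t;  /* as above */
    int buffer_take(keyblock_t *out);                     /* as above */

    keyblock_t next_block(void)
    {
        keyblock_t blk;
        if (buffer_take(&blk))
            return blk;          /* normal case: real, untested work */
        /* prolonged outage and the buffer is empty: a random block
           beats sitting idle, even if it may duplicate someone else */
        blk.lo = (unsigned long)rand();
        blk.hi = (unsigned long)rand();
        return blk;
    }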

Is this argument really worth worrying about anyway?  Does anyone simply
leave their computer running for days on end crunching random keys?  If
so, I think the best idea is simply to increase the buffer size (which
should be a trivial change)...

-----------------------------------------------------------------
Richard T. Freeman <rfreeman at netaxs.com> - finger for pgp key
3D CB AF BD FF E8 0B 10 4E 09 27 00 8D 27 E1 93 
http://www.netaxs.com/~rfreeman - ftp.netaxs.com/people/rfreeman



