[Hardware] Notes... The case for an open client

Elektron elektron_rc5 at yahoo.ca
Mon Aug 16 04:09:43 EDT 2004


On 16 Aug, 2004, at 05:53, jbass at dmsd.com wrote:

>> Take up the challenge, or your arguments are just a lot of talk.
>
> hmmm ... what challenge? Your diversion from my interest to search the
> space 100% non-sequentially just because I have before? You still
> haven't proven that the key was generated randomly, and just assume
> that the deterministic pseudo-random number generator RSA used doesn't
> have any n-order artifacts that might interact with the key generation
> algorithm in a way that searching by run length might yield an
> advantage over sequential?

How can I prove something about an algorithm I don't know about?

I'm not sure if you can prove that a given hash is 'strong'. As far as 
I know, it's only possible to prove that the hash is weak.

> You completely ignored the challenge to prove that a non-sequential
> search was any worse mathematically ... or that the deterministic
> pseudo-random algorithm RSA used was perfectly random and not flawed.

Generating the non-sequential search order has an overhead that you 
cannot avoid. Nor do we search purely sequentially.

No function is 'perfectly random', but this one is probably more random 
than you can compensate for. RC5 itself could serve as a decent one-way 
hash anyway (and, being cryptology experts, they probably used it that 
way).

> I admit my choice is strictly personal preference based on intuition
> that you do not accept ... so stop being god here, demanding that I
> conform to your highly structured point of view.

"Some numbers are more likely because I think they are." Great proposal 
for the project. Now, write the code which generates the search order 
without massive overhead.

> If as you say the key generation is in effect a purely random key, then
> it doesn't matter in what order the key space is searched. In fact, if
> I am to search independently of d.net, there is a darn good reason to
> use a different search algorithm so that I don't spend the next 10
> years operating in d.net's hidden wake.

Not really. D.net is currently working on the CA:00000000:00000000 
block, I believe (I don't run RC5 much, so I wouldn't know exactly). 
You can see this for yourself by running the client and downloading an 
RC5 block. How can the search be 'hidden' if they tell you exactly 
which block is being searched?

If you can find an algorithm with little overhead, then sure. But 
unless it can avoid the keyspace already searched by d.net (by, say, 
checking whether a key falls in the CA block), it's not really worth 
the trouble either.

A sequential search from a random position is just as good as a purely 
random search. The problem is that a purely random search needs a 
2^72-bit bitfield to remember which keys it has visited. You can also 
pick a number coprime to 2^72 (i.e. any odd number) and increment by 
that amount, and you'll eventually cover the whole keyspace. The thing 
is, searching 'nearby' keys is much better, since L[0] and L[1] don't 
have to change, so you can reuse them from the previous key instead of 
recalculating.
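
As a toy illustration (this is not d.net code; a 2^16 keyspace stands 
in for 2^72 so the full cycle can actually be checked), stepping by any 
odd constant modulo a power of two visits every key exactly once, with 
no bitfield needed:

/* Toy sketch: any odd step is coprime to 2^n, so repeated addition
 * mod 2^n is a full cycle over the keyspace. */
#include <stdint.h>
#include <stdio.h>

#define KEYSPACE (1u << 16)

int main(void)
{
    uint32_t start = 0x1234u;          /* random starting position */
    uint32_t step  = 0x9E37u;          /* any odd constant works */
    uint32_t key   = start;
    uint32_t count = 0;

    do {
        /* ... test 'key' here ... */
        key = (key + step) & (KEYSPACE - 1);   /* increment mod 2^16 */
        count++;
    } while (key != start);

    printf("visited %u of %u keys\n", count, KEYSPACE);  /* full coverage */
    return 0;
}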

If you still want to look at run lengths, I think you're just asking 
for a huge overhead. I'm pretty sure that the fastest possible 
algorithm on a PowerPC (using cntlzw) takes, on average, four clocks 
per run (I'm running it in my head, so I may be wrong). Since the 
average run length is around 2, a 72-bit integer has around 36 runs. 
That's 144 cycles, which is almost half the time it takes to process a 
key on an antique 603e, and more than the time it takes on a G4.
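
For reference, here is roughly the loop I'm costing, sketched in C with 
GCC's __builtin_clz standing in for cntlzw (a guess at the fastest 
approach, not measured code). It counts the runs of equal bits in one 
32-bit word; a 72-bit key takes three such words, which is where the 
~36 runs come from:

/* Count runs of equal bits in a 32-bit word using count-leading-zeros. */
#include <stdint.h>
#include <stdio.h>

static int count_runs(uint32_t x)
{
    int runs = 0;
    int remaining = 32;                /* bits of x not yet consumed */
    while (remaining > 0) {
        /* Flip so the leading run becomes zeros; clz measures its length. */
        uint32_t v = (x & 0x80000000u) ? ~x : x;
        int len = (v == 0) ? remaining : __builtin_clz(v);
        if (len > remaining)
            len = remaining;           /* don't count shifted-in zeros */
        runs++;
        x = (len < 32) ? (x << len) : 0;   /* discard the counted run */
        remaining -= len;
    }
    return runs;
}

int main(void)
{
    printf("%d runs\n", count_runs(0xF0F0F0F0u));   /* prints "8 runs" */
    return 0;
}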

You asked "why waste 30 years searching an unlikely keyspace?", but 
until we know which is more or less likely, there's not much point 
trying to search in a certain area of the keyspace.

> Or, because d.net's initial random choice and search algorithm was very
> unlucky, they are doomed to not finding the key until the very last
> block is searched.

I think the blocks are chosen semi-randomly, so they'd have to be 
extremely unlucky.

> I might also be equally unlucky and choose an equally poor starting
> point and search sequence, and we both are doomed to finding the key as
> the very last block searched.
>
> That happens to be much more unlikely, with each of us searching with
> independent starting choices and search algorithms ... and one of us
> getting lucky and stumbling on the key a decade or two before the other
> would.

Searching the keyspace with two algorithms is no better than sticking 
to one. Proof: take f(x) and g(x), two algorithms that decide which 
keys to search. Define j(x) = f((x+1)/2) for odd x and j(x) = g(x/2) 
for even x. Now we have one algorithm that is equivalent to running 
the two in alternation: j produces f(1), g(1), f(2), g(2), and so on.
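
As a toy demonstration (f and g below are arbitrary placeholders, not 
real search-order functions):

/* Interleave two search orders f and g into a single order j. */
#include <stdint.h>
#include <stdio.h>

static uint64_t f(uint64_t x) { return 3 * x + 1; }   /* stand-in order #1 */
static uint64_t g(uint64_t x) { return 5 * x + 2; }   /* stand-in order #2 */

/* j alternates between the two orders exactly as defined above. */
static uint64_t j(uint64_t x)
{
    return (x & 1) ? f((x + 1) / 2) : g(x / 2);
}

int main(void)
{
    /* prints f(1), g(1), f(2), g(2), f(3), g(3) */
    for (uint64_t x = 1; x <= 6; x++)
        printf("j(%llu) = %llu\n", (unsigned long long)x,
               (unsigned long long)j(x));
    return 0;
}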

Unless RSA Labs knows which algorithm you're using, bothering with 
another one just costs more cycles to calculate.

What "starting choice"?

> Now ... what I proposed was a method of sharing searched key spaces and
> algorithms, along with an indication of how much of the space was
> searched. For the first few decades it will not make a difference, as
> the overlap can simply be considered second-checking the other's work.
> It also means that each team can proceed without having to recheck, as
> the other will over time if the key isn't found.
>
> We both independently have a 50/50 chance of finding the key in the
> first half of the key space.

So together you have a 75% chance of finding the key, but you do 100% 
of the work. Why not just search 100% of the keyspace?
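
Spelling the arithmetic out: the key is missed only when both 
independently chosen halves miss it, so

    P(found) = 1 - (1/2 * 1/2) = 3/4 = 75%,

while the combined effort is 2 * 50% = 100% of the keyspace, with about 
a quarter of it expected to be searched twice.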

> Personally ... trusting any PC which operates in a non-temp, non-power
> conditioned environment is senseless, as those machines only crash
> about a fraction of a percent of the time when they are corrupted by
> environmental problems ... and if you look around you, they crash
> frequently. You don't run mission critical applications on machines
> where you can not trust the results with a high degree of certainty.

If we restricted d.net to only those machines that are underclocked, 
on huge UPSes, and in massively air-conditioned rooms, the computing 
power would be so much lower that it's not funny.

> d.net is just as likely to get to the end of the keyspace because some
> corrupted machine missed the key, but returned verified partial keys in
> the same block.
>
> Since you seem to demand that I accept your perfect view of how I
> should accept d.net, consider I don't see d.net as a perfect solution
> either.

I never said you should accept d.net. You suggested a change to d.net 
which is ridiculous, would mess up the existing structure of d.net 
(searching only parts of a block? How do you get stats for that?), and 
which you claimed would find the key faster, with no evidence. If you 
can find a not-too-many-clock-cycles way of generating such a search 
order, feel free.

You quote "probabilistic number theory". I don't think there's such a 
thing (since, of course, number theory is mostly about positive 
integers). You suggest, without any reasonable amount of evidence, that 
some keys are much less likely than other keys, and therefore should be 
searched last, and somehow, you think you know which ones they are.

Absent any evidence, my assumption that the key is completely random is 
reasonable. Your suggestion that the algorithm (which you know nothing 
about) favours certain run lengths is dubious at best. If you knew 
something about the algorithm, it might be worthwhile. A paper has 
identified weaknesses in MD5, yet MD5 still has not been broken. How do 
you expect to identify weaknesses in a black-box function whose input 
you know nothing about? Cracking MD5 is comparatively easy, since you 
know exactly how that algorithm works.

I have challenged you to identify a weakness in a random function you 
know everything about. You seem to be ignoring it.

And you have not yet found a way to calculate your search order in a 
reasonable number of cycles. 144 cycles is far too many. Write the 
code, and someone may implement it. That is the d.net way.

- Purr


