[Hardware] Notes... The case for an open client
elektron_rc5 at yahoo.ca
Sun Aug 15 09:35:03 EDT 2004
On 15 Aug, 2004, at 16:35, jbass at dmsd.com wrote:
> It's not clear to me that searching blocks with high order bits set is
> the best way to attack this problem. While I'm the farthest thing from
> a math head, certain concepts are almost difficult to escape once you
> get hooked on crypto. One strong side interest for tackling the larger
> keys is to use concepts from probabilistic number theory and
> number theory to generate the search sequence. Some very significant
> parts of the key space may have very very low probabilities, plus
> several choices in key sequence generation can impact inner loop
> performance/work.
"Some very significant parts of the key space may have very very low
probabilities" is completely unfounded. Each key has an equal
probability, which I shall prove below. Choosing a search order other
than completely random does not decrease the effort of finding the
correct key. Also, what do you mean by "several choices in key sequence
generation can impact inner loop performance/work"? Checking each key
requires a fixed amount of effort.
If each bit is random (P(bit n is set) = 1/2) and independent of the
other bits (P(bit n is set | bit m) = 1/2), then we can multiply the
probabilities together, so P(key is x) = 1/2^72 for every x in
[0, 2^72). That is, ALL KEYS ARE EQUALLY LIKELY.
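A sketch of that argument in miniature, using 8-bit keys so the whole
space can be enumerated (8 instead of 72 is the only assumption; the
argument is identical):

```python
from itertools import product
from collections import Counter

# Enumerate every assignment of 8 independent fair bits and see
# which key value each bit pattern produces.
counts = Counter(
    sum(bit << i for i, bit in enumerate(bits))
    for bits in product([0, 1], repeat=8)
)

# Every one of the 2^8 key values arises from exactly one bit
# pattern, so P(key is x) = 1/2^8 for every x: all keys are
# equally likely.
assert len(counts) == 2**8
assert all(c == 1 for c in counts.values())
```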
Since you're not a math head, I'll translate: If every bit is
completely random (therefore independent of other bits), then the
chance of a bit being set is 1/2. So the chance of any key x being the
key k is 1/2^72. Please don't talk about "probabilistic number theory"
without any knowledge; this is A-level statistics, which I learned at
school.
(The rest of this email is superfluous)
If you know half the world uses dictionary words as passwords, then go
ahead, guess it. There are 234936 words in web2 (Webster's Second
International), but 8031810176 possible 7-letter sequences (34187 times
as many). Smart people will make their password not a dictionary word,
and therefore hard to guess; there are still very many possibilities.
Not checking dictionary words requires first testing whether each
random string of letters is a dictionary word anyway. Instead, it's
better to check dictionary words FIRST, since most passwords are
dictionary words; and if the password isn't one, you have only spent a
marginal amount of extra effort.
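One way to realise that ordering (a sketch; the word list and the
7-lowercase-letter password space are illustrative assumptions, not
anyone's actual cracker):

```python
import itertools
import string

def candidate_passwords(dictionary):
    """Yield guesses in the order argued above: dictionary words
    first, then every 7-letter string not already tried."""
    seen = set(dictionary)
    for word in dictionary:
        yield word
    for combo in itertools.product(string.ascii_lowercase, repeat=7):
        guess = ''.join(combo)
        if guess not in seen:
            yield guess

# Dictionary words come out first, then the brute-force sequence,
# skipping anything already tried.
guesses = candidate_passwords(['monkey', 'dragon'])
assert next(guesses) == 'monkey'
assert next(guesses) == 'dragon'
assert next(guesses) == 'aaaaaaa'
```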
Then, it's better to limit your words to a 'standard vocabulary' of,
say, 10000 words. One of my passwords is 4 dictionary words, no caps.
That's only 53 bits of entropy (assuming 10000 words), despite being
over 80 characters long.
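The entropy arithmetic for that passphrase, assuming each of the 4
words is drawn independently and uniformly from a 10000-word
vocabulary:

```python
import math

# Four independent choices from a 10000-word vocabulary:
# entropy = log2(10000^4) = 4 * log2(10000) bits.
entropy_bits = 4 * math.log2(10000)
assert 53 < entropy_bits < 54  # about 53.15 bits, as claimed
```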
That's your "introduction to game theory" (i.e. how to pick a strategy
when the other person has a brain).
Now, here's a mathematical question: In chess, who can be guaranteed to
not lose? (I expect this will take more computing power than will ever
be available in the galaxy, unless we figure out a way to get quantum
computing to work this out.)
> This generally suggests that checking keys with a run
> of 30 zeros or ones may be spending effort checking a very unlikely
> part of the key space.
Except the effort involved in searching keys with (max) run lengths of
30 is as negligible as (exactly equal to) the probability that it is
the right key.
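That fraction can be counted exactly. A sketch, using a standard
dynamic program over trailing-run lengths (the function name is mine):

```python
def count_without_long_run(n, r):
    """Count binary strings of length n with no run of r or more
    equal bits.  dp[k] = number of strings whose trailing run of
    identical bits has length exactly k (1 <= k < r)."""
    dp = [0] * r
    dp[1] = 2  # the two length-1 strings
    for _ in range(n - 1):
        new = [0] * r
        new[1] = sum(dp)          # append the opposite bit
        for k in range(1, r - 1):
            new[k + 1] = dp[k]    # extend the current run
        dp = new
    return sum(dp)

# Fraction of 72-bit keys that DO contain a run of >= 30 equal bits.
frac = 1 - count_without_long_run(72, 30) / 2**72
# It is tiny -- and it is exactly the probability that the right key
# lies in that part of the space, so skipping it saves nothing.
assert frac < 1e-6
```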
> I just don't think it's
> useful to spend 30 years searching unlikely corners of the key space
Let's say it takes 100 years. There's no difference between spending 30
years to search 30% of the keyspace and spending 70 years to search 70%
of the keyspace.
> Any key allocation system should allow the researcher to ask for a
> key block. If the key sequence is not purely sequential, the researcher
> should be able to specify that the block was not 100% checked, and that
> should be noted in the database as a block that should be sequentially
> checked at some point.
This is incredibly stupid, and would mess up stats ('this guy only
checked this-many percent of the block') and would make block
verification completely impossible ('I didn't check the keys in the
block that just happened to be the positive matches').