[RC5] LIFO? Is this true?

David Taylor David_Taylor at msn.com
Wed May 13 18:07:28 EDT 1998


>
> If you read the entire message, including the previous paragraph
> and the smiley, then the question is pretty clearly answered. I'm
> serious. I don't give a flip about dupes that I produce because
> I don't produce very many.

How can you be so sure?  As far as I can tell, D.NET doesn't give out
per-person duping stats.  You might be running a buggy client, and from
the rest of your messages it sounds as if your clients would start
processing ancient blocks whenever your net connection/pproxy goes down.

> > dupes are a serious issue and will
> > effect the timeframe of RC5-64, even if it doesn't affect the eventual
> > outcome. The last time Daa posted stats on our dupe rate, I believe that
> > something like 40% of the blocks returned were dupes, mostly
> from old 2.64
> > clients.
>
> Yeah, I still haven't figured that one out. Why would old
> clients preferentially be producing dupes?
>
> > As has been mentioned in the past, dupe rate and block latency are very
> > important in timed contests such as DES-II.

Old clients have bugs. I don't know exactly which bugs or which clients,
but many of the older 2.64* clients have them.

> > So, once again, please do not buffer any more blocks than you need to.
>
> Don't worry, I run through a pproxy here, as I outlined in
> the previous message. I am not a serious duper and I don't
> give a flip about how old the blocks are in the client's
> buffers because our network and hardware are reliable. As
> such the blocks in the in-buffs should be old because they
> never get touched.

Well, if your network is so reliable, you don't need to buffer many blocks
in the clients' in-buffs in the first place. And from what I can tell, the
old blocks sitting in your in-buff files stand a greater chance of being
dups than randomly generated blocks would.
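The reasoning can be made concrete with a toy model. This is only an
illustrative sketch: the recycle window, the completion fraction, and the
assumption that the keyserver reissues any block not returned within that
window are all invented by me, not taken from the actual keyserver code.

```python
def dup_chance_random(fraction_complete):
    # A randomly generated block is drawn uniformly from the whole
    # keyspace, so it duplicates already-checked work with probability
    # equal to the fraction of the keyspace finished so far.
    return fraction_complete

def dup_chance_stale(age_days, recycle_days, fraction_complete):
    # Toy assumption: once a buffered block outlives the keyserver's
    # recycle window, it has been reissued to someone else and completed,
    # so turning it in is a guaranteed dup. While still "live", the
    # keyserver hasn't handed it out twice, so the dup chance is zero.
    if age_days >= recycle_days:
        return 1.0
    return 0.0

# Early in RC5-64 only a tiny slice of the 2^64 keyspace is done
# (0.2% is an invented figure), so a random block is almost surely new,
# while a two-month-old in-buff block past a 30-day recycle window
# (also invented) is a dup every time:
print(dup_chance_random(0.002))         # tiny
print(dup_chance_stale(60, 30, 0.002))  # 1.0
```

Under these made-up numbers the conclusion above holds: the stale buffered
block is far more likely to be a dup than the random one.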

If the blocks won't get touched, why buffer them? So the subspace can be
finished, then re-assigned, then completed, and THEN your network goes down
and the clients start processing an ancient block that was checked ages
ago? As I said, it sounds like random blocks could have a better chance of
being non-dups.
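The subject line asks whether the client drains its in-buff LIFO. I don't
know what the real client's buffer code does; the following is only a
sketch of the behavior being described (fresh blocks processed first, the
oldest blocks never touched), under that LIFO assumption:

```python
class LifoBuffer:
    """Toy model of a client in-buff consumed newest-first (LIFO)."""
    def __init__(self):
        self._stack = []

    def fetch(self, blocks):
        # New blocks from the keyserver/pproxy go on top of the stack.
        self._stack.extend(blocks)

    def next_block(self):
        # LIFO: the client crunches the most recently fetched block.
        return self._stack.pop()

buf = LifoBuffer()
buf.fetch(["day1-a", "day1-b"])     # initial fill
for day in range(2, 5):
    buf.fetch([f"day{day}-a"])      # daily top-up from the pproxy
    buf.next_block()                # client finishes one block per day

# The day-1 blocks are still at the bottom, untouched and growing stale;
# they only get processed if the network outage outlasts the top-ups.
print(buf._stack)
```

If the client really works this way, the bottom of the buffer can sit
there for months, which is exactly the ancient-block scenario above.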

>
> If our net goes down for a day, the blocks will be consumed and
> then the clients will produce randoms. So for a few hours our
> network might be producing dupes. Then it's back on line and
> all get a fresh set of in-buffs. It's just not worth it Jim, to
> walk around and intrude on everyone's office, lab, or whatever
> and purge the buffers. This stuff is not supposed to be a con-
> sumer of real resources.

In other words, if your net goes down a good while after the last outage
(say 1-2 months), your clients will process some dups and then move on to
blocks that at least have a CHANCE of not being dups?

I don't think processing random blocks is good, but it could be a better
option than processing a 2-month-old in-buff.

--
To unsubscribe, send 'unsubscribe rc5' to majordomo at lists.distributed.net
rc5-digest subscribers replace rc5 with rc5-digest
