Why not use smaller packets? Was: RE: [RC5] client versions - AIX
rjb-dis at iafrica.com
Thu Oct 3 09:41:53 EDT 2002
Well, if it is that easy, Mr Beglinger, I am sure Distributed.net would love
some help on this issue. Their main task is to bring out RC5-72 within the
next two weeks. The changes you are suggesting would take quite a while to
put into the system. Why have multi-level processing of blocks when the
keymaster can do this daily for us anyway? I would rather have my personal
proxy just ask for blocks and serve clients than have it do some
processing as well (that would mean fewer cycles for the client :) ).
At the moment, the system works pretty well without the changes you are
suggesting. I would say that most of the participants are happy with it and
the block size is not an issue. I would rather have RC5-72 running sooner
rather than later (my room is very cold), and major changes like this aren't
worth the delay.
----- Original Message -----
From: "Jack Beglinger" <jackb at guppy.us>
To: <rc5 at lists.distributed.net>
Sent: Thursday, October 03, 2002 5:46 AM
Subject: Why not use smaller packets? Was: RE: [RC5] client versions - AIX
> > You may be disappointed with RC5-72. Based on the success of the longer
> > workunits in the OGR projects, we are scaling the RC5-72 workunits to a
> > larger size. It is very possible that you may get RC5-72 workunits that
> > take much longer than they did with RC5-64.
> Great - DNET, use larger blocks!! But why does the personal proxy have to
> hand out the larger ones too? There is no need, if you make it a super
> proxy that does the splitting.
> > Changing the minimum from 2^28 to 2^32 means that the smallest workunit
> > under RC5-72 will take 16 times longer to process than the smallest
> > workunit for RC5-64. A slow machine that used to do a 2^28 in two hours
> > will now take 32 hours to finish a workunit for RC5-72. This workunit
> > would also earn 16 times as much stats credit, just as if it had done 16
> > workunits of 2^28.
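The arithmetic in the quote above is easy to check with a few lines (a sketch only; the 2^28 base and the two-hours-per-2^28 machine come straight from the quote):

```python
# A 2^N workunit contains 2^(N-28) of the old minimum 2^28 units, so
# both crunch time and stats credit scale by that same factor.
BASE = 28  # RC5-64 minimum workunit exponent

def scale_factor(exponent):
    """How many 2^28-sized units fit in a 2^exponent workunit."""
    return 2 ** (exponent - BASE)

# A machine that does a 2^28 in two hours:
for exp in (32, 36, 40):
    factor = scale_factor(exp)
    print(f"2^{exp}: {factor}x the work, {factor * 2} hours")
```

This reproduces the figures quoted: 16x (32 hours) for 2^32, 256x for 2^36, and 4096x for 2^40.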
> You are great with math... but lets see - you will give us create at the
> reference point but refuse to give out the block in smaller than 2^32....
> seams you are wasting a lot of processing space and time for work you will
> not be supporting.... Not a good use of resources, unless you are willing
> allow for sub sections to be processed in units that match the 2^28 size.
> > A workunit of 2^36 would take 256 times longer, and 2^40 (if we offer
> > them) would take 4096 times longer. With proper checkpointing in place,
> > this is no problem. I plan on using the largest workunit available as
> Checkpointing IS sub-blocks. You have already set up the clients to handle
> smaller work units -- use similar or even the same logic in both the client
> and the personal proxy to handle smaller units.
> Oh, you are worried that smaller units will come back to DNET. If you read
> my first note... the personal proxy is responsible for returning only whole
> blocks. If a part is missing... it would get a client to reprocess it and
> return the whole block. Just like a client that is working on a block and gets
> interrupted: the checkpoint is used to help it get back on track and return a
> completed block.
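The split-and-recombine scheme being proposed here can be sketched in a few lines. This is purely hypothetical illustration: none of these names exist in the real perproxy, and the sub-unit size (2^28) is taken from the discussion above.

```python
class SplitBlock:
    """Hypothetical perproxy bookkeeping: one 2^32 block from the
    keymaster, split into 2^28 sub-units for the local clients.
    Only a whole block is ever reported back upstream."""

    def __init__(self, start, exponent=32, sub_exponent=28):
        self.sub_size = 2 ** sub_exponent
        count = 2 ** (exponent - sub_exponent)  # 16 sub-units for 2^32
        self.offsets = {i: start + i * self.sub_size for i in range(count)}
        self.done = set()

    def hand_out(self):
        """Next unfinished sub-unit; a lost one simply gets re-issued."""
        for i, offset in self.offsets.items():
            if i not in self.done:
                return i, offset, self.sub_size
        return None  # nothing left to hand out

    def turn_in(self, index):
        self.done.add(index)

    def complete(self):
        """True only when every sub-unit is back, i.e. when the whole
        block can be returned to the keymaster."""
        return len(self.done) == len(self.offsets)
```

The keymaster sees exactly what it sees today: one block out, one block in.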
> > The fullproxies do receive larger "superblocks". I'm not sure of the
> > size, but they are significantly larger. Perproxies don't receive
> > superblocks because (a) superblocks are only available directly from the
> > keymaster, and (b) there is too much potential for abuse.
> You have the same abuse now. The way you filter is to watch for high return
> rates and check out what is happening. The same would go for this; it is
> no different.
> > The problem is, the smaller we allow the workunit to be subdivided, the
> > more details we need to track at the keymaster. It's not a safe
> > assumption that work handed out by a perproxy will be returned to the
> > same proxy to be recombined. Even if they do come back to the same
> > place, there could be enough of a delay to increase the storage
> > requirement of the proxy, and to delay credit to those who turned it in
> > first. If I do half of a workunit, and the other half gets deleted from
> > someone's buff-in, I might never get credit at all!
> No, it is not a problem, unless you make it into one. Subdividing work
> into a usable size for a local network, having it be processed by
> small "hands", and reassembling and returning it to the key server in the
> sky... The key server would be none the wiser. It gave out a block and got
> back a block - would it care which branch of a multi-threaded processor
> worked on the packet? The question is... is it done right.
> > It's impractical to use smaller units at the clients than we track at
> > the keymaster. If a perproxy splits a 2^32 16 ways to 16 different
> > clients, then who gets stats credit for the finished 2^32?
> Again, the keymaster would not be handling the small packets - that is the
> personal proxy's job.
> And to answer your question... allow the proxy to do one of two things:
> 1) Be configured to return the user, machine type, and OS type (including
> mixed setups) - this way the personal proxy for small packets would be the
> only point of contact to DNET. Also, the only way to configure a small-packet
> client would be to have a personal proxy.
> 2) Allow the proxy to return two types of information with a key block:
> the packet completion, and a stats breakdown so that multiple machines or
> users or ... will be pre-processed, ready to add to the stats boxes. With
> this added in, it would be easier for teams to keep their own stats... because
> the personal proxies are giving them out.
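Option 2 above amounts to the proxy sending one completed block plus a pre-aggregated credit summary instead of sixteen raw sub-unit reports. A minimal sketch, with made-up field names (the real reporting format is not specified here):

```python
from collections import Counter

def summarize(sub_reports):
    """Aggregate per-client sub-unit reports into one upstream record.

    sub_reports: list of (user, os_name, units_done) tuples from the
    local clients that crunched pieces of the same block.
    """
    breakdown = Counter()
    for user, os_name, units in sub_reports:
        breakdown[(user, os_name)] += units
    # One completion total for the keymaster, one credit breakdown
    # that a team could also use for its own local stats.
    return {"block_complete": sum(breakdown.values()),
            "credit": dict(breakdown)}
```

For example, `summarize([("alice", "aix", 4), ("bob", "linux", 12)])` reports a complete block of 16 units with credit split 4/12.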
> > In the final calculation, it takes just as long to do a single 2^32 as
> > it does to do 16 2^28's, and the stats credits are the same, so why make
> > a big deal of it? Most distributed projects give you no control
> > whatsoever of the size workunit you want. We've considered this option
> > too, specifically because it makes discussions like this moot.
> You are right that the time to process a 2^32 is the same as processing 16
> 2^28s... except that:
> 1) I am processing in parallel, so the "machine" appears to crunch that 2^32
> 16 times faster.
> 2) I am not wasting time for the CPU to control an IDE drive while
> checkpointing.
> 3) I am using less power and creating less heat by not having excess
> circuitry running.
> > The clients already take responsibility for picking their own randoms.
> > It doesn't make sense to shift this to the perproxies, because the
> > client still needs to generate randoms if it can't reach the perproxy.
> That is right, that is a backup... the client generates its own random
> blocks. Which I have found, when I had a cable break for 3 days... they are
> not that random, for a client or for the network of machines.
> If the break is outside of the firewall/personal proxy, allow the proxy to
> generate random BIG BLOCKS and hand out smaller work units... again... the
> keymaster in the sky will get only large blocks -- maybe even very large
> random blocks. The clients then are not trying to guess random blocks and
> overlapping, given the number of machines processing random blocks at
> the same time (and in a very small subspace).
> > Each workunit returned must be ticked off as "completed" in the
> > keymaster. It doesn't make sense to have the proxies send back a
> > summary when we still need all the detail. We have also implemented
> > mechanisms in the past where work from certain client versions are
> > discarded at the master. If a proxy combines work from 10 computers
> > using the same ID but different versions and platforms, we lose the
> > ability to filter out the noise. This also makes the perproxies much
> > more complicated, which makes the code much harder to modify and
> > maintain.
> Try looking at it from my side -- it does not make sense to have the keyservers
> and stats server working all the time on small things (even 2^32 is a small
> thing!). Why not find a way to have the DISTRIBUTED.NET "super" machine
> distribute more of the work, so that maybe... real-time stats could come to
> be?
> To filter out the "noise", close the projects to client types - like you do
> for other projects. That way you can disable bad clients or proxies and get
> the load off servers by preventing the connections.
> To solve the version issue... only allow a block being handled by a personal
> proxy to be worked on by a single version of the client, maybe even OS and
> CPU... but I do not see that level of need. Have one of the stats of the
> personal proxy show the number of different blocks it is having to track,
> so the user could get the versions in sync.
> > We like the proxies as waypoints, merely passing work from here to there
> > without knowing much about what is inside each workunit. This makes it
> > easier for our network to support new projects without a lot of changes.
> The proxies can also be viewed as clients without crunchers. And clients
> can be viewed as proxies with crunchers. You have people who run
> multiple clients sharing common buffers. Even the SMP machines run one
> client per processor. Much of the work is the same, and all could be one. So
> on an SMP machine, one becomes the master and the others check work in/out
> with it.
> Please step out of the box and look around; there is more than one way to
> see the issues.
> To unsubscribe, send 'unsubscribe rc5' to majordomo at lists.distributed.net
> rc5-digest subscribers replace rc5 with rc5-digest