# [RC5] d.net

Joe Zbiciak j-zbiciak1 at ti.com
Thu Jul 16 15:28:37 EDT 1998

'GMH13 at aol.com' said previously:

> 4)  During the past week someone on this list pointed out that when you
> request a 2^31 block there are several different combinations you might
> receive, making choosing a buffer size difficult.  Would it be possible to
> set up the proxies such that they have four buffers (2^28, 2^29, 2^30, and
> 2^31)?

On its face, this seems like a good idea, and it might be made to work.
There are a couple of problems, though.

First, if I understand things correctly, there are currently 8 valid
block sizes:

| Block size | Number of 2^28 blocks |
|------------|----------------------:|
| 2^28       |                     1 |
| 2^29       |                     2 |
| 3*2^28     |                     3 |
| 2^30       |                     4 |
| 5*2^28     |                     5 |
| 3*2^29     |                     6 |
| 7*2^28     |                     7 |
| 2^31       |                     8 |

So, you'd actually need 8 lists.  You'd also still need the clients to
accept odd-sized blocks when larger blocks are simply unavailable.  (See
the explanation below.)

> This way the proxies wouldn't need to cut blocks into odd pieces.

As long as people request blocks smaller than 2^31, they will.  I
believe all blocks initially start as 2^31 blocks (or quite possibly,
2^32 blocks, but bear with me).  The main point is that all blocks
come from a common keyspace, and that you can always make a smaller
block from a larger block.

If someone requests a single 2^28 block, the top 2^31 block will be
broken into a 2^28 block and a 7*2^28 block.  Now if someone requests a
2^29 block, since none are yet available, we could carve it from the
next larger available block (the 7*2^28 block), leaving the 2^29
block and a 5*2^28 block.  This sort of action continues until we run
out of 2^31 blocks completely, at which point we start issuing blocks
from the smaller-block-size lists when the requested size is not
available.

A stagnation problem occurs in this scheme, though, when the vast
majority of clients request 2^31 blocks, since these will move quickly
through the proxies, whereas the smaller-sized blocks will tend to
collect.  This works against the FIFO buffering the proxies provide.
In the limit, stagnation could cause the smaller-sized block lists
to grow without bound.

You could probably fix the stagnation problem by implementing a flush
timeout (where the smaller block-size lists get flushed every so
often), or by implementing some other sort of timeout mechanism.  In
the end, though, you'll still have clients buffering a varying amount
of work, so you haven't solved the original problem.
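For illustration, a flush timeout might look something like this
sketch.  The names, the one-hour figure, and the policy are all
assumptions on my part:

```python
import time

STALE_AFTER = 3600.0  # flush lists idle for an hour; figure is made up

def stale_lists(counts, last_issued, now=None):
    """Return the block sizes whose lists should be flushed: non-empty
    lists that haven't issued a block within STALE_AFTER seconds."""
    now = time.time() if now is None else now
    return [size for size, n in counts.items()
            if n > 0 and now - last_issued.get(size, 0.0) > STALE_AFTER]

# A 2^28 list that hasn't moved in hours would be flushed; the busy
# 2^31 list would not.
print(stale_lists({1: 40, 8: 3}, {1: 0.0, 8: 9500.0}, now=10000.0))
```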

THE PUNCH LINE
==============

I believe the real fix is to express buffer sizes in equivalent numbers
of 2^28 blocks, so that the total workload for a buffer of size N is
constant, regardless of whether the blocks are 2^31 or 2^28 or
something in between.  That way, if I say "buffer 200 blocks, preferring
2^31 blocks", I could either get 200 2^28 blocks or 25 2^31 blocks or
some combination in between.  You would still keep the block-size
preference setting so that you could place an upper bound on the
granularity of work attempted at any given time.
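A sketch of that accounting, under the same assumption that sizes are
counted in 2^28 units (the function and data structure are hypothetical,
just to show the arithmetic):

```python
def fill_buffer(available, target_units, prefer=8):
    """Buffer blocks totalling `target_units` of work (in 2^28 units),
    taking blocks of size `prefer` first and falling back to smaller
    sizes.  `available` maps size -> count the proxy can hand out."""
    taken, units = [], 0
    for size in range(prefer, 0, -1):
        while units + size <= target_units and available.get(size, 0) > 0:
            available[size] -= 1
            taken.append(size)
            units += size
    return taken, units

# "Buffer 200 units, preferring 2^31 blocks": 25 2^31 blocks if the
# proxy has them...
blocks, units = fill_buffer({8: 100}, 200)
print(len(blocks), units)                        # 25 200
# ...or a mix when it doesn't, but the same total workload either way.
blocks, units = fill_buffer({8: 10, 1: 500}, 200)
print(blocks.count(8), blocks.count(1), units)   # 10 120 200
```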

Thoughts?

I know that I would set my offline computers to use 2^31 blocks instead
of 2^28 blocks if such a scheme were implemented.  Anyone else in the
same boat?

Regards,

--Joe

--
```
+------- Joseph Zbiciak ------+
|- - - j-zbiciak1 at ti.com - - -|  "The truth of a proposition has nothing
| -Texas Instruments, Dallas- |   to do with its credibility.
|- - #include <disclaim.h> - -|   And vice versa."       -- Lazarus Long
+-----------------------------+
```

--
To unsubscribe, send 'unsubscribe rc5' to majordomo at lists.distributed.net
rc5-digest subscribers replace rc5 with rc5-digest
