[rc5] An Idea for the V3 protocol

Chris Arguin Chris.Arguin at unh.edu
Mon Oct 27 12:32:05 EST 1997


On Mon, 27 Oct 1997, Sebastian Kuzminsky wrote:

>    Once you let the clients say "i need somebody to process this",
> things change:  it's no longer a centralized system with distributed.net
> at the top, proxy-servers in the middle, and us clients with our CPUs at
> the bottom.  Instead you have a generalized computing resource, where
> any machine may request more CPU from the network.  How much CPU is
> actually made available to the requesting party depends on how many
> people they can convince to part with their resources.
> 
> 
>    Of course, if we're going to do that, then we might as well go all
> the way and share all our resources (CPU, memory, disk, ...).

Yes, the system I envision would allow the user to configure how much
of each resource will be available to this job. Considering current
network speeds, however, I think we can treat memory and disk as the
same, at least as regards other clients asking for information.
Obviously, for any local processing it makes a big difference.
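
To make that concrete, here is a minimal C sketch of per-job resource
limits; the struct, its fields, and the numbers are all hypothetical,
not taken from any real client:

    /* Hypothetical per-job resource limits a client might let the
     * user configure.  All names and values are invented here. */
    #include <stdio.h>

    struct resource_limits {
        int  cpu_percent;    /* max share of the CPU this job gets */
        long mem_bytes;      /* max RAM the job may hold in-core   */
        long disk_bytes;     /* max scratch space on disk          */
    };

    /* From another client's point of view, memory and disk are both
     * just storage it can ask us for; only local processing cares
     * about the difference. */
    long remote_storage_bytes(const struct resource_limits *r)
    {
        return r->mem_bytes + r->disk_bytes;
    }

    int main(void)
    {
        struct resource_limits r =
            { 50, 16L * 1024 * 1024, 100L * 1024 * 1024 };
        printf("storage visible to the network: %ld bytes\n",
               remote_storage_bytes(&r));
        return 0;
    }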


>    Ideally the software developed would be usable and efficient in both
> local distributed computers (a smallish cluster of trusted, cooperating
> machines) and widely distributed computers (a huge network of mutually
> untrusting, cooperating machines).
> 
>    This general resource-sharing system described above is significantly
> more complicated than the current distributed.net effort.  In the Amoeba
> model, hosts run process servers, with some degree of access restriction
> and authentication.  These servers present an interface on the network,
> and allow or deny, based on site-specific policy rules, remote users to
> spawn processes.  Similar facilities exist for accessing other
> resources, such as memory and disk.  Truly gigantic virtual computers
> can be built effortlessly using these primitives.  (There is still the
> problem of trust.)

Yes, that is a major problem. One ideal of distributed.net, as I see
it, is that anybody can easily donate CPU cycles to any process. We
don't want them to have to register and then wait for approval before
they can help; that would stop a lot of people from helping.

>    Depth-first is not a viable search algorithm for chess.

I might have stated this wrong, although I am a bit out of my league
here. People have commented on using distributed.net as a chess engine.
The main idea seems to be to split up the possible moves and have each
machine report back the top ten or so moves it found. The problem with
this is that each machine can only search so deep, and may miss moves
that are better in the long run. I was thinking of having each client
hand off its permutation of moves to other clients.

Actually, a better design would be this: the server creates a matrix of
each possible move (only one move in advance), with a score value. As
each client evaluates moves based on that, it updates the score. When a
certain amount of time has elapsed (or possibly if one move seems
overwhelmingly good), the server tells all the clients to stop
evaluating. In this way, the server picks the best move it could find
within 'x' minutes, just like any human would do (in a timed game).

Note that this implies that the server can reach the clients, which is not
currently part of the distributed.net design, and adds yet more
complexity. 
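
Here is a minimal C sketch of that score-matrix design, with the
networking left out entirely; the move encoding, the early-cutoff
threshold, and every name below are assumptions of mine, not part of
any existing client:

    /* Sketch of a server-side score matrix for moves one ply ahead.
     * Clients report evaluations; the server keeps the best score
     * seen for each move and stops at a deadline. */
    #include <stdio.h>
    #include <time.h>

    #define NMOVES       32     /* legal first moves, found by server */
    #define OVERWHELMING 900    /* stop early past this score         */

    struct move_entry {
        char move[8];           /* e.g. "e2e4"; encoding is invented  */
        int  score;             /* best evaluation reported so far    */
    };

    static struct move_entry matrix[NMOVES];

    /* Called whenever a client reports a deeper evaluation. */
    void update_score(int move_idx, int score)
    {
        if (score > matrix[move_idx].score)
            matrix[move_idx].score = score;
    }

    /* Collect updates until the deadline (or an overwhelming score),
     * then pick the best move found so far. */
    int pick_move(time_t deadline)
    {
        int best = 0, i;
        while (time(NULL) < deadline) {
            /* ... receive (move_idx, score) pairs from clients and
             * call update_score(); break early if any score beats
             * OVERWHELMING.  No real network code in this sketch. */
            break;
        }
        for (i = 1; i < NMOVES; i++)
            if (matrix[i].score > matrix[best].score)
                best = i;
        return best;
    }

    int main(void)
    {
        update_score(3, 120);   /* pretend one client reported in */
        printf("server picks move %d\n", pick_move(time(NULL)));
        return 0;
    }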

> ]                                          Secondly, it seems that RSA-155
> ] takes a lot of resources. More than available on any one machine at
> ] points. What if we came up with a distributed memory net? Sure, it would
> ] be slow. But it allows the creation of a machine that COULD compute these
> ] values. 
> 
> 
>    I think it would be a great research exercise to implement a
> network-shared memory scheme and use it for the low CPU, high memory
> requirement factoring algorithm mentioned in an earlier RSA-155 post.
> This may be so I/O intensive that, using today's communications
> infrastructure, the job would be prohibitively slow.  How many of the
> people participating in the distributed.net computer connect to the
> Internet at less than 100 kilobits per second?
>
>    The trick would be to make the resource sharing scheme efficient on
> both fast and slow networks.

For myself, I connect at 57.6k (at least it's really 57.6k, and not
over a phone line). Since one of the nice things about distributed.net
is how you can use it and not even notice that it is running, it would
be a real problem if it started tying up your network connection.

If the protocol could just define HOW to share memory, I think it would
be the responsibility of each client to ensure 'locality', so that as
few requests (or as small requests) as possible are made.
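
As a sketch of what 'locality' could mean in practice, assume a client
caches remote memory a page at a time, so that network requests stay
few and large; the page size and the fetch routine here are invented
for illustration:

    /* Byte reads from a network-shared memory go through a one-page
     * cache, so only cache misses touch the network. */
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    static char cache_page[PAGE_SIZE];
    static long cache_base = -1;    /* address of the cached page */

    /* Stand-in for a real network fetch of one remote page. */
    static void fetch_remote_page(long base, char *buf)
    {
        memset(buf, 0, PAGE_SIZE);  /* pretend the peer sent zeros */
        printf("network request for page at %ld\n", base);
    }

    char shared_read(long addr)
    {
        long base = addr - (addr % PAGE_SIZE);
        if (base != cache_base) {
            fetch_remote_page(base, cache_page);
            cache_base = base;
        }
        return cache_page[addr - base];
    }

    int main(void)
    {
        long i;
        for (i = 0; i < 10000; i++)
            (void)shared_read(i);   /* 10000 reads, only 3 requests */
        return 0;
    }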

Maybe the clients should register themselves when they first start up.
So the first time you run them, they say, "Hey, I'm a Pentium 166, I'm
allowed to use up to 16 Megs of RAM at any one time and 100 Megs of
hard drive. I am always connected (barring network failures), a 24/7
machine, and have a network speed of 57.6 kbps."
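
Something like the following C struct could carry that first-time
registration; the field names and wire format are made up here, purely
to restate the example above:

    /* Hypothetical one-time registration message from a client. */
    #include <stdio.h>

    struct client_registration {
        char cpu_type[16];      /* e.g. "Pentium-166"           */
        long max_ram_bytes;     /* RAM the client may use       */
        long max_disk_bytes;    /* disk the client may use      */
        int  always_connected;  /* 1 = 24/7, 0 = intermittent   */
        long link_bps;          /* network speed in bits/second */
    };

    int main(void)
    {
        struct client_registration reg = {
            "Pentium-166",
            16L * 1024 * 1024,      /* 16 Megs of RAM   */
            100L * 1024 * 1024,     /* 100 Megs of disk */
            1,                      /* always connected */
            57600L                  /* 57.6 kbps        */
        };
        printf("registering %s, %ld bps link\n",
               reg.cpu_type, reg.link_bps);
        return 0;
    }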

Then the server could become even more intelligent in distributing
jobs. As an example (using totally arbitrary values), it might say,
"Well, the RSA client runs best on a Pentium, but requires 20 Megs
in-core. So the next best seems to be SETI. Your network connection and
resources are more than acceptable for that."
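
Continuing with the same totally arbitrary values, the server's
matching step might look like this C sketch; the project table and its
requirements are invented for illustration:

    /* Pick the first project whose requirements the client meets. */
    #include <stdio.h>

    struct project {
        const char *name;
        long min_ram_bytes;     /* in-core requirement      */
        long min_link_bps;      /* bandwidth requirement    */
        int  needs_always_on;   /* must the client be 24/7? */
    };

    const char *choose_project(long ram, long bps, int always_on)
    {
        static const struct project projects[] = {
            { "RSA",  20L * 1024 * 1024, 28800L, 1 },
            { "SETI",  8L * 1024 * 1024, 14400L, 0 },
            { "rc5",   1L * 1024 * 1024,     0L, 0 },
        };
        int i;
        for (i = 0; i < 3; i++) {
            const struct project *p = &projects[i];
            if (ram >= p->min_ram_bytes && bps >= p->min_link_bps &&
                (always_on || !p->needs_always_on))
                return p->name;
        }
        return "none";
    }

    int main(void)
    {
        /* the Pentium 166 above: 16 Megs RAM, 57.6 kbps, 24/7;
         * RSA's 20-Meg requirement fails, so this prints "SETI" */
        printf("assigned project: %s\n",
               choose_project(16L * 1024 * 1024, 57600L, 1));
        return 0;
    }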

I know people are gonna complain about the server choosing what project
you join. That choice is just for the most efficient use of resources.
We may be better off without it, just to get more people to join.

I would also like to point out that I am not trying to rule out
machines that are only on the Internet now and then. There would be
plenty of projects that would fit that model (like rc5 did). I'm just
trying to find a way to make other projects, which may require machines
to be online more, available within distributed.net.

--
Chris Arguin                 | "...All we had were Zeros and Ones -- And 
Chris.Arguin at unh.edu         |  sometimes we didn't even have Ones."
                             +--------------+	- Dilbert, by Scott Adams
http://leonardo.sr.unh.edu/arguin/home.html |





