[PROXYPER] Proxyper 300+ a lemon? (long)
jlawson at bovine.net
Mon Feb 15 22:39:55 EST 1999

I'm sure this message will generate a number of significant replies.
Please understand that the original message to which I am replying was
very painful to me, as were many of the other significant criticisms that
have been made on this mailing list. I have already had to stop and
resume composition of this message several times because of my
frustration. If you do decide to reply to this message or to a thread of
it, please give your message some thought before sending it. It is
upsetting to see the amount of non-useful content floating across this
list.

As has been noted several times, and even within the text file that
accompanies the proxy, the 300+ builds are a complete rewrite of the old
proxy code, with very little reuse or reference of the original code,
since it had become unacceptably structured and constrained by its
original design and implementation.
Because of this, the difference between the pre-280 and post-300 builds
is significant. Features that are absent from a 300+ build are not
missing out of a desire to spite users; quite likely the 300+ series is
simply still under development. I might add that development on the
proxies (and the full servers and keymaster) is done almost entirely by
one person, myself, without compensation. I am in fact a full-time
student, I started distributed.net because I enjoy doing this, and I do
not appreciate receiving non-constructive feedback.

Yes, I am frequently forced to make development decisions, and many times
they do introduce slight compatibility differences with previous
versions, but everything being done is in the hope of moving development
forward. The development decisions I typically make encourage more
modular support and easy integration with future efforts of
distributed.net. I realize that the source is mostly closed at this
point, so you are not able to see the actual structural changes being
made behind the scenes, but with the 300+ builds it is tremendously
easier to support new contests. In fact, we already have an initial
implementation of OGR support within the proxy codebase and are working
to complete its development and integrate OGR into the client code as
well.

The transition to the 300+ builds was indeed rocky, and I wish it could
have been smoother for the users running personal proxies. But
truthfully, most of the design goals for 300 were to improve efficiency
for high block-count proxies, in particular the full servers within our
network. Supporting full servers with the great number of blocks
necessary to provide continuous operation for several hours, in case the
keymaster lost connectivity, was prohibitively expensive in terms of RAM
usage, but also in terms of I/O usage (constant rewriting of buffer
files, perhaps many tens of megabytes in size, every few seconds).
Furthermore, the non-coalesced transmission of network blocks back to the
upstream server was extremely inefficient in terms of packet counts and
packet collisions on the networks on which we were operating the full
servers.

With all of these design issues in mind, the new 300+ build proxies
entailed a complete rewrite of contest handling to support easy insertion
of new contests without significant integration issues. In-memory
buffering had to be completely rethought to allow partially-cached,
incremental access to buffer files that did not depend upon the number of
blocks being cached. The file structure of the disk buffers had to
become much more complex, and it is now in fact a mini-filesystem that
internally maintains the used/unused state, and the allocation sequence,
of the allocation units within the flat file itself. As such, it can
suffer from the same problems that face filesystems on any complex
operating system (lost allocation units, corrupted allocation tables,
cross-linked units, fragmentation, and many others). The -repair option
attempts to fix all of those issues (including fragmentation) as best it
can when you invoke it.

On top of that, the network subsystem within the proxy also had to be
completely rewritten, so that there are now distinct layers between the
different encoding and translation schemes, with full buffering between
them to encourage larger network packets to be sent at once. With
distinct layers between the network encoding subsystems, it should now be
easier to implement data compression between the full servers and the
keymaster, in an attempt to reduce the network usage we currently
consume. We actually experimented with implementing data compression
within the 280+ full servers, but due to complications within the design
assumptions of the original proxy code, it was a far from desirable (and
actually not terribly operational) implementation.
From this point of view, I hope it can be appreciated that most of the
design goals of the personal proxy revisions are actually not to support
the personal proxy, but instead to support mainly the full servers and the
keymaster, which in turn support a far greater number of users than those
supported only by a personal proxy. Although I try to design and release
a personal proxy that might contribute some benefit to users in personal
capacity situations, the painful truth is that the personal proxy is not
the primary focus.
Since so many personal proxy users seem to greatly dislike the increased
complexity inherent in the new buffering structure, I may eventually
decide to completely separate the pproxy buffering code from the new
buffering code that will continue to be used by the full servers and the
keymaster.

On Sat, 13 Feb 1999 root at brain.acmelabs.com wrote:
> On Fri, 12 Feb 1999 rc5 at xfiles.nildram.co.uk wrote:
> > What exactly makes the 30x proxy 'a lemon'? It would be nice if you back
> > up your arguments slightly...
> Oh! I have! I have! Back then I first made the mistake of upgrading to
> 30x to take advantage of the desIII quickstart thing.
> Thankfully the worst problem, the proxy fetching waaay too many blocks
> has been fixed.
One of the major design goals of the new proxy source code was to improve
the performance and efficiency of the higher-capacity proxies, in
particular the full servers (the full servers run a proxy that shares
much of the code used within the personal proxies). The over-fetching in
low-capacity pproxies was a result of over-zealous windowed fetching done
in an attempt to improve network efficiency.
> The next two things that burn me is the removal of two features, which to
> me were quite important. THe datached and showcumulativestats lines in the
> ini file.
> About detached, sure its a commandline option but why should I have to
> type it every damn time instead of throwing it in the ini file and forget
> about it? This change is not a feature nor a bug, just programmer error.
At first glance the change may seem arbitrary, and in some respects it
is. It could easily be implemented either way; however, I did have
design goals in mind when I moved it. In particular, I have made all of
the command line options (-unlock, -repair, -detach, no option, -install,
-uninstall) selections of major modes of operation that affect how the
proxy starts up and remain unchanged throughout the continued operation
of the proxy. Although it would not significantly complicate the code to
allow the detach option to be specified as an ini option, as before, is
it really a big enough deal to warrant arguing over? The build 300
release gave us the opportunity to rewrite something that had grown
unreasonably entangled with dependencies and difficult to maintain.
Consolidation, regrouping, and reorganization of features were some of
the things the rewrite allowed us to do. The movement of -detach is part
of that.
> About showcumulativestats, my stats program used to use this to monitor
> keyrate in semi realtime, if it dropped below a certain point I knew
> something was wrong. With a computer or a network or something. That and
> the stats it provided were nice. Those are gone, feature? bug? No, just
> programmer laziness.
Cumulative block counts were absent from builds 300-303 inclusive merely
because they were inadvertently left out, being a non-critical option.
(Remember the 300+ code was implemented from scratch, without much
reference to the old 280 code, since the two code bases were designed to
share little similarity.) Cumulative block counts were added back in 304
(always displayed, without the need to explicitly enable them), and
additionally the status message display period was made adjustable (it
was hardcoded to 30 seconds in the early 300 builds and did not become a
configuration directive within the ini file until 304).

    /* the new status-line format string; argument list truncated
       in the original message */
    "%s r=%i/%i, d=%i/%i, %.1f %s/sec, tot=%d",
    readyb, maxready, doneb, maxdone, ...

> Next, as if that wasnt enough. The logcompressing feature is broken.
> I used to use this to feed a whole days logs into another little stats
> program. It was nice, it was efficient.
Yes, I agree it was useful. The full servers all used it exclusively for
transferring out logs by ftp to our file backup/archive server.
Furthermore, log transfers from the keymaster to the stats server were
partially done by it as well, so having to disable it affected us too.
But truthfully, the extremely frequent buffer corruption that occurred
when the logcompressor was invoked was a more serious issue. Thankfully,
the issue has now been resolved. It turned out that the forked child was
implicitly performing libc-related stream flushing (triggered by writing
to the logfile from the child process) at an inappropriate time, which
was also causing other file-buffer disk flushing activities to occur.
The logcompressor will be reenabled in the next version of the personal
proxy, but I would prefer not to make a release at this point, since we
are in the middle of many significant internal code revisions within the
proxy.
> Software that degrades with each new version is a lemon if you ask me.
> Did microsoft take over proxy development while I wasnt looking?
Please save the non-constructive criticism for people who are not hurt by
what you say to them. As I've mentioned, I take my participation within
distributed.net very seriously, ever since my initial creation of the
first buffering gui rc5-56 proxy (which probably marked the start of
distributed.net itself), which eventually evolved into the keymaster,
then the full servers, and the win32 gui personal proxy. The transition
from the gui servers to the unified console proxies was difficult as
well. Most of you reading this were probably not even aware of our
activities at the time I just described. I am not asking for loyal
followers to bow before me, but I am asking for a minimal amount of
respect for the tremendous amount of time that the distributed.net coders
and organizers (not specifically me) have put in to keep things working
smoothly, or working at all, for that matter.

I realize that stats are an important issue for many of you, and it hurts
me to see that so many users are discouraged by their inability to see
their daily placement. We are already working as best as we can to get
things working reasonably well again. However, I'm sure that once stats
are again made available, we'll continue to receive criticisms from people
regarding anything they feel is worth criticising, such as messages
insisting that if we only reimplemented our database as a flat text file
queried with grep, sort, and cut, and ran it on Linux on a 486 with 8 MB
of RAM, everything would be much more efficient. (Please don't comment
on this; I don't terribly care about stats issues, nor do/did I have any
role in stats provision.)

In conclusion, please be patient when you are frustrated with something.
We all get frustrated, but complaining unconstructively doesn't benefit
anyone.

Jeff Lawson http://www.cs.hmc.edu/~jlawson/ http://www.bovine.net/
Jeffrey_Lawson at hmc.edu jlawson at bovine.net bovine at distributed.net
Programmer, Developer, Mascot, Founder of the largest computer on earth!
Don't waste those cycles! Put them to use! http://www.distributed.net/