[rc5] "bad guys": more likely in chess?
robertb at geocities.com
Mon Nov 10 17:43:56 EST 1997
The debate about malicious use of the source code -- hogging blocks,
returning false results -- has been "interesting" but may be missing a
useful point. While being a "bad guy" in the RC5-64 effort might be
interesting from a technical viewpoint, it seems to me that it would
quickly bore the short-sighted in.duh.vidual who might try it.
But what about the other distributed processing proposals, like the
proposed distributed chess engine? I can imagine a heated contest between
distributed teams, in which someone gets upset/drunk/stupid enough to
attempt to sabotage the other team's processing by returning incorrect results.
This could be avoided by double-checking all results (like sending each
block twice), but of course that would double the processing time. What if
one query out of each batch were a "check" query, one that is sent to
multiple clients? A non-matching result could flag the administrator to
find out what's going on -- malicious intent or corrupted client? The
productivity loss would be a fraction of the loss incurred by full
double-checking.
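As a rough sketch of what that check-query scheme might look like (this is
my own illustration, with made-up names, in Python): one randomly chosen
unit per batch gets assigned to a second client, and a mismatch between
the two answers flags that unit for the administrator.

```python
import random

def assign_batch(batch, clients):
    """Assign each work unit to one client, then duplicate one
    randomly chosen 'check' unit to a second, different client."""
    assignments = {unit: [random.choice(clients)] for unit in batch}
    check_unit = random.choice(batch)
    others = [c for c in clients if c not in assignments[check_unit]]
    assignments[check_unit].append(random.choice(others))
    return assignments, check_unit

def check_passed(results, check_unit):
    """True if every client that worked the check unit agreed;
    a False here is the flag for the admin to investigate."""
    return len(set(results[check_unit])) == 1
```

The cost is one extra unit per batch instead of doubling everything, which
is where the "fraction of the loss" comes from.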
You could come up with any number of variations on this theme; even a
sliding scale of trust (full double-checking for new clients down to little
or no checking for long-time trusted clients).
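The sliding scale could be as simple as a check rate that decays with a
client's verified history; the numbers below are made up for illustration:

```python
def check_probability(verified_units, floor=0.01):
    """Brand-new clients (0 verified units) get every unit
    double-checked; the rate falls toward a small floor as a
    client builds a record of verified results."""
    return max(floor, 1.0 / (1 + verified_units))
```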
Robert Brooks / robertb at geocities.com
http://www.geocities.com/SoHo/4535/graph.html (Wallpaper Heaven!)
http://www.flash.net/~totoro/ (Totoro Consulting / totoro at flash.net)