[RC5] Rack stuffed full of motherboards ?

Joe Zbiciak j-zbiciak1 at ti.com
Mon Apr 27 07:19:54 EDT 1998


'Sanford Olson' said previously:
| 
| Another idea as far as power supplies, network cards, etc goes is to find
| all those 286 and 386 PC's that most companies have sitting in the basement
| (you could go door to door :)), gut them, and use power supplies, diskette
| drives and video cards from them in your rack.  Power supplies from clone
| 286's will work fine with Pentium-class Baby-AT motherboards.  Instead of
| boot ROMs on the network cards, you could use boot diskettes.  Also,
| there's probably plenty of old coax Ethernet network cards around as well
| (I know I have 8 or so lying in a box somewhere).

Of course, these old power supplies take tons of power just to run 
themselves.  :-)  That could give you a heat problem. 

You have hit on a good idea, though, which is booting from floppy.
Floppy drives are darn cheap these days and most modern motherboards
have a floppy controller on-board.  If you're lucky, they also have a
serial controller on-board, meaning you can hook all these together
with PPP over serial.  (Yes, NFS over a 5K/sec line is slow, but it's
good enough if you keep file accesses to a minimum.  I used to use this
*over a modem* for my /bin and /usr/bin directories, and it worked well
enough. :-)  You could keep the RC5 binary and buffer files on a RAM
disk, and just access the Linux stuff (e.g. /bin, /usr/bin) over NFS
if/when it's needed.  The core PPP stuff would need to be on the boot
floppy (pppd, chat, ifconfig, route, and some /etc files).
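
For what it's worth, here's a rough sketch of what the startup script
on a slave's boot floppy might look like.  The device names, addresses,
mount points, and the client binary name are all made up for
illustration, and the exact pppd options depend on your pppd version:

    #!/bin/sh
    # rc.local on the slave's boot floppy (hypothetical example)

    # Carve out a small RAM disk for the RC5 client and its buffers
    # (assumes a /ramdisk mount point exists on the floppy's root fs).
    mke2fs -q /dev/ram0 4096              # 4 MB ext2 fs in RAM
    mount /dev/ram0 /ramdisk

    # Direct (null-modem) PPP link to the master on ttyS0.
    # "local"   -- no modem control lines
    # "noauth"  -- don't demand PAP/CHAP from the master
    # "persist" -- bring the link back up if it drops
    pppd /dev/ttyS0 115200 192.168.1.10:192.168.1.1 \
         local noauth persist crtscts defaultroute

    # Give pppd a few seconds to bring up ppp0, then NFS-mount the
    # shared binaries read-only (small rsize keeps the link usable).
    sleep 10
    mount -t nfs -o ro,rsize=1024,wsize=1024 192.168.1.1:/usr /usr

    # Copy the client onto the RAM disk and start it.
    cp /usr/local/rc5/rc5des /ramdisk
    cd /ramdisk && ./rc5des &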

Now, you could take several approaches for making the PPP links:

(a) You could daisy-chain them.  

    Pros:  Each machine likely has two serial ports, including the 
           "master" machine that has a real network connection and real
           hardware.  This approach requires no add-in cards in any machine.

    Cons:  If one machine in the chain goes down, then all the machines
           past it become inaccessible.  Bandwidth to the machines
           deepest in the chain also suffers, since their traffic has
           to be forwarded hop by hop through every machine between
           them and the "master" host.

(b) You could arrange them in a "star" configuration.

    Pros:  Reliability:  Each of the "slave" hosts depends only on the
           master machine.  Bandwidth is also a lot better (although
           5k/sec - 10k/sec doesn't go *that* far.  :-)

    Cons:  You need some sort of multiport serial card in the master, 
           and a driver to support it.

(c) Mixed system -- a little ethernet, a little serial-net, and so on.
    If you really had a lot of slave nodes, you could consider a mixed
    approach like so:

    Master Node  -- your main, fully featured PC.  Has ethernet, and dialup
    Middle Nodes -- Have ethernet, four serial ports, and maybe a small HD.
    Slave Nodes  -- Have a floppy drive and a serial port.

    You could hang four slave nodes off of each middle node, and then
    plug all the middle nodes into your ethernet hub.  The middle nodes
    have four serial ports in this scenario, assuming 2 on-board and 2 on
    a cheap add-in card.  (A rough sketch of the middle-node setup
    follows this list of options.)

    Pros:  Reliability:  This configuration has almost the same level of
           reliability as the "star" configuration.  Bandwidth is better,
           especially from the middle nodes to the master.  (Perhaps the
           middle nodes could run pproxies?)  Packing density:  Only the
           middle nodes need expansion cards -- the slave nodes probably
           do not (if they have on-board serial ports).

    Cons:  Complexity:  The system is now heterogeneous.  Cost:  You are
           now buying and supporting an ethernet (although you only need
           one card for 5 machines) and you need to buy one serial card for
           every ethernet card you buy.

(d) Smaller mixed system -- same as above, but skip the extra serial card.

    Pros:  Cheaper than (c).  Also less problematic, because finding 
           serial cards that can be put on IRQs other than 3 or 4 isn't
           always easy.

    Cons:  Lower packing density, since now you have 1 "carded" machine
           out of 3, instead of 1 out of 5.
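
For (b), (c), or (d), the master or middle node side is mostly a matter
of running one pppd per serial port and letting that box forward packets
for its slaves.  A rough sketch, again with made-up addresses, and with
hypothetical port/IRQ settings for the add-in card (check your card's
jumpers):

    #!/bin/sh
    # On a "middle" node: four serial ports, one slave on each.

    # Move the add-in card's ports off IRQ 3/4 (values are made up).
    setserial /dev/ttyS2 port 0x3e8 irq 5
    setserial /dev/ttyS3 port 0x2e8 irq 9

    # Let this box route between the slaves and the ethernet
    # (2.1/2.2-era kernels; on 2.0 forwarding is a compile-time option).
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # One pppd per port; each slave gets its own point-to-point
    # address (192.168.1.1$PORT expands to .10, .11, .12, .13).
    for PORT in 0 1 2 3; do
        pppd /dev/ttyS$PORT 115200 192.168.1.1:192.168.1.1$PORT \
             local noauth persist crtscts
    done

The daisy-chain case (a) is the same idea, except every machine in the
chain has to forward packets and carry routes for everything behind it.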


There's another issue to consider here, which is thermodynamics.  If you
pack these boards close together, you could run into a heat problem if you
use power-hungry CPUs.  You may want to consider having a box fan blowing
across each group of motherboards.  :-)
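
Just to put a rough number on that (the per-board wattage is only a
guess for a Pentium-class board with CPU, RAM, and a floppy -- measure
yours):

    ~40 W/board  x  20 boards  =  ~800 W

That's most of a space heater's worth of heat coming out of one rack,
so the box fans aren't entirely a joke.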


Thoughts?


--Joe

-- 
  +------- Joseph Zbiciak ------+
  |- - - j-zbiciak1 at ti.com - - -|   without you, everything falls apart
  | -Texas Instruments, Dallas- |   without you, it's not as much fun
  |- - #include <disclaim.h> - -|   - NIN -      to pick up the pieces. 
  +-----------------------------+


