[Hardware] The market of ASICs (One GigaKey / Second?)

jbass at dmsd.com
Wed Aug 11 03:47:52 EDT 2004


david fleischer <cilantro_il at yahoo.com> writes:
> The limiter is not the semiconductor, but other
> factors. The random noise in the carriers goes down,
> theoretically until absolute zero. Devices that work
> on electric fields continue to work, only better.

For a while I worked with MRI machines, and part of what
was in the back of my head was that the metal layers might
go superconducting and cut the waste heat, as they do in the
field magnets. I'm not sure whether the coefficient of
expansion mismatch would cause the die to fracture at the
metal/semiconductor bonds.
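
Back-of-envelope, the mismatch strain is significant. A rough
Python sketch; the CTE values are room-temperature textbook
figures (both coefficients fall toward zero near 0 K, so this
overstates the real strain):

# Thermal mismatch strain between copper metallization and a
# silicon die when cooling from room temperature to ~10 K.
cte_cu = 16.5e-6   # copper CTE, per K (approx. room-temp value)
cte_si = 2.6e-6    # silicon CTE, per K (approx.)
dT = 300 - 10      # cooling from ~300 K down to ~10 K

strain = (cte_cu - cte_si) * dT
print(f"mismatch strain ~ {strain:.2%}")   # ~0.40%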

> At low temperatures the moisture from the air comes
> out and bathes the chip in a puddle of water, thus it
> needs to be constantly purged. Eventually what fails
> is the package, or the bond wires, or both.

It was pretty clear I'd have to leave the system in a vacuum
for a while to pull the moisture out of the packaging and
board, then transfer it to a sealed tub on the other side
of tubing also chilled to 10 K or so, to condense everything
out before it got to the tub.
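
Even a small sealed volume of room air carries a fair amount
of water that the cold trap has to catch. A quick sketch with
assumed temperature, humidity, and tub volume:

import math

T_c = 25.0     # room temperature, deg C (assumed)
RH = 0.50      # 50% relative humidity (assumed)
V = 0.0283     # tub volume ~ 1 cubic foot, in m^3 (assumed)

# Magnus approximation for saturation vapor pressure, in Pa
p_sat = 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))
p_h2o = RH * p_sat

# Ideal gas law: mass = p*V*M / (R*T)
M, R = 0.018, 8.314            # kg/mol for water; J/(mol K)
mass_g = p_h2o * V * M / (R * (T_c + 273.15)) * 1000
print(f"water vapor in the tub: ~{mass_g:.2f} g")   # ~0.32 g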

> I made a theoretical calculation for 10 degrees K, but
> in practice the cooling will be much less. In an FPGA
> the routing necessarily needs to be substantial and it
> will tend to dominate the delay. Thus the speed-up
> will be modest. Perhaps less than 10%.
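
That squares with a quick Amdahl-style check: if only the
logic portion of the critical path speeds up, the routing
caps the win. The fractions below are illustrative guesses,
not measured FPGA numbers:

f_logic = 0.3   # fraction of path delay in logic (assumed)
gain = 1.5      # assumed logic speed-up from cooling

# Routing delay (the other 70%) is assumed unchanged.
new_delay = (1 - f_logic) + f_logic / gain
print(f"overall speed-up: {1 / new_delay - 1:.1%}")   # ~11%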

The proof-of-concept design I'm working with for a home
reconfigurable super is 3,500 FPGAs, 1,500 ZBT RAMs,
2,500 SDRAMs, and 1,750 MIPS/PPC processors in a cubic foot
or less.  The power density *IS* the most serious design
problem, along with the waste heat. Anything I can do to
cut the waste power is a huge plus. The current plan is to
use copper plates, milled to conform to the components on
both sides, as the power/ground planes *AND* heat sink, with
water cooling off the edges of the stack.
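
For scale, the kind of back-of-envelope I've been doing on
the total power; the per-part wattages are guesses for
illustration, not datasheet values:

# Rough power budget for the stack (all wattages assumed).
parts = {
    "FPGA":    (3500, 3.0),   # (count, watts each)
    "ZBT RAM": (1500, 0.5),
    "SDRAM":   (2500, 0.5),
    "CPU":     (1750, 2.0),
}

total_w = sum(n * w for n, w in parts.values())
print(f"total: ~{total_w / 1000:.0f} kW in ~1 cubic foot")   # ~16 kW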

If I can cut the waste heat and pick up 10%, that's probably
a design win for this toy. The primary win is keeping all
the system interconnects under a few inches, so the interconnect
latencies are minimal to non-existent compared to a room
full of Fibre-connected clusters.
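
Rough flight-time numbers, assuming ~6 in/ns in FR4 and
~8 in/ns in fiber, with illustrative distances (and real
cluster latency is dominated by NIC and protocol overhead
on top of this, typically microseconds):

trace_in = 3.0    # inches of board trace (assumed)
fiber_ft = 50.0   # room-scale fiber run, feet (assumed)

t_trace = trace_in / 6.0        # ns, ~6 in/ns in FR4
t_fiber = fiber_ft * 12 / 8.0   # ns, ~8 in/ns in fiber
print(f"trace: {t_trace:.2f} ns, fiber: {t_fiber:.0f} ns")   # 0.50 vs 75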

The target architecture is a chained hypercube/torus structure
with XC2VP50s for external I/O. Along the edges of the
8x8 PE boards is a dense parallel stacked-board interconnect
on alternating edges, forming the chain through the
hypercube/torus. That plus a couple of terabytes of Fibre
Channel hard drives should make a fun home computer for
thesis research.
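
The wrap-around addressing on an 8x8 PE grid is simple index
math; a toy sketch (the real interconnect chains the board
edges, this just shows the torus wrap):

N = 8  # 8x8 PE grid (from the board layout above)

def torus_neighbors(x, y, n=N):
    """North/south/east/west neighbors with wrap-around."""
    return [((x + dx) % n, (y + dy) % n)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

print(torus_neighbors(0, 7))   # [(1, 7), (7, 7), (0, 0), (0, 6)]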

I have most of the parts already; I'm just shy a bit of
memory and the high-density stackable connectors. Another
couple grand of PCBs and it should start getting built - my
toy budget has been suffering a bit lately.

John

