[HARDWARE] Just Curious......

Marcus Gillette gate at zipcon.net
Thu Sep 16 15:32:04 EDT 1999


This sounds like it may be too complicated to get efficient performance out
of an FPGA.

I read the article at http://www.replay.com/cracking_des/crack-1-4.html .

The EFF decided against FPGAs and instead signed a contract to have custom
chips manufactured.  They built only about 1500 chips for DES, and their
hardware cost, including all the cases and mainboards, was around $120,000
(roughly $80 per chip, boards and all).  If even 10% of distributed.net's
contributors were interested, demand would run well past 1500 chips.  I'd
think this might be the most cost-effective price/performance solution in
the long term.

my two cents.

Marcus



----- Original Message -----
From: Jameel Akari <jakari at fribble.cie.rpi.edu>
To: <hardware at lists.distributed.net>
Sent: Thursday, September 16, 1999 1:32 PM
Subject: Re: [HARDWARE] Just Curious......


>
> On Thu, 16 Sep 1999, Dan Oetting wrote:
>
> > Have any of you tried doing the math?
>
> No, I'll admit to being less than familiar with the algorithm.
>
> > bringing the total to 1696 gates per iteration (the fixed rotate is free!)
> > There are 26 iterations in each of 3 rounds (we're over 132k gates now).
>
> Yes, but do you really need to pipeline it that much?  Could you
> do it with a single 1700-ish gate iterative block, and just cycle it
> around 26 times?  You'd need flags and storage registers, but these are
> fairly abundant on FPGAs.
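>
> 	For reference, the loop those gate counts come from is the
> RC5-32/12 key setup.  A rough C sketch of the textbook algorithm (not
> distributed.net's actual core) looks like this:
>
>   #include <stdint.h>
>
>   #define T  26   /* 2*(r+1) table entries, with r = 12 rounds  */
>   #define KW 2    /* 64-bit key = two 32-bit words              */
>   #define ROTL(x,s) (((x) << ((s) & 31)) | ((x) >> ((32 - (s)) & 31)))
>
>   void rc5_key_setup(const uint32_t key[KW], uint32_t S[T])
>   {
>       uint32_t L[KW] = { key[0], key[1] };
>       uint32_t A = 0, B = 0;
>       int i = 0, j = 0, k;
>
>       S[0] = 0xB7E15163;                       /* P32 */
>       for (k = 1; k < T; k++)
>           S[k] = S[k - 1] + 0x9E3779B9;        /* Q32 */
>
>       /* 3 passes x 26 steps: the 78 iterations discussed above */
>       for (k = 0; k < 3 * T; k++) {
>           A = S[i] = ROTL(S[i] + A + B, 3);    /* fixed rotate: free */
>           B = L[j] = ROTL(L[j] + A + B, A + B);
>           i = (i + 1) % T;
>           j = (j + 1) % KW;
>       }
>   }
>
> 	An iterative block would implement one pass of that loop body
> and feed S, L, A and B back through it 78 times per key, rather than
> unrolling all 78 copies in silicon.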
>
> To get more throughput, there may be room to have several of
> these iterative blocks in parallel.  Pipelining it (serially) will
> replicate a lot of pieces over and over again.
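>
> 	(Back-of-the-envelope: a fully unrolled 78-stage pipeline
> retires one key per clock once filled, while a single iterative block
> retires one key every 78 clocks, so N parallel iterative blocks give
> N/78 keys per clock.  To first order the throughput per gate is
> similar; the pipeline mostly buys clock rate at the cost of all those
> stage registers.)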
>
> > results that carry between the stages. In RC5 there are 29 x 32-bit
> > registers that need to be latched between stages, needing another 2k gates
>
> 	The registers (in the Xilinx 4000 series) are just that: you don't
> have to build them out of individual gates.
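>
> 	(For scale: 29 x 32 bits is 928 flip-flops per stage boundary,
> and an XC4000 CLB provides two flip-flops, so a fully pipelined design
> needs roughly 464 CLBs' worth of registers per boundary; as noted,
> though, those flip-flops come along with the CLBs already used for
> logic.)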
>
> > layout. These gate arrays don't allow random point-to-point connections.
> > They are constrained by the interconnect grid.
>
> 	Right.  Really slow, lossy, nasty interconnect at that, which you
> may not be able to route correctly.  You may get better performance if
> you split the design across several somewhat smaller FPGAs.
>
>
> ------------
> Jameel Akari
> Insert witty comment here
> ------------
> <http://fribble.cie.rpi.edu/~jakari>
> ICQ: 27182003
>
>

--
To unsubscribe, send 'unsubscribe hardware' to majordomo at lists.distributed.net