[rc5] Overclocking (was Re: [rc5] Another approach)

Rebecca and Rowland rebecca at astrid.u-net.com
Mon Jul 7 11:13:27 EDT 1997


>Date: Sun, 06 Jul 1997 18:59:36 -0500
>From: "Austin T." <austhomp at iastate.edu>
>Subject: Re: [rc5] Overclocking (was Re: [rc5] Another approach)
>
>At 01:23 PM 7/6/97 -0600, you wrote:
>>
><snip>
>>	I have a concern about increasing the proportion of clients
>>running on overclocked machines; possibly due to my ignorance of the
>>overclocking experience.
><snip>
>>	However, in other contexts, accuracy and reliability of results
>>is important.  If I were in charge of an expensive, time-consuming
>>project, I would either use the most reliable equipment available to do
>>the processing--or use faster, less-reliable equipment to do the
>>processing TWICE (on the notion that squaring the small probability of
>>heat-related errors generates sufficient certainty).  Any numbers on
>>reliability of overclocked processors? Even a 1-in-a-trillion
>>probability of undetected errors could render uncertain a significant
>>number of potential keys.
>>
>>	- Richard "Let's go through 2^56 keys.  Twice!" Ebling
>>
>
>That is a very honest concern, and in fact what you are supposed to do,
>according to Tom's Hardware Page, is to run rigorous tests to see if the
>machine is stable.  This includes many benchmark/testing programs such as
>Winstone 97.  Depending on what you are doing, you can be as rigorous as
>you want; it's basically "burning in" your computer again to find any
>problems.  Also, Windows 95 itself is a pretty good detector for these
>errors: while you may be able to run DOS and DOS games at really high
>speeds, Windows will not so eloquently tell you that it just isn't going
>to happen.
>
>Personally, I have had no errors besides one I accidentally caused myself
>since I overclocked my CPU.  I tried a few other combinations, and my 133
>was quite stable running at 150 and 166 too.  For kicks I tried 225 (3 *
>75), and no go, so I upped the voltage from STD to VRE (I think that's
>it...) and it worked, but my video card didn't want to run at the higher
>PCI bus speed.  That's another potential problem with overclocking: cards
>that aren't up to the extra performance.  The memory must be up to it too;
>if it is, it runs faster (when you raise the bus speed, that is).
>
>Right now I'm contemplating whether 187.5 is too much for my 133; it runs
>just a little hotter than at 166, and at 150 the change in temperature is
>barely noticeable.
>
>I read somewhere (I might still have the book) that Intel (allegedly) makes
>all the classic Pentiums from one die and then tests them, marking each one
>with the speed it is capable of.  Well, it goes on to point out that, if
>there is more demand for slower chips and the supply of slower chips runs
>out, they will just mark higher-performing chips as that slower speed, so
>those chips never run at their full potential.  This is why not every CPU
>can successfully be overclocked: some really are only capable of their
>marked speed, while others are better but happened to be marked for a
>slower speed that was in more demand.
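
As an aside on the arithmetic in Richard's quoted concern: with 2^56 keys,
even a tiny per-key error probability adds up to a lot of keys, while
checking everything twice roughly squares it away.  A rough sketch in
Python (the 1-in-a-trillion figure is Richard's hypothetical, not a
measured error rate for overclocked machines):

# Back-of-the-envelope on the quoted numbers; purely illustrative.
KEYSPACE = 2 ** 56        # total RC5-56 keys
p_error = 1e-12           # assumed chance any one key is checked wrongly

# Keys whose result we'd expect to be wrong if each is checked once.
print(KEYSPACE * p_error)        # ~72,000 keys
# Check every key twice independently: a key is missed only if *both*
# checks go wrong, so the per-key probability is roughly p_error squared.
print(KEYSPACE * p_error ** 2)   # ~7e-8 keys -- effectively none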

AFAIK, that account of how Intel marks its chips is more-or-less right, but...

When you make something like the Pentium, you start out with a disc of
silicon called a wafer - typically 8 inches in diameter these days.  You
make several chips on a wafer (anything from half a dozen to 65,000
depending on the chip).  Each chip is called a die.  The thing that
produces the pattern of circuitry on the wafer is called a mask (you have
one for each circuit layer on the wafer) - the way it works is just like
making a PCB - you coat the wafer with photographic emulsion, expose it to
the pattern by shining light through the mask, and rinse off the unexposed
emulsion (or rinse off the exposed emulsion, depending on the type).  You
then pass the wafer through one processing step, laying down a bit more
circuitry, rinse off the remaining emulsion, and repeat.

The final steps lay down aluminium contact pads to attach the leads to, and
then a layer of glass over the whole lot (except the contact pads) to
protect the circuitry from contamination.

Once the processing's complete, and before the wafer's chopped up into
individual chips, you run it through a wafer test.  This individually tests
each chip *before* packaging.  Now then, characteristics change a bit
because of sawing and packaging, but I suspect this is where the
microprocessor makers decide what to mark the chips as, because wafer
testing is fully automated, very reliable, and much easier than testing a
packaged chip in some respects.  But I might be wrong about where the
decision is made.

Whatever, any sane chip maker will do this: test each IC to make sure that
it's working within the required specs.  If it passes, it's packaged and
sold.

Now, for something like the original Pentiums, what I think Intel did was this:

Use one mask for all of them.

Test each one - those which tested okay at (I forget the original speeds)
90MHz were marked good for 90MHz; those which tested okay at 60MHz were
marked good for 60MHz.  Those which failed the DC tests or both of the
dynamic tests were marked bad, and perhaps analysed to discover what went
wrong with the processing, but mainly thrown away.

The decision about which speeds to test at and sell the chips as capable of
working at is based on lots of factors - firstly, you design a chip with an
operating speed in mind.  You have a pretty good idea what the process you
are designing for is capable of doing, so you've got a good idea that this
design will work at this speed.  But there are complications of yield - the
bigger the chip, the fewer will work optimally.  So you might find that
whilst your process can, in theory, produce all this circuitry to a
standard that will allow it to run at, say, 110MHz, defects in processing
mean that only 5% of the chips actually work at that speed, whilst 20% of
them will work at 90MHz, and a further 20% will work at 60MHz.  And a
further 20% will work at 10MHz, which no-one wants, and the remaining 40%
just don't work at all.
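
To put rough numbers on that (purely illustrative - the percentages are
the made-up ones above, and the 300 dice per wafer is just an assumed
figure), a quick sketch in Python:

# Illustrative yield arithmetic using the made-up percentages above.
# The 300 dice-per-wafer figure is an assumption for the example only.
DICE_PER_WAFER = 300

yields = [              # highest speed each die turns out to be capable of
    ("110MHz", 0.05),
    ("90MHz",  0.20),
    ("60MHz",  0.20),
    ("10MHz",  0.20),   # works, but too slow for anyone to want
    ("dead",   0.40),   # fails the DC or dynamic tests outright
]

for grade, fraction in yields:
    print(grade, "~", int(fraction * DICE_PER_WAFER), "dice per wafer")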

So what you do is you make a decision to mark your chips as capable of a
certain speed, and ensure that they all will work at that speed.  The point
is that all microprocessors have been tested to ensure that they at least
meet the specified requirements to work at a certain speed - virtually all
of them will work perfectly well at a higher speed (and because the
testing's not perfect, some of them won't actually work at the specified
speed at all).  Exactly how much faster is completely unpredictable - Intel
could probably tell you (but won't) what the highest speed you could expect
from the very best Pentium is, but they couldn't tell you what that
particular one could do without testing it.

Given that the firms making Intel-like microprocessors are battling
against Intel, it makes sense for them to bring out higher-speed
microprocessors.  You could do this using the same mask and process as
Intel, but testing at 110MHz, 90MHz, and 75MHz as well as 60MHz.  You'll be
able to sell premium 110MHz chips, and the intermediate 75MHz chip, and
look like you've got faster chips than Intel, but you haven't - you've got
exactly the same chips, just tested at higher speeds and marked when
capable of working at those speeds.
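
To make the binning logic concrete, here is a small sketch in Python; the
speed grades and the 112MHz figure are hypothetical numbers for the
example, not anyone's real product line:

# Speed binning in miniature: the same die gets a different marking
# depending only on which grades the vendor chooses to test at.
def mark_speed(true_max_mhz, tested_grades_mhz):
    """Return the highest tested grade the die passes, or None if it fails all."""
    passing = [g for g in sorted(tested_grades_mhz, reverse=True)
               if true_max_mhz >= g]
    return passing[0] if passing else None

die_max = 112                          # what this particular die can actually do
intel_grades      = [90, 60]           # test at two speeds only
competitor_grades = [110, 90, 75, 60]  # same chips, more test points

print(mark_speed(die_max, intel_grades))       # -> 90  (sold as a 90MHz part)
print(mark_speed(die_max, competitor_grades))  # -> 110 (same silicon, marked faster)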

Of course, Cyrix and AMD might like to get hold of Intel's masks and
processing technology, but Intel certainly won't let them - this is just an
illustration.

Hope this sheds a little light,
Rowland.

P.S.  I don't think this is terribly relevant to rc5 directly - if you'd
like to say more on the subject, please do it by email to me directly.


----
To unsubscribe, send email to majordomo at llamas.net with 'unsubscribe rc5' in the body.


