How does Intel (or AMD) construct CPUs in the same product line to have different clock rates? For example, the i7-3720QM and 3920XM both have the same number of transistors and the same 22nm manufacturing process, but how does Intel build one to be faster and the other slower? Do they start off with the same CPU and somehow reduce the clock speed or take away certain features? Do they change current and voltage? None of the above?
I'm just curious and appreciate any replies.
-
-
They make all their CPUs the same way, but there are always variations from die to die. They run tests to determine how high each chip can go and what features it can support, and label them accordingly. Anything not passing the test for the lowest bin is usually recycled. This process is called binning.
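Purely as an illustration of the binning idea (not Intel's actual process; the SKU names and frequency cutoffs below are hypothetical), a toy sketch in Python might look like this: measure each die's maximum stable frequency, then drop it into the highest bin whose requirement it meets.

# Toy illustration of binning (hypothetical SKUs and cutoffs, not Intel's real criteria).
# Each die is tested for its maximum stable frequency, then assigned to the
# best SKU it qualifies for; anything below the lowest bin is recycled.
BINS = [
    ("i7-3920XM", 3.8),   # best bin first (minimum stable GHz)
    ("i7-3820QM", 3.7),
    ("i7-3720QM", 3.6),
]

def bin_die(max_stable_ghz):
    """Return the best SKU this die qualifies for, or None (recycle)."""
    for sku, required_ghz in BINS:
        if max_stable_ghz >= required_ghz:
            return sku
    return None

for measured in (4.1, 3.75, 3.65, 3.4):
    print(measured, "->", bin_die(measured) or "recycle")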
-
They burn a tiny bit of memory into the chip that specifies the settings. They actually put very few CPUs through any tests to determine their performance; they are very good at guessing it. Chances are that even the lowliest processor in a series, if given the settings of the highest, will function just fine. Intel and AMD determine how many of each processor to make based on demand, not on the ability of the processors. When they bin, first priority goes to Extreme chips, then mobile chips, then desktop chips. In some cases, Intel or AMD also disable (or have disabled) parts or features of CPUs that were non-functional in order to salvage the product.
-
To put it simply, all CPUs are manufactured as Extreme Editions (or top of the line in product lines where Extreme Editions don't exist). Some come off the assembly line with manufacturing flaws that prevent them from reaching their full potential, but the vast majority are purposely crippled by Intel/AMD to meet market demand at lower price points - after all, not everyone is willing to pay $1000 for a CPU.
-
-
Thanks for the replies. They helped.
-
-
I JUST took a class at university that covered this stuff. Basically it is all about logic. Every chip can be overclocked; to what extent depends on how it was manufactured. The "clock" is a signal that paces all computation in a CPU. If the clock is too slow, you lower the computation speed and your CPU feels sluggish. If the clock is too fast, the internal signals, which depend on the clock, will not be stable before the next clock cycle arrives. Each clock cycle kicks off the next operation, so if the previous signals haven't settled yet, you are basically telling the chip to do more than it physically can in the time given.
Think of it like asking you to run to the grocery store and then take the car in for repairs, except that I am forcing you to drive the car to the mechanic sooner than it is physically possible for you to get back from the grocery store. It all goes aaaaaaaaaaarrggghhhh and crashes.
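To put a rough number on the "signals must settle before the next cycle" idea, here is a textbook-style simplification (my own sketch with made-up delay values, not a model of any real Intel process): the maximum clock rate is set by the slowest path through the logic, so a die whose transistors came out a bit slower has a lower safe clock.

# Rough sketch of the timing constraint behind clock speed (textbook
# simplification with made-up numbers, not any real process data).
# A signal must leave a flip-flop, pass through the logic and settle
# before the next clock edge:  T_clock >= t_clk_to_q + t_logic + t_setup

def max_frequency_ghz(t_clk_to_q_ns, t_logic_ns, t_setup_ns):
    """Maximum clock frequency allowed by the critical (slowest) path."""
    min_period_ns = t_clk_to_q_ns + t_logic_ns + t_setup_ns
    return 1.0 / min_period_ns   # a 1 ns period corresponds to 1 GHz

print(max_frequency_ghz(0.05, 0.20, 0.03))   # ~3.57 GHz for a "fast" die
print(max_frequency_ghz(0.05, 0.23, 0.03))   # ~3.23 GHz for a "slower" die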
CPUs are not "manufactured" for a specific speed. They are designed for a specific speed, and they can be run at a variety of speeds. The 3720QM can run at the same speeds as the 3820QM without any problems. However, then Intel wouldn't have two brackets to make money off of. -
HopelesslyFaithful Notebook Virtuoso
i wish they would hack off a lot of their product line...utterly pointless...why the hell do you make a B940? Just make an i3. Even the i3 is cheap and lackluster. Why make a chip that is even worse?
-
niffcreature ex computer dyke
The B940s are probably the seriously botched chips.
To put it simply, CPUs are NOT manufactured to be a certain speed at all. They are manufactured kind of like car parts: imprecisely. Then some crazy car guy comes in, measures all of them precisely to find the perfect parts, and builds a super efficient, super fast car.
Funny, because this actually happens with cars.
What bothers me the most is that the Extreme CPUs are NOT necessarily top of the batch; they just have unlocked multipliers, which Intel could do to an i3 if they wanted to and call it an i7-3960xm. -
HopelesslyFaithful Notebook Virtuoso
-
Each chip is not identical though, and they don't actually decide in advance what speed a given CPU runs at. My old lecturer went on a guided tour of a manufacturing plant, and essentially they make the chips to a specific specification, and after production the chips will run at different speeds according to each chip's individual flaws/characteristics/whateveryouwanttocallit resulting from the manufacturing process. They then test 10% of all chips off the production line to get the "range" at which that series runs, use the "average" as the specified speed, round it to the nearest whatever, and then every chip in that series is adjusted to that speed (some are overclocked, some are downclocked).
This is why, when you have two identical chips, one may overclock higher than the other: that specific chip may have been capable of a higher clock rate originally and was down-clocked to specification, while the other may have been lower and had to be overclocked to reach specification, which means it ultimately cannot overclock as far. -
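I'm not certain the sample-10%-and-average detail is exactly how it is done, but taking the post above at face value, the rating step would look roughly like this (hypothetical numbers):

# Sketch of the "test a sample, average, round, rate the whole series" idea
# described above (taking that post at face value; made-up numbers).
import random

random.seed(0)
chips = [random.gauss(3.7, 0.15) for _ in range(1000)]   # natural max GHz per chip

sample = random.sample(chips, len(chips) // 10)   # test ~10% off the line
average = sum(sample) / len(sample)
rated_speed = round(average * 10) / 10            # round to the nearest 0.1 GHz

print("rated speed for the series:", rated_speed, "GHz")
# Every chip then ships at rated_speed: dies whose natural limit was higher
# keep extra overclocking headroom, the rest have little or none.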
-
They do bin based on quality/performance though. That's why Extreme chips are so expensive: they are the cream of the crop. A lot of the time they can predict to some extent what they're going to get, but it's all based on quality control, not design.
-
HopelesslyFaithful Notebook Virtuoso
@Qing how do you know this?
-
-
-
-
I was told the following -
A contraption exists into which each chip is inserted by machine (not by hand), and its power draw, heat dissipation and overclocking margin are checked. The entire process takes under 2.38 seconds per chip (not including mechanically moving the chip from the assembly line to the said contraption and placing it back). -
-
This is a download link to a PDF from Intel. One of the last couple of slides talks about testing. That being said, it is from Intel and could potentially be extremely biased, but considering it looks like a presentation given to some organization, I doubt it. Again, it LOOKS like one; it may not be. Everything said by me here, except the link itself, is speculation and has no sources.
-
Meaker@Sager Company Representative
Certainly all chips must be tested, especially those heading for the mobile market, for leakage.
There is also a maximum design speed based on the current technology, and simulations are done based on the known process, with frequency targets set during the design.
The quality of the wafers used can also have an impact: first-run Turion X2s could run at much lower voltages than their later brothers because the best wafers were used for the first chips.
Intel could never risk a CPU not being tested. -
Yeah, ever heard of a defective CPU? While they do exist, they are very rare. Besides, at ~3 seconds to test a chip, and with picking it up, placing it in the tester and putting it back all automated, testing a chip likely takes mere seconds anyway. It would be bad for Intel not to test their CPUs for problems. Note that I do not work in that field, so it's mere speculation on my part, while others I'm sure could back themselves up with sources. However, from an engineering perspective, it doesn't make sense not to test them at least at the frequency at which they want to sell them.
I'm inclined to trust that PDF from Intel; it's probably written to show Intel in the best light, but I don't doubt the legitimacy of the info in there.
That said, if most of their CPUs pass the tests for the higher bins, then they can sell them as whatever other CPU they want, as long as it's below that bin. If I wanted to minimize the amount of testing done, that's how I'd do it. -
By Intel testing "very few" processors I meant putting processors through their paces and testing the full capabilities of the chips. I did say that Intel tests every chip to see whether it works or not, but very few chips are given further testing to determine the full capabilities and limitations of the chip. It also depends on how good the yields are and how well they are able to predict the full nature of all the dies coming off the assembly line. They are trying to manufacture as many processors as they can in the least amount of time and at the lowest cost, and anything more than cursory testing of every chip does not help that. I could be wrong though, because everything I know is at minimum a few years dated. I just assumed things really wouldn't have changed much.
-
Meaker@Sager Company Representative
It never worked that way; each chip goes through that power/leakage and frequency test. Extensive testing is done to ensure the validity of the quicker test, much like safety tests on a car. However, no matter how good you are, you have to test every chip at its rated frequency and TDP.
There are design limits, clock targets and binning with artificial limitation for market demand. -
Actually this industry is extremely inefficient, comparatively. Only about 25-30% of the chips that come off the production line are actually as they were supposed to be.
Let's assume the production line is manufacturing the desktop 7970. Defects usually are not widespread; they are small, a couple hundred microns wide. What do you do? You disable that region. There is a very high probability that that region is a small part of an entire CU. What happens when you disable one CU on a desktop 7970? You get the desktop 7950! It is sneaky, but it works. It is not illegal, it violates nothing, it gives them profit, and it gives customers a cheaper option.
Sometimes defects give rise to instability in the logic or to more power dissipation. So what do you do? You clock it lower, call it the 3720QM instead of the 3820QM, and you have an entire new product that you spent no extra resources developing.
It is also because of these defects that slick's 7970m can be benched stable above 1000/1600 and mine crashes above 985/1575.
That being said, you could never have done this if each and every chip was not tested. -
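A toy way to picture that salvage logic (my own sketch with made-up pass/fail thresholds, not AMD's or Intel's actual criteria): test each die, fuse off a defective compute unit and/or lower the clock, and ship it as the cheaper SKU instead of scrapping it.

# Toy sketch of die harvesting (made-up thresholds, not real criteria).
def salvage(working_cus, stable_ghz):
    """Decide which hypothetical GPU SKU a tested die can ship as."""
    if working_cus >= 32 and stable_ghz >= 0.925:
        return "7970"                      # fully working die
    if working_cus >= 28 and stable_ghz >= 0.800:
        return "7950"                      # defective CU(s) fused off, lower clock
    return "scrap / further salvage"

print(salvage(32, 0.95))   # 7970
print(salvage(30, 0.90))   # 7950
print(salvage(25, 0.70))   # scrap / further salvage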
Even Ivy Bridge, with all the manufacturing issues accompanying the transition to 22nm, can easily hit 4 GHz. -
If Intel sold the 2600K at 4 GHz stock, then no one would purchase their Extreme Edition CPUs. Very few people care to go above 4 GHz, simply because there are very few applications that use it. If someone needs that much processing power and has the money to afford it, chances are he or she is in a position to offload the work onto a server.
The failure rate of chips off the assembly line is under 35% regardless of who manufactures them. -
You have no idea what Intel's yields are. None at all. Yields can vary greatly and depend on many factors.
I don't know where you get your 35% failure rate from, but I can assure you that it is wrong. In fact I can even prove it. If you have a wafer with a single defect on it, and the dies are very large so that there are 10 dies per wafer, that is 10% of the dies that are bad. If you have another wafer with a single defect on it, and there are 100 dies on the wafer, that is only 1% of the dies that are bad. There is a lot more to it than that, but this is an easy way to see how it is not some simple figure. -
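To illustrate that die-size point with a standard back-of-the-envelope model (a simple Poisson yield estimate with made-up numbers, not anyone's real production data): at the same defect density, a bigger die loses a much larger fraction of the wafer.

# Back-of-the-envelope illustration of why yield depends on die size
# (simple Poisson yield model; hypothetical numbers).
import math

def poisson_yield(defect_density_per_cm2, die_area_cm2):
    """Fraction of dies expected to have zero defects, assuming random defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

for area in (1.6, 3.5):   # cm^2: roughly "CPU-sized" vs "big-GPU-sized" dies
    print(f"{area} cm^2 die -> {poisson_yield(0.4, area):.0%} estimated yield")
# Same defect density, very different yields, which is why no single figure
# like "30%" applies across the board.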
LOL. You know man, I really don't get you. I ain't arguing with you anymore. I am paid a large amount of money every month because I know this stuff. Not because I pull numbers out of my backside. All I did was a short google search. Took me 5 seconds. I just don't want wrong knowledge about my field being spread on a popular forum.
-
You are saying that yields are 25-30% when in fact that is not the case, and yields can vary greatly. And also the amount of rigorous testing and binning varies greatly. I really don't know what makes you think this stuff is so stable.
And look, you even completely contradicted yourself here:
-
Meaker@Sager Company Representative
You are asking for hundreds of billions of transistors to work flawlessly.
I'm sorry, but you seem to have very little understanding of how things actually work.
Also, I can tell you Intel is not operating at a 30-40% yield; if they were, people would be fired.
Yields are also affected by chip size and clock targets, so yes, while high-end graphics chips may be down to the 30-40% level, the 7770 and 7870 chips will be much higher. -
Stick to the facts; the way this thread is going now, it'll get locked pretty quickly.
-
mav,
The source for the 30% yield from large wafers on the wiki page was this article - AnandTech - The RV770 Story: Documenting ATI's Road to Success. A CPU / GPU will be mostly made of transistors, registers, etc., but is the substrate used by AMD/ATI exactly the same as Intel's? Would the fact that this was a GPU vs. a CPU matter?
Unfortunately, I'm a software guy, not a hardware guy, so I don't know the answers to these questions.
In regard to what yields per wafer may be, here's a dated article which suggests 70% yields are sufficiently high - Production Yield Per Wafer & Pricing : Intel's New Weapon: Pentium 4 Prescott -
Meaker@Sager Company Representative
No, 85% was possible, 70% was acceptable. And this was Prescott, perhaps Intel's lowest moment architecture-wise for power consumption and leakage.
-
From that article on Tom's Hardware, you can see that they didn't try to pretend they knew exactly what it was, and they understood that it is quite variable.
-
My guess is manufacturers / fab facilities keep those numbers protected for competitive advantage.
I'm surprised how this thread has transitioned from testing CPUs to binning them (which means they are definitely tested for clock frequencies / heat dissipation), and now from binning to wafer yields.
What a strange day. -
Meaker@Sager Company Representative
Well, the original topic is actually technically about the design phase rather than the testing phase, as that is the only time a CPU is designed to hit a frequency.
The process may be able to meet it, it may fall short or it may exceed it.
As for scaling, CPUs don't scale perfectly with frequency; there is a performance curve, and having the right amount of bandwidth/cache to feed certain speeds is just as important as the process.
So basically your question is too simple? Lol. -
HopelesslyFaithful Notebook Virtuoso
yep, I am reading this but taking everything with a grain of salt
-
Looks like my warning went unheeded by a few, thus thread closed.