Question... why do GPUs lag behind CPUs in die size (process node)? They seem to milk the crap out of each die size.
-
HopelesslyFaithful Notebook Virtuoso
-
that's a good question
-
H.A.L. 9000 Occam's Chainsaw
Well, Intel has fooled everyone into thinking that moving down die sizes is easy. It's not. It's very difficult and very expensive.
Difficult and expensive aren't things investors want to hear. So when they actually DO move down a die size, they make sure to milk it for all it's worth.
Plus you have to remember, there are more transistors on a high-end GPU than there are on a current high-end CPU: 3.5 billion for the GTX 680 vs. 995 million for the 2960XM. More transistors = larger physical die = more $$$. Also, since the dies themselves are so large, when you have bad yields, that becomes VERY costly VERY quickly.
EDIT: I compared a desktop GPU to a mobile CPU. So to make it more fair, GTX 680 = 3.5 billion transistors vs. 2.27 billion with the i7-3960X. Those are the cream of the current crop. -
As always, it boils down to money. Larger die size = fewer dies per silicon wafer = higher cost per chip.
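To put rough numbers on "larger die = fewer chips per wafer", here's a back-of-the-envelope sketch using the common gross-dies-per-wafer approximation. The die areas are approximate public figures (GK104 is roughly 294 mm², a quad-core Sandy Bridge roughly 216 mm²), and the formula ignores scribe lines, edge exclusion and yield, so treat it as illustrative only.

```python
# Rough gross-dies-per-wafer estimate (ignores scribe lines, edge exclusion, yield).
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Common approximation: wafer area / die area, minus an edge-loss term."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Approximate die areas: GK104 (GTX 680) ~294 mm^2, quad-core Sandy Bridge ~216 mm^2.
for name, area in [("GTX 680-class GPU die", 294), ("quad-core CPU die", 216)]:
    print(f"{name} (~{area} mm^2): ~{gross_dies_per_wafer(area)} candidates per 300 mm wafer")
```

Roughly 200 GPU-sized candidates vs. roughly 280 CPU-sized candidates per wafer, before yield even enters the picture.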
-
HopelesslyFaithful Notebook Virtuoso
Um, from what I remember it's cheaper to make a CPU on a smaller process... the smaller the die, the more you can fit on a single wafer. So a 22 nm chip with 1 billion transistors is cheaper than a 32 nm chip with 1 billion transistors, because you can fit more CPUs on a single wafer, so you're saving money. You get more CPUs per wafer, which means you can sell more.
Also, the more transistors you have on a larger die, the more expensive it is to make.
One of my theories is that Intel buys out the fabs where they build units first. Let's say TSMC just built a 22 nm fab... my guess is Intel puts the money out first to get access to that fab before the graphics card companies can get their hands on it... or outbids them.
EDIT: Reread what you said, HAL, and it would make more sense to shrink the die because you would get more chips per wafer, which would save you money. -
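Just to illustrate that line of reasoning with idealized numbers: if every feature shrank perfectly, die area would scale with the square of the node ratio. Real shrinks never achieve this (and new nodes start with poor yields), which is part of the point made in the next post. The die area below is an illustrative placeholder.

```python
# Idealized node-shrink scaling: area ~ (new_node / old_node)^2 for the same
# transistor count. Real processes don't shrink every structure this well.
old_node_nm, new_node_nm = 32, 22
old_die_area_mm2 = 216                           # illustrative quad-core die at 32 nm

ideal_scale = (new_node_nm / old_node_nm) ** 2   # ~0.47x area
new_die_area_mm2 = old_die_area_mm2 * ideal_scale

print(f"Ideal area scale factor: {ideal_scale:.2f}x")
print(f"{old_die_area_mm2} mm^2 at 32 nm -> ~{new_die_area_mm2:.0f} mm^2 at 22 nm "
      f"(same transistor count), i.e. roughly twice as many dies per wafer")
```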
H.A.L. 9000 Occam's Chainsaw
Where you save money with die shrinks, you lose it in R&D. Serious money and talent have to go into the designs of smaller dies. It's not just a shrinkage of the transistors. Plus yield issues... every time you step down a die size, you have to remaster the process. In the beginning it's very expensive. -
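A toy sketch of that trade-off, with entirely made-up dollar figures: the per-die silicon cost drops after a shrink, but the up-front R&D / process bring-up cost has to be amortized over however many chips you actually ship.

```python
# Toy amortization model; every number here is a placeholder, not a real figure.
def cost_per_chip(wafer_cost, good_dies_per_wafer, upfront_rnd, volume):
    per_die_silicon = wafer_cost / good_dies_per_wafer
    per_die_rnd = upfront_rnd / volume      # spread the one-time R&D over total units shipped
    return per_die_silicon + per_die_rnd

mature      = cost_per_chip(5_000, 150, upfront_rnd=0,   volume=1)          # old node, R&D long paid off
new_low_vol = cost_per_chip(5_000, 250, upfront_rnd=1e9, volume=5_000_000)
new_hi_vol  = cost_per_chip(5_000, 250, upfront_rnd=1e9, volume=50_000_000)

print(f"mature node:           ~${mature:,.0f}/chip")
print(f"new node, low volume:  ~${new_low_vol:,.0f}/chip")
print(f"new node, high volume: ~${new_hi_vol:,.0f}/chip")
```

At low volume the "cheaper" new node is actually the more expensive one per chip, which is why only the highest-volume players can afford to lead a shrink.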
It's because Intel's fabs are well ahead of the game. If Intel made dedicated GPUs, they would probably be on 22 nm right about now. But since AMD and Nvidia have to rely on TSMC, the best they have right now is 28 nm.
-
Yes, Intel is ahead of TSMC/SMIC/UMC/Samsung/etc in process technology, and since AMD and Nvidia are both fabless, they rely on TSMC's process.
A "die shrink" isn't as simple as many seem to think it is. With every shrink, there's tons and tons of testing and yield optimization to be done. While theoretically, a chip produced using a 22 nm process would require less raw material than one produced using a 28 nm process, the real cost isn't there: it's in the yield, as H.A.L. 9000 did a great job of explaining. When you move to a new process, the functioning/dead chip ratio is low.
My dad would be much more adept at explaining these difficulties and the disparity between Intel and TSMC, since process technology is the main basis of his job.
Haha, no, that's not how it works at all. Intel builds its own fabs at tremendous cost (which is why they're basically the only ones who can afford to do so), with brand new technology, for every new process. It can't just go out and buy a fab that's for sale.
TSMC can afford to do the same because they have volume: many, many fabless chip companies go to TSMC with designs that they want TSMC to follow to produce a chip. TSMC meanwhile uses its capital to develop and build a new fab on a new process.
No, see above. The cost is not only raw materials; you "pay" for every dead chip as well. -
Even back when AMD had its own fabs, it still lagged and needed some help from the mighty IBM (said to be the only one who might be on par with Intel).
There was that famous quote from Sanders (AMD's founder): "real men have fabs." Going fabless made them no longer a threat. -
H.A.L. 9000 Occam's Chainsaw
IBM is partnered with TSMC and Samsung. Matter of fact, they also do business with GloFo, which brought a new fab online in January in upstate New York. GloFo's fab is 28 nm. IBM is still using 45 nm at East Fishkill, I believe.
EDIT: IBM only has 2 fabs. TSMC has 14. -
I was talking about the time when AMD needed IBM's SOI to compete with Intel (when TSMC/UMC weren't even a concern between them). I haven't followed this field for over a decade; hasn't IBM nowadays basically left hardware (or even software)?
GloFo, as we know, is the spin-off of AMD's fab business. -
How stupid ppl like me think on this topic:
You must have 1 cpu per computer.
You don't have to have a dGPU on a computer.
Simple
-
You have a point. Basically, only Intel has the cash flow (volume) to afford the ever-increasing fab cost. One of the reasons HP partnered with them for the Itanic was fab cost.
-
IBM has found that it's repeatedly missed the boat when it came to profiting off hardware (Thinkpads come to mind--IBM never turned a significant profit off the division, while Lenovo's raking in the cash; another example is their HDD business, sold to Hitachi), so it's increasingly focused on services, which are highly profitable. It still invests heavily in R&D, though, adding to its formidable intellectual wealth.
-
H.A.L. 9000 Occam's Chainsaw
That, and Intel ran them out of the hardware business... mostly.
They can only compete with Intel on the server front because their architecture is superior in a server/cluster environment. Blades with Power7 are what real business is done on. IBM couldn't and still can't keep up with Intel's fabs though. It was the whole reason Apple left them, because they couldn't hit the deadlines needed to get TDP under control for the speeds required.
At the time, though, IBM's CPUs showed up Intel's offerings in many different ways, mainly vector performance. AltiVec was several times more powerful than what Intel offered. It's just that the silicon at the time ran as hot as the surface of the sun.