Question...why do GPUs lag behind CPUs in die size? They seem to milk the crap out of a die size.
-
HopelesslyFaithful Notebook Virtuoso
-
that's a good question
-
H.A.L. 9000 Occam's Chainsaw
Well, Intel has fooled everyone into thinking that moving down die sizes is easy. It's not. It's very difficult and very expensive.
Difficult and expensive aren't things investors want to hear. So when they actually DO move down a die size, they make sure to milk it for all it's worth.
Plus you have to remember, there are more transistors on a high-end GPU than on a current high-end CPU: 3.5 billion for the GTX 680 vs. 995 million for the i7-2960XM. More transistors = larger physical die = more $$$. Also, since the dies themselves are so large, bad yields become VERY costly VERY quickly.
EDIT: I compared a desktop GPU to a mobile CPU. To make it fairer: GTX 680 = 3.5 billion transistors vs. 2.27 billion for the i7-3960X. Those are the cream of the current crop. -
Like always, it boils down to money. Larger die = fewer dies per silicon wafer = higher cost per chip.
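That cost argument is easy to sketch with a quick back-of-the-envelope calculation. This is only a rough illustration: the edge-loss correction is a standard approximation, and the die areas (GK104 ~294 mm², a quad-core Sandy Bridge ~216 mm²) are approximate published figures, not anything exact.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough dies-per-wafer estimate: wafer area / die area,
    minus a standard correction term for dies lost at the wafer edge."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius * radius
    edge_loss = (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# Approximate die areas: GTX 680 (GK104) ~294 mm^2, quad-core CPU ~216 mm^2
for name, area in [("GPU ~294 mm^2", 294), ("CPU ~216 mm^2", 216)]:
    print(name, "->", dies_per_wafer(area), "candidate dies per 300 mm wafer")
```

So even before yield enters the picture, the bigger die simply produces fewer sellable chips per wafer.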
-
HopelesslyFaithful Notebook Virtuoso
Um, from what I remember it's cheaper to make a CPU on a smaller node: the smaller the die, the more you can fit on a single wafer. So a 22nm chip with 1 billion transistors is cheaper than a 32nm chip with 1 billion transistors, because you can fit more CPUs on a single wafer, so you're saving money. You're getting more CPUs per wafer, which means you can sell more.
Also, the more transistors you have on a larger die, the more expensive it is to make.
One of my theories is that Intel buys out the fabs where they build units first. Let's say TSMC just built a 22nm fab... my guess is Intel puts the money out first to get access to that fab before the graphics card companies can get their hands on it... or outbids them.
EDIT: reread what you said, HAL, and it would make more sense to shrink the die because you would get a higher yield per wafer, which would save you money. -
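The yield intuition can be sketched with the classic Poisson defect model: the bigger the die, the more likely a random defect lands on it and kills it. A minimal sketch; the 0.5 defects/cm² figure is an illustrative assumption for a young process, not real TSMC data.

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Classic Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D0 = 0.5  # assumed defects per cm^2 (illustrative, not measured data)
for area_cm2 in (1.0, 3.0):  # small CPU-sized die vs. large GPU-sized die
    print(f"{area_cm2} cm^2 die -> yield {poisson_yield(area_cm2, D0):.0%}")
```

With the same defect density, the 3 cm² die yields roughly a third as well as the 1 cm² die, which is why huge GPU dies on a fresh process are so painful.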
H.A.L. 9000 Occam's Chainsaw
-
It's because Intel's fabs are well ahead of the game. If Intel made dedicated GPUs, they would probably be on 22nm right about now. But since AMD and Nvidia have to rely on TSMC, the best they have right now is 28nm.
-
Yes, Intel is ahead of TSMC/SMIC/UMC/Samsung/etc in process technology, and since AMD and Nvidia are both fabless, they rely on TSMC's process.
My dad would be much more adept at explaining these difficulties and the disparity between Intel and TSMC, since process technology is the main basis of his job.
TSMC can afford to keep investing because they have volume: many, many fabless chip companies bring TSMC designs they want manufactured. TSMC meanwhile uses that capital to develop and build new fabs on new processes.
-
There's a famous quote from Jerry Sanders (AMD's founder): "Real men have fabs." Going fabless made them no longer a threat. -
H.A.L. 9000 Occam's Chainsaw
EDIT: IBM only has 2 fabs. TSMC has 14. -
GloFo, as we know, is the spin-off of AMD's fab business. -
How stupid ppl like me think on this topic:
You must have 1 cpu per computer.
You don't have to have a dGPU in a computer.
Simple -
-
-
H.A.L. 9000 Occam's Chainsaw
They can only compete with Intel on the server front because their architecture is superior in a server/cluster environment. Blades with POWER7 are where real business gets done. IBM couldn't, and still can't, keep up with Intel's fabs though. It was the whole reason Apple left them: IBM couldn't hit the deadlines needed to get TDP under control at the speeds required.
At the time, though, IBM's CPUs showed up Intel's offerings in many different ways, mainly vector performance: AltiVec was several times more powerful than what Intel offered. It's just that the silicon at the time ran as hot as the surface of the sun.
Discussion in 'Hardware Components and Aftermarket Upgrades' started by HopelesslyFaithful, Apr 22, 2012.