The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Question...why do GPUs lag behind CPUs in die size?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by HopelesslyFaithful, Apr 22, 2012.

  1. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0
    Question...why do GPUs lag behind CPUs in die size? They seem to milk the crap out of a die size.
     
  2. wild05kid05

    wild05kid05 Cook Free or Die

    Reputations:
    410
    Messages:
    1,183
    Likes Received:
    0
    Trophy Points:
    55
    that's a good question
     
  3. H.A.L. 9000

    H.A.L. 9000 Occam's Chainsaw

    Reputations:
    6,415
    Messages:
    5,296
    Likes Received:
    552
    Trophy Points:
    281
    Well, Intel has fooled everyone into thinking that moving down die sizes is easy. It's not. It's very difficult and very expensive.

    Difficult and expensive aren't things investors want to hear. So when they actually DO move down a die size, they make sure to milk it for all it's worth.

    Plus you have to remember, there are more transistors on a high-end GPU than there are on a current high-end CPU. 3.5 billion for the GTX 680 vs 995 million for the 2960XM. More transistors = larger physical die = more $$$. Also, since the dies themselves are so large, when you have bad yields, that becomes VERY costly VERY quickly.

    EDIT: I compared a desktop GPU to a mobile CPU. So to make it more fair, GTX 680 = 3.5 billion transistors vs 2.7 billion for the i7-3960X. Those are the cream of the current crop.
     
  4. R3d

    R3d Notebook Virtuoso

    Reputations:
    1,515
    Messages:
    2,382
    Likes Received:
    60
    Trophy Points:
    66
    Like always, it boils down to money. A larger die = fewer chips per silicon wafer = higher cost per chip.
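    [Editor's note: purely as a back-of-the-envelope illustration of the point above, here is a common first-order estimate of how many whole dies fit on a round wafer. The die areas are made-up "CPU-class" and "GPU-class" numbers, not real chip specs.]

    ```python
    import math

    def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
        # First-order estimate: wafer area / die area, minus a
        # correction for partial dies lost around the wafer edge.
        r = wafer_diameter_mm / 2
        gross = math.pi * r**2 / die_area_mm2
        edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        return int(gross - edge_loss)

    # Illustrative die areas on a standard 300 mm wafer:
    print(dies_per_wafer(300, 150))  # ~150 mm^2 "CPU-class" die -> 416
    print(dies_per_wafer(300, 300))  # ~300 mm^2 "GPU-class" die -> 197
    ```

    Doubling the die area more than halves the candidate dies per wafer, because edge losses hit large dies harder, and that's before yield is even considered.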
     
  5. HopelesslyFaithful

    HopelesslyFaithful Notebook Virtuoso

    Reputations:
    1,552
    Messages:
    3,271
    Likes Received:
    164
    Trophy Points:
    0

    Um, from what I remember it's cheaper to make a CPU on a smaller node... the smaller the die, the more you can fit on a single wafer. So a 22nm chip with 1 billion transistors is cheaper than a 32nm chip with 1 billion transistors, because you can fit more CPUs on a single wafer, so you're saving money. You are getting more CPUs per wafer, which means you can sell more.

    Also, the more transistors you have on a larger die, the more expensive it is to make.

    One of my theories is that Intel buys out the fabs where they build units first. Let's say TSMC just built a 22nm fab... my guess is Intel puts the money out first to get access to that fab before the graphics card companies can get their hands on it... or outbids them.


    EDIT: Reread what you said, HAL, and it would make more sense to shrink the die because you would have a higher yield per wafer, which would save you money
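    [Editor's note: a quick sketch of the scaling behind this post's 22nm-vs-32nm example. This assumes ideal scaling, where die area shrinks with the square of the feature-size ratio; real designs shrink less than this.]

    ```python
    # Ideal (optimistic) scaling: area goes with the square of the node ratio.
    old_node, new_node = 32.0, 22.0              # nm, as in the post above
    area_ratio = (new_node / old_node) ** 2      # ~0.47
    print(f"same design needs ~{area_ratio:.0%} of its old area")
    print(f"so roughly {1 / area_ratio:.1f}x more candidate dies per wafer")
    ```

    So in the ideal case a full node shrink roughly doubles the chips you can cut from one wafer, which is exactly the cost saving the post is describing.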
     
  6. H.A.L. 9000

    H.A.L. 9000 Occam's Chainsaw

    Reputations:
    6,415
    Messages:
    5,296
    Likes Received:
    552
    Trophy Points:
    281
    Where you save money with die shrinks, you lose it in R&D. Serious money and talent have to go into the designs of smaller dies. It's not just a shrinkage of the transistors. Plus yield issues... every time you step down a die size, you have to remaster the process. In the beginning it's very expensive.
     
  7. GTRagnarok

    GTRagnarok Notebook Evangelist

    Reputations:
    556
    Messages:
    542
    Likes Received:
    45
    Trophy Points:
    41
    It's because Intel's fabs are well ahead of the game. If they made dedicated GPUs, they would probably be on 22nm right about now. But since AMD and Nvidia have to rely on TSMC, the best they have right now is 28nm.
     
  8. MidnightSun

    MidnightSun Emodicon

    Reputations:
    6,668
    Messages:
    8,224
    Likes Received:
    231
    Trophy Points:
    231
    Yes, Intel is ahead of TSMC/SMIC/UMC/Samsung/etc in process technology, and since AMD and Nvidia are both fabless, they rely on TSMC's process.

    A "die shrink" isn't as simple as many seem to think it is. With every shrink, there's tons and tons of testing and yield optimization to be done. While theoretically, a chip produced using a 22 nm process would require less raw material than one produced using a 28 nm process, the real cost isn't there: it's in the yield, as H.A.L. 9000 did a great job of explaining. When you move to a new process, the functioning/dead chip ratio is low.
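    [Editor's note: the yield point can be made concrete with the textbook first-order Poisson yield model, Y = exp(-D*A), where D is defect density and A is die area. The wafer costs, defect densities, and die counts below are invented for illustration only.]

    ```python
    import math

    def poisson_yield(defects_per_cm2, die_area_mm2):
        # Textbook first-order yield model: Y = exp(-D * A).
        return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

    def cost_per_good_die(wafer_cost, gross_dies, yield_fraction):
        return wafer_cost / (gross_dies * yield_fraction)

    area = 300  # mm^2, a large GPU-class die (illustrative)
    # Mature process: low defect density. Brand-new process: high defect density.
    mature = cost_per_good_die(5000, 197, poisson_yield(0.2, area))
    young = cost_per_good_die(6000, 197, poisson_yield(1.0, area))
    print(round(mature), round(young))  # 46 vs 612 per good die
    ```

    Same gross die count, but on the immature process most large dies catch at least one defect, so each good die costs over ten times as much. This is why early nodes tend to be ramped on smaller chips first.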

    My dad would be much more adept at explaining these difficulties and the disparity between Intel and TSMC, since process technology is the main basis of his job.

    Haha, no, that's not how it works at all. Intel builds its own fabs at tremendous cost (which is why they're basically the only ones who can afford to do so), with brand new technology, for every new process. It can't just go out and buy a fab that's for sale.

    TSMC can afford to do the same because they have volume: many, many fabless chip companies go to TSMC with designs that they want TSMC to follow to produce a chip. TSMC meanwhile uses its capital to develop and build a new fab on a new process.

    No, see above. The cost is not only raw materials; you "pay" for every dead chip as well.
     
  9. chimpanzee

    chimpanzee Notebook Virtuoso

    Reputations:
    683
    Messages:
    2,561
    Likes Received:
    0
    Trophy Points:
    55
    Even back when AMD had its own fabs, it still lagged and needed some help from the mighty IBM (said to be the only one that might be on par with Intel) :)

    There was a famous quote from Sanders (AMD's founder): "real men have fabs". Going fabless made them no longer a threat.
     
  10. H.A.L. 9000

    H.A.L. 9000 Occam's Chainsaw

    Reputations:
    6,415
    Messages:
    5,296
    Likes Received:
    552
    Trophy Points:
    281
    IBM is partnered with TSMC and Samsung. Matter of fact, they also do business with GloFo, which brought a new fab online in January in upstate New York. GloFo's fab is 28nm. IBM is still using 45nm at East Fishkill, I believe.

    EDIT: IBM only has 2 fabs. TSMC has 14.
     
  11. chimpanzee

    chimpanzee Notebook Virtuoso

    Reputations:
    683
    Messages:
    2,561
    Likes Received:
    0
    Trophy Points:
    55
    I was talking about the time when AMD needed IBM's SOI to compete with Intel (when TSMC/UMC wasn't even a concern between them). I haven't followed this field for over a decade; hasn't IBM nowadays basically left hardware (or even software)?

    GloFo, as we know, is the spin-off of AMD's fab business.
     
  12. baii

    baii Sone

    Reputations:
    1,420
    Messages:
    3,925
    Likes Received:
    201
    Trophy Points:
    131
    How stupid ppl like me think on this topic:

    You must have 1 cpu per computer.
    You don't have to have a dGPU on a computer.

    Simple :)
     
  13. chimpanzee

    chimpanzee Notebook Virtuoso

    Reputations:
    683
    Messages:
    2,561
    Likes Received:
    0
    Trophy Points:
    55
    You have a point. Basically, only Intel has the cash flow (volume) to afford the ever-increasing fab cost. One of the reasons HP partnered with them for the Itanic was fab cost.
     
  14. MidnightSun

    MidnightSun Emodicon

    Reputations:
    6,668
    Messages:
    8,224
    Likes Received:
    231
    Trophy Points:
    231
    IBM has found that it's repeatedly missed the boat when it came to profiting off hardware (Thinkpads come to mind--IBM never turned a significant profit off the division, while Lenovo's raking in the cash; another example is their HDD business, sold to Hitachi), so it's increasingly focused on services, which are highly profitable. It still invests heavily in R&D, though, adding to its formidable intellectual wealth.
     
  15. H.A.L. 9000

    H.A.L. 9000 Occam's Chainsaw

    Reputations:
    6,415
    Messages:
    5,296
    Likes Received:
    552
    Trophy Points:
    281
    That, and Intel ran them out of the hardware business... mostly.

    They can only compete with Intel on the server front because their architecture is superior in a server/cluster environment. Blades with Power7 are what real business is done on. IBM couldn't and still can't keep up with Intel's fabs though. It was the whole reason Apple left them, because they couldn't hit the deadlines needed to get TDP under control for the speeds required.

    At the time, though, IBM's CPUs showed up Intel's offerings in many different ways. Mainly vector performance. AltiVec was several times more powerful than what Intel offered. It's just that the silicon at the time ran as hot as the surface of the sun.