The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    why 65nm CPU but still 90nm GPU?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by hmmmmm, Sep 10, 2006.

  1. hmmmmm

    hmmmmm Notebook Deity

    Reputations:
    633
    Messages:
    1,203
    Likes Received:
    0
    Trophy Points:
    55
    can anyone tell me why GPU transistors are still made on a 90nm process and are only just about to move to 80nm, when it's already possible to manufacture 65nm CPUs?

    i mean, what's the difference between the physical manufacturing process of CPUs and GPUs that causes GPU transistor size to lag behind?

    wouldn't a 65nm GPU be awesome? less power consumption and more performance

    also, can someone explain to me why it's impossible to use the second core of a dual-core CPU to help out the GPU?

    thx
     
  2. qwester

    qwester Notebook Virtuoso

    Reputations:
    366
    Messages:
    2,755
    Likes Received:
    0
    Trophy Points:
    55
    A few years back, GPUs used to be in the lead when it came to manufacturing process. I guess the main reason that CPUs are now in the lead is the larger number of transistors on a CPU core compared to those on a GPU. Or maybe CPU manufacturing is just more optimized; I think GPU yields are lower than CPU yields even on the current, different manufacturing processes. (Yield refers to how many 'functional' cores are obtained per certain area of silicon wafer.)
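    To put rough numbers on the yield point, here is a minimal sketch using the simple Poisson defect model; the defect density and die areas are made-up, illustrative figures, not real foundry data:

        import math

        def poisson_yield(defects_per_cm2, die_area_cm2):
            # Probability that a die has zero "killer" defects.
            return math.exp(-defects_per_cm2 * die_area_cm2)

        defects = 0.5        # defects per cm^2 (assumed)
        small_cpu_die = 1.0  # die area in cm^2 (assumed)
        large_gpu_die = 3.0  # die area in cm^2 (assumed)

        print(poisson_yield(defects, small_cpu_die))  # ~0.61 -> ~61% good dies
        print(poisson_yield(defects, large_gpu_die))  # ~0.22 -> ~22% good dies

    The point is just that, at a given defect density, a bigger or more complex die loses functional chips much faster than a small one.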

    As for why the 2nd core can't be used to help the graphics: if a CPU and GPU were to collaborate on a task, the data traffic would go through the bus, which is extremely slow when you consider how fast graphics computations are, PLUS this bus is also used for other data transfers (1st core and RAM/chipset). Add to that that CPU instruction sets are not optimized for graphics computations (unlike GPUs), and it would be like using software rendering, which can never approach dedicated 3D rendering.
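    A back-of-the-envelope comparison of the bandwidths involved; the figures are rough, era-typical numbers assumed for illustration, not measurements:

        # Assumed, roughly 2006-era figures (illustrative only).
        gpu_local_memory_bw_gb_s = 40.0  # GDDR3 on a higher-end card
        cpu_gpu_bus_bw_gb_s = 4.0        # e.g. PCIe 1.x x16, one direction

        # Even before the bus is shared with other traffic (CPU <-> RAM/chipset),
        # a CPU helping out over the bus could feed the GPU only a small
        # fraction of what the card's own memory delivers:
        print(cpu_gpu_bus_bw_gb_s / gpu_local_memory_bw_gb_s)  # 0.1 -> ~10%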
     
  3. hmmmmm

    hmmmmm Notebook Deity

    Reputations:
    633
    Messages:
    1,203
    Likes Received:
    0
    Trophy Points:
    55
    thanks qwester

    your explanation of why a CPU can't be used as a GPU is very clear

    though i'd like a better answer for why GPUs can't be made with the 65nm process

    the CPUs have more transistors because they can fit more into the same area, since the 65nm process makes the transistors smaller.

    so why can't they make GPU transistors smaller too?

    imagine having the performance leap of P4 to C2D on a GPU

    man that would be so sweet for notebook users
     
  4. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    GPUs have never been in the lead with manufacturing process.
    There are several reasons.
    - One is the incredibly fast upgrade cycle GPUs use (they basically have to come up with an entirely new architecture every year). That means they don't have time for the same amount of fine-tuning and tweaking. CPUs are more or less built from the ground up, whereas GPUs are designed using a lot of ready-made "blocks" for different parts of the chip. That saves a lot of time, but it's a bit less efficient, and it only works if these blocks are available for the process size you use.
    - They tend to use more transistors (the C2 Duo hits around 300M transistors; GPUs broke that barrier a year or two ago), which means a bigger, more complicated die, which means lower yields. If they were to attempt that with the latest and greatest, like 65nm, which is already difficult to get good yields on, they'd be in trouble.
    - Finally, ATI and NVidia don't manufacture the chips themselves; they outsource that to third-party foundries, which means that 1) they have to design chips that suit the foundries' processes, which usually lag a bit behind AMD/Intel, and 2) it's just harder for them to go back to the drawing board, make a few tweaks, try manufacturing the new design and so on. They really need to get it working in one of the first few tries.

    Because of the above reasons, switching to newer processes earlier would force them to 1) spend a few extra months debugging and tweaking to get the thing running (that's acceptable on a CPU that's going to be on the market for the next 4 years, but less so for a GPU that's going to be obsolete 6 months from now), 2) accept lower yields (which would partially be offset by the smaller chips allowing them to produce more per wafer), and 3) find somewhere to get them manufactured. The foundries they usually use only offer 65nm for small, simple chips atm, afaik. And I doubt Intel has the capacity to spare; I doubt they'd want to do the GPU manufacturers' work either. IBM or Sony might be able to do it, but they're busy with the Cell and all sorts of other chips. And of course, wherever they go, the foundry is going to ask a higher price for producing 65nm chips than for 90nm. ;)

    That's a bad comparison for two reasons. First, there are 65nm P4s too, and the performance jump between 90nm and 65nm P4s isn't big.
    Second, the performance jump from P4 to C2D was at most 30%, while GPUs typically get close to 80-100% more performance every year (see the quick sketch below).
    So no, getting the performance leap of P4 to C2D on a GPU would suck... ;)
    And in any case, switching to 65nm wouldn't give you that performance leap.
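    Compounding the rough numbers above makes the gap clearer (illustrative arithmetic only, not benchmarks):

        cpu_one_time_jump = 1.30   # P4 -> C2D, ~30% as stated above
        gpu_yearly_gain = 1.90     # "80-100% per year", taking ~90%
        years = 3

        print(cpu_one_time_jump)          # 1.3x, a one-off improvement
        print(gpu_yearly_gain ** years)   # ~6.9x over three GPU generations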
     
  5. hhjlhkjvch

    hhjlhkjvch Notebook Guru

    Reputations:
    12
    Messages:
    71
    Likes Received:
    0
    Trophy Points:
    15
    They can't use 65nm for GPUs simply because the chip manufacturers haven't adopted that technology yet. Intel has, but Intel's got a huge amount of money to spend on it.

    GPUs are at 80nm at the moment; AMD's moving to 65nm sometime soon. There is speculation that some of ATI's top-end products may be produced using AMD's 65nm facilities (since they're the same company now anyway). As others have said, there are also issues with the number of transistors. While Conroe may have 300,000,000 transistors, most of those are for cache (which is very easy to design and has very good yields). GPUs have virtually no cache, so the actual complexity is much higher. Combining a very complex GPU with a new, low-yielding manufacturing process would result in almost no working GPUs.

    CPUs can't be used as GPUs largely because they're not specialised enough. They can do the work, but it'll take too long. Rendering a single frame of a modern game with a Core 2 Duo would take a few seconds at least - so you'd be getting a fraction of a frame per second drawn to the screen. The CPU could help a GPU, but it simply couldn't help enough to make it worthwhile.
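    Turning "a few seconds per frame" into frame rates, with assumed numbers just to show the scale of the gap:

        target_fps = 30
        frame_budget_s = 1.0 / target_fps   # ~0.033 s available per frame
        cpu_render_time_s = 2.0             # assumed software-render time per frame

        print(1.0 / cpu_render_time_s)              # 0.5 frames per second
        print(cpu_render_time_s / frame_budget_s)   # ~60x too slow for 30 fps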

    As above, the performance gain from P4s to Core 2 Duos is nothing compared to what GPUs manage. P4s have been available for six years now - we've been using that basic technology for six years continually. Now think about where GPUs have gone in that time. In 2000 (when the P4 was released), the Geforce 2 and ATI Radeon were top-end video cards. We've seen the addition of DX8 support and DX9 support with hugely complex shaders (while CPUs have managed to add HT, SSE, SSE2, and SSE3); GPUs no longer have four pipelines - they've got 24 (or 48, if you count each ALU on an X1900 as a pipeline). Each of those pipelines is far more powerful and runs at a far higher clock speed than a pipeline in a GF2 or Radeon (in fact, each of those pipelines is almost certainly far more powerful than the entire GF2 or Radeon GPU).
     
  6. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    Oh right, a little history lesson to add. Part of the reason NVidia got burned with the Geforce FX series was that they tried to move to a new process before ATI. (was it 130nm? 110? Something like that, anyway)

    And their yields went down the drain. They were unable to manufacture large quantities of the chips to begin with, causing big delays, and the chips they did make didn't perform as well as they'd hoped. And of course, they were more expensive to make too, because of all this.

    ATI played it safe, using an older process, and could easily churn out tons of cheap, powerful chips, on time.

    Intel had a bit of the same when they moved to 90nm with the Prescott P4.
    Power consumption went up, and performance pretty much stood still.

    Going to a smaller process isn't some miracle cure that magically improves performance. It's a tradeoff: at some point, the process is mature enough to be profitable. But when is that? It depends on whether you're Intel or ATI, and whether you're designing chips with a lifecycle of 6 months or 4 years. :)
     
  7. hmmmmm

    hmmmmm Notebook Deity

    Reputations:
    633
    Messages:
    1,203
    Likes Received:
    0
    Trophy Points:
    55
    doh

    thanks a lot

    wow, gpu designers have it tough

    but still, if they were to create a 65nm GPU, at least it would lower power consumption, which would be good news for laptop owners. one of the reasons i didn't go for the X1600 is that it drains the battery a lot faster than the X1400, even though the X1400's performance is only 40% that of the X1600.

    to some people, battery life is important

    EDIT: Thanks jalf and SLATYE!!!
     
  8. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    Remember the Prescott, as I said above. Sure, it used 90nm where its predecessors were 130nm, and yet it used *more* power. ;)

    The Geforce FX suffered from the same problem. Sure, they used a smaller process than the Radeon 9800 series, but they still ran a lot hotter.

    In general, a smaller process allows *potentially* lower power consumption, but not always, and only when the process is somewhat mature. Early samples are likely to require higher voltage, meaning more heat, just to make things work.
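    A quick sketch of why that happens: dynamic switching power scales roughly with capacitance x voltage^2 x frequency, so a modest voltage bump on early silicon can eat most of the saving from the shrink (the scaling factors are assumptions for illustration):

        cap_scale = 0.70       # assumed capacitance reduction from the die shrink
        voltage_scale = 1.15   # assumed extra voltage needed on early silicon
        freq_scale = 1.00      # same clock speed

        # Dynamic power scales roughly with C * V^2 * f.
        print(cap_scale * voltage_scale**2 * freq_scale)  # ~0.93 -> barely any saving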
     
  9. Charles P. Jefferies

    Charles P. Jefferies Lead Moderator Super Moderator

    Reputations:
    22,339
    Messages:
    36,639
    Likes Received:
    5,080
    Trophy Points:
    931
    I don't remember that Intel was ever able to explain why that happened.
     
  10. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    Two main reasons, as far as I know:

    First, they took a bad CPU architecture and made it worse. Northwood shared the same basic flaws as Prescott, but it was a much more moderate design, meaning that the penalty for those flaws was more moderate as well.
    The Prescott architecture is just much more power hungry than Northwood, regardless of process size. They extended the pipeline from an already too long ~22 stages to an insane 31. By comparison, Core 2 Duo and Athlon 64 both have 15-18.
    Longer pipelines basically allow higher clock speeds, but they introduce new inefficiencies (regardless of the clock speed) and increase power consumption. In the case of Prescott, the increase in power consumption prevented them from reaching the high clock speeds they'd expected, so they were just stuck with the downsides of the long pipeline: inefficient execution and lots of heat output.
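    A crude model of that efficiency cost: a branch misprediction forces the pipeline to be flushed and refilled, so the average penalty grows with pipeline depth (the branch frequency and misprediction rate below are assumptions, not Intel's figures):

        branch_freq = 0.20       # assume ~1 in 5 instructions is a branch
        mispredict_rate = 0.05   # assume ~5% of branches are mispredicted

        def avg_cycles_per_instruction(base_cpi, pipeline_depth):
            # A misprediction costs roughly a pipeline's worth of cycles to refill.
            return base_cpi + branch_freq * mispredict_rate * pipeline_depth

        print(avg_cycles_per_instruction(1.0, 22))  # ~1.22 CPI on a ~22-stage design
        print(avg_cycles_per_instruction(1.0, 31))  # ~1.31 CPI on a 31-stage design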

    And second, a general problem at smaller processes is that you get more leakage. With today's itty bitty transistors, electricity just leaks out everywhere, basically, and turns into heat. And to counter this, you have to apply more voltage than you would otherwise, which *also* means more heat. (Of course, the basic advantage of smaller processes still holds: smaller transistors mean your signal has to travel shorter distances, allowing more speed and requiring less power. Until recently, that was really the "main" effect of smaller processes, the only one people knew and cared about, but in the last couple of years, leakage has turned out to increase dramatically at the smallest process sizes. If Intel had been aware of this in the late '90s, they'd probably never have made the NetBurst architecture. But back then, leakage looked like some tiny microscopic effect that no one would ever care about.)

    Of course, both Intel and AMD have techniques for minimizing this leakage, but in the case of Prescott, the 90nm process was all new and untested, and most likely they just hadn't been aware it would be such a big problem. (At least not until it was too late.)
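    A toy breakdown of the leakage point, splitting chip power into dynamic (switching) and static (leakage) parts; every number is invented to illustrate the trend, not taken from any datasheet:

        def leakage_share(dynamic_w, leakage_w):
            # Fraction of total chip power lost to leakage.
            return leakage_w / (dynamic_w + leakage_w)

        # Older, larger process: leakage is a small slice of the total.
        print(leakage_share(dynamic_w=80.0, leakage_w=5.0))   # ~0.06 -> ~6%

        # Smaller process: switching gets cheaper, but leakage grows sharply
        # and claws back much of the saving.
        print(leakage_share(dynamic_w=60.0, leakage_w=30.0))  # ~0.33 -> ~33%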

    Edit: Yes, I'm probably rambling here... :p
     
  11. hhjlhkjvch

    hhjlhkjvch Notebook Guru

    Reputations:
    12
    Messages:
    71
    Likes Received:
    0
    Trophy Points:
    15
    ATI was on 150nm; Nvidia thought they could do 130nm.

    Did yields really go down the drain, or was it just that the Geforce FX was never designed to be as fast as the R300? From what I could see, it looked like the FX was designed to be just a tiny bit faster than the GF4 and with DX9 - but then to compete with R300, Nvidia had to overclock it a long way (which then resulted in production problems and the huge heat issues they encountered).

    EDIT: It's also worth noting what happened when ATI went to 130nm. They waited a while (using the 150nm R300 GPUs for their entire range) and then released the 130nm RV350 models (9600 series). Unlike Nvidia's cards, ATI's worked perfectly because they were a bit more patient.
     
  12. Jalf

    Jalf Comrade Santa

    Reputations:
    2,883
    Messages:
    3,468
    Likes Received:
    0
    Trophy Points:
    105
    A mix, probably. The FX was awfully complex for its time, more so than the R300 series, and had, on paper at least, more advanced shader support, and all of this seems to have come at the cost of raw performance. (So while it theoretically could do a few things the R300 couldn't, it was too slow to really take advantage of it.)
    But remember the FX series was delayed something like 6 months to begin with, and even the original FX cards ran awfully hot, even before they started overclocking with the later revised models. They did have lots of problems with yields, at least for the first few months.
    And of course, once they got the initial problems sorted out, the need to overclock to compete with ATI didn't improve matters, like you said... ;)
     
  13. ChangFest

    ChangFest Notebook Consultant

    Reputations:
    19
    Messages:
    259
    Likes Received:
    0
    Trophy Points:
    30
    Very interesting and informative thread. Thanks for posting the information.
     
  14. particleman

    particleman Notebook Enthusiast

    Reputations:
    0
    Messages:
    27
    Likes Received:
    0
    Trophy Points:
    5
    The other reason why CPUs are at 65nm and GPUs are still at 90nm is simply that Intel has the best fabs in the world. Intel puts more money than any other company into chip fabrication R&D. AMD and every other chip producer in the world is always a bit behind. TSMC, which produces chips for ATi and nVidia, is also not on par with Intel in terms of manufacturing. Chip foundries are a very expensive proposition, which is why it's no surprise that it's a giant company like Intel that is always at the forefront.

    I remember reading the reason why half nodes (i.e. 80nm and 110nm processes) aren't used on CPUs, but I can't remember it.
     
  15. Pitabred

    Pitabred Linux geek con rat flail!

    Reputations:
    3,300
    Messages:
    7,115
    Likes Received:
    3
    Trophy Points:
    206
    And AMD still keeps up in performance for the most part, even at 90nm ;) That says a lot about the chip engineers at AMD.