The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static, read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Memory on Nvidia Chips

    Discussion in 'Gaming (Software and Graphics Cards)' started by Dragauss, Mar 9, 2011.

  1. Dragauss

    Dragauss Notebook Geek

    Reputations:
    0
    Messages:
    80
    Likes Received:
    1
    Trophy Points:
    15
    As far as I know, GPU memory is used for handling increasing resolutions. However, it looks like the newer GPUs are using DDR3 memory rather than GDDR3/GDDR5, at least for the mid/low-range chips. Also, the bus is narrower, mostly 128-bit to 192-bit.

    Can anyone shed some light on the performance impact of, say, hooking up to a 1080p monitor? I would love it if they used GDDR5, but DDR3 seems gimped, especially on a 128-bit bus.
     
  2. oldstyle

    oldstyle Notebook Consultant

    Reputations:
    36
    Messages:
    100
    Likes Received:
    0
    Trophy Points:
    15
    Here is a link where you can compare different cards, and this link lets you compare cards and gaming performance.

    When you ask about the impact of hooking up to a 1080p monitor, what do you mean? Games, movies, standard applications? Games and 3D applications are the only ones where the cards would likely show lower performance, not movies or video.

    With DDR3/GDDR3, take the memory clock in MHz and multiply by 2 (DDR), divide the bus width by 8 to get bytes, then multiply the two together.

    So, an HD 5730M with 800 MHz memory and a 128-bit bus: 800x2 = 1600 (DDR), 128/8 = 16, 1600x16 = 25,600 MB/s = 25.6 GB/s.

    GDDR5 is the same, except you double the memory speed twice, not once as with DDR3/GDDR3. So my HD 5870M with 1000 MHz memory and a 128-bit bus: 1000x2 = 2000, 2000x2 = 4000 (effective), 128/8 = 16, 4000x16 = 64,000 MB/s = 64 GB/s.

    Throw into the mix also that the 5730 has 400 SPs and a 650 MHz core clock, while the 5870 has 800 SPs and a 700 MHz core clock.

    I think GDDR3/GDDR5 have the ability to instantly flush/dump their contents, which DDR3 does not. But for the most part, with the bus and the clocks, more is better, SP count and core clock being equal.
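    If you want to plug in your own numbers, here is the same arithmetic as a quick Python sketch (the function name and the GB/s conversion are just my own shorthand, nothing official):

    # Peak memory bandwidth from the base memory clock and bus width.
    def memory_bandwidth_gbps(clock_mhz, bus_bits, mem_type="DDR3"):
        # DDR3/GDDR3 transfer data twice per clock; GDDR5 doubles that again.
        multiplier = {"DDR3": 2, "GDDR3": 2, "GDDR5": 4}[mem_type]
        effective_mts = clock_mhz * multiplier   # effective rate in MT/s
        bus_bytes = bus_bits / 8                 # bus width in bytes
        return effective_mts * bus_bytes / 1000  # MB/s -> GB/s

    print(memory_bandwidth_gbps(800, 128, "DDR3"))    # HD 5730M: 25.6
    print(memory_bandwidth_gbps(1000, 128, "GDDR5"))  # HD 5870M: 64.0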
     
  3. kent1146

    kent1146 Notebook Prophet

    Reputations:
    2,354
    Messages:
    4,449
    Likes Received:
    476
    Trophy Points:
    151
    There are two questions on the table:

    (1) Does having higher bandwidth / faster VRAM have an impact on graphics performance?
    (2) How much impact does that have?
    (3) What does that mean to me when selecting a laptop GPU?



    Answers:

    (1) Yes, of course. GDDR5 is going to have higher bandwidth than GDDR3. And in bandwidth-intensive applications, GDDR5 will perform better than GDDR3 for that reason.

    (2) Hard to answer, because you don't get the option to choose what kind of VRAM gets packaged with a GPU. An nVidia GTX260/280-series card comes packaged with GDDR3. End of story. An nVidia GTX460/480-series card comes packaged with GDDR5. End of story. You cannot "choose" the kind of VRAM that goes with your GPU. It just comes with whatever it comes with.

    The closest you can come to measuring how much difference memory bandwidth makes is to overclock the memory clock on your GPU. It is the only way to get an apples-to-apples comparison, because you are making direct comparisons between two test subjects, where the only difference is memory clock (and therefore memory bandwidth).
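    As a back-of-the-envelope way to read such a test, here is a sketch with made-up numbers (purely illustrative, not from any real benchmark):

    # If a memory-only overclock raises bandwidth by some percentage, how
    # much of that gain actually shows up as frames per second?
    def bandwidth_scaling(fps_stock, fps_oc, mem_stock_mhz, mem_oc_mhz):
        bw_gain = mem_oc_mhz / mem_stock_mhz - 1  # bandwidth scales with clock
        fps_gain = fps_oc / fps_stock - 1
        return fps_gain / bw_gain

    # Hypothetical numbers: a +10% memory clock yielding +4% fps means the
    # game is only partially bandwidth-limited (scaling factor ~0.4).
    print(bandwidth_scaling(50.0, 52.0, 800, 880))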

    But that may not be relevant to the real-world. The impact of VRAM type (and memory bandwidth) will vary from one application/game to another, depending on how much that application/game depends on memory bandwidth. So there is no universal answer or single measurement / number that you can use that would apply to all situations.


    (3) What does this mean? Not much. When you pick a graphics platform for a laptop, you need to evaluate the performance of the graphics platform as a whole. Evaluating the performance of a single component (just the VRAM) within that graphics platform doesn't help you make a purchasing decision, especially since you don't have the freedom to swap out that single component (VRAM) even if you wanted to.
     
  4. spaghetticheese

    spaghetticheese Notebook Smasher

    Reputations:
    150
    Messages:
    747
    Likes Received:
    0
    Trophy Points:
    30
    Looks like 3 questions to me :)
     
  5. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    Well, I don't know about memory bandwidth or whether it refers to how many "bits" a card would be considered, but I do know that throughout history, we gauged how powerful a system was by how many bits it was referred to as.

    An example would be the video game consoles; each succeeding generation was more powerful and doubled the "bits", i.e. 16-bit < 32-bit < 64-bit < 128-bit, and so on and so forth.

    Anyway, this makes me confused. So if the GTX 460M has a 192-bit memory bus, would that make the card a 192-bit GPU?
     
  6. jeremyshaw

    jeremyshaw Big time Idiot

    Reputations:
    791
    Messages:
    3,210
    Likes Received:
    231
    Trophy Points:
    131
    don't use CPU architecture terms to describe GPU subarchitecture terminology. :p

    CPU 64-bit is talking about the overall execution/address width.

    GPU 192-bit is NORMALLY talking about the memory bus, which is tempered by the memory choice (i.e., 128-bit GDDR5 at 800 MHz = 256-bit GDDR3 at 800 MHz).
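    Running that example through the bandwidth math from earlier in the thread (a quick sketch):

    # Both configurations land at the same peak bandwidth (51.2 GB/s):
    gddr5 = 800 * 4 * (128 / 8) / 1000  # 800 MHz, x4 (GDDR5), 128-bit bus
    gddr3 = 800 * 2 * (256 / 8) / 1000  # 800 MHz, x2 (GDDR3), 256-bit bus
    print(gddr5, gddr3)                 # 51.2 51.2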
     
  7. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    Yeah, just read up on it. So there really isn't a 128-bit system yet, then?
     
  8. Dragauss

    Dragauss Notebook Geek

    Reputations:
    0
    Messages:
    80
    Likes Received:
    1
    Trophy Points:
    15
    Let me try to simplify this lol. I just had one question really.

    Simply: will having DDR3 on a 128-bit bus (25.6 GB/s bandwidth) affect performance at higher resolutions such as 1080p?
     
  9. Harleyquin07

    Harleyquin07 エミヤ

    Reputations:
    603
    Messages:
    3,376
    Likes Received:
    78
    Trophy Points:
    116
    Answer to the simple question: yes, in combination with the fact that most cards of this type have less inherent "horsepower" to render graphics at such high resolutions.
     
  10. Botsu

    Botsu Notebook Evangelist

    Reputations:
    105
    Messages:
    624
    Likes Received:
    0
    Trophy Points:
    30
    Yes. 128-bit + DDR3 is evil in general. Not so much on low-end cards, but the more potential the GPU has (roughly determined by the number of cores and clocks), the more low bandwidth will cripple its performance.
     
  11. Dragauss

    Dragauss Notebook Geek

    Reputations:
    0
    Messages:
    80
    Likes Received:
    1
    Trophy Points:
    15
    That was the same feeling I had. I was looking at the 540M, but most of them are paired with DDR3. Hopefully someone comes out with a GDDR5 version.
     
  12. jeremyshaw

    jeremyshaw Big time Idiot

    Reputations:
    791
    Messages:
    3,210
    Likes Received:
    231
    Trophy Points:
    131
    Depends; the FPUs on AMD K10.5 (Phenom II, Athlon II) CPUs are 128-bit... I don't know about the setup resources leading in, but at least the execution units are :p
     
  13. Zeptinune

    Zeptinune Notebook Evangelist

    Reputations:
    81
    Messages:
    310
    Likes Received:
    0
    Trophy Points:
    0
    The difference between DDR3 and GDDR5 graphics memory is not that high; you still have to take into account the max bandwidth and what-not. In most games you won't see a significant improvement. It would just be better to get a better card. As for the GT 540M, you probably won't really find any that are paired with GDDR5 memory. It's not cost/performance-effective compared with just using a better card.
     
  14. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Is it cost/performance-effective to pair the GT 540M with 1GB of GDDR3?
    No, because that will not increase its performance in the slightest, and the manufacturer spent extra on another 512MB of VRAM for nothing.

    Is it cost/performance-effective to pair the GT 540M with 512MB of GDDR5?
    Yes, because performance gains (at resolutions of 1366x768 or 1440x900) would be in the rough range of 35% to 50% due to the increased memory bandwidth (compared to GDDR3 performance at the same resolutions).
    Games at those resolutions would not be able to use more than 512MB of VRAM in any effective capacity.
    The GPU would still be limited by its core hardware, of course (number of shaders, etc.); however, the GDDR5 WOULD have a noticeable effect on performance.

    Therefore, the manufacturer would be better off putting 512MB of VRAM on the GPU, but making it GDDR5 for the increased bandwidth (which in terms of cost, at least to the manufacturer, probably comes out the same as putting 1GB of GDDR3 on the GPU).
    THEN you would be able to call those 'mid-range GPUs' (at least for 2010/2011).
    And high-performance GPUs would have more shaders, better clocks, double the bus width, and GDDR5 (so they can perform properly at higher resolutions than what I mentioned above).

    But this is capitalism/consumerism we're talking about.
    The manufacturer is not concerned about giving you the best. Their goal is to sell you less for more profit.
    1GB of VRAM on a mid-range mobile GPU is nothing more than a gimmick.
     
  15. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    The truth is, today's "mid-range" mobile GPUs don't even register on the desktop side.

    Low-end is now the 9800 GT or 5750, and the bare minimum would be something like the 9600 GT. The 540M and Mobility 5650 can't even defeat the latter, or its younger brother, the 192-bit 9600 GSO.

    This is a truth we don't like to state on NBR, but it is what it is.
     
  16. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    comparing laptop graphics to desktop graphics is a pretty critical mistake.

    it's a different market, and they have different ranges of what constitutes low-end, high-end.

    would you want to make the argument that there is no point talking about differences in smartphone graphics power, because all smartphones can't compare to desktop chips in pure power?

    laptops have lower resolutions overall than desktops, so they don't need as much horsepower on average, and they have much stricter constraints on size, power, and heat than desktops.

    smartphones have even more strict requirements than laptops. however, smartphones also have less horsepower requirements. my point is that there is a purpose to these mobile chips. they aren't meant to be compared to desktop chips because they have different critical criteria by which we compare them.

    there are new smartphone-class processors and graphics chips that are really impressive, and no one expects them to compete with desktops, and they are important anyway. Apple's A5, ARM, Motorola, Android phones...

    similarly, what if I could manufacture chips that had 4-5x the graphics power of an average desktop at the same cost as a desktop part - but the size was 30 ft x 30 ft x 90 ft? It may not be relevant to you anymore, because it wouldn't even be mobile enough to move inside your house. Mobility is becoming a critical deciding factor more and more. You have to factor it in when comparing graphics cards. A 9600 GT isn't really any more mobile than an ATI 6850, but they are both much less mobile than any laptop graphics card.
     
  17. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Desktop and Notebook graphics cards are in the same category, because they are a part of the same platform, and video game system requirements do not discriminate between the two. The same goes for CPUs. It's a critical mistake to not look at desktop performance, because that is where developers are polishing their games.

    When The Witcher 2's minimum requirement says "8800 GT/9800 GT", you'd better know that the GTX 260M is a 9800 GT, just downclocked, and that any Nvidia GPU lower than that is severely lacking. Sorry, but that's the truth.

    Remaining willfully ignorant of where mobile GPUs stand, in the grand scheme of PC gaming, does not help you one bit. All PC GPUs, whether mobile or desktop, are on the same chart of hierarchy.

    EDIT: Everything else you're talking about is nonsensical. Phones are their own closed platforms, with specific software tailored to them. They have zero bearing on this discussion.
     
  18. Zeptinune

    Zeptinune Notebook Evangelist

    Reputations:
    81
    Messages:
    310
    Likes Received:
    0
    Trophy Points:
    0
    Well, if that's true, then I suppose you can also explain why I have not found a single GT 540M with GDDR5 memory? I've looked around, especially on NotebookCheck; performance comparisons are minimal at best.
     
  19. Dragauss

    Dragauss Notebook Geek

    Reputations:
    0
    Messages:
    80
    Likes Received:
    1
    Trophy Points:
    15
    There probably won't be much of a difference in performance at 1366x768, which is what most people will be using it for. But hook it up to, say, a 1920x1200 monitor, and the slow memory might not do so hot. I haven't seen any recent benchmarks or the like showing what kind of performance loss there is. Right now it's just discussion.
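    For a rough sense of the jump, simple pixel arithmetic (nothing card-specific) shows how much more work the higher resolution is:

    # 1920x1200 pushes about 2.2x the pixels of 1366x768, so fill-rate and
    # bandwidth demands rise roughly in proportion.
    print((1920 * 1200) / (1366 * 768))  # ~2.2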
     
  20. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    It's really not the same category. Game requirements shift drastically across resolution. Notebook resolutions are usually somewhere between 1280x800 - 1600x900. Desktops pretty much start at 1920x1080 and go up. That's how I see it.

    Besides that, there really isn't much of a point of debate here, unless you are just complaining to the industry. At the end of the day, if you need a notebook and want to play games on it, your options will be dictated by industry offerings and your budget. I don't really see the point of going on about how laptop graphics are pointless because they are less powerful than desktop graphics. If it bothers you, don't buy a laptop. Did I miss something? Is there a debate here or not? I'm guessing not.

    I'm pretty sure we're arguing over whether or not it's pertinent to take the stance of complaining about how underpowered mobile graphics are compared to desktop graphics. I obviously agree mobile graphics are much less powerful. I do not see a point in complaining to anyone about it.
     
  21. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Who's complaining, and when did I use the word pointless? It's like you read nothing I typed.

    I said nothing which was factually incorrect, so yeah, I guess we're done here.

    Let's not hold any more discourse.
     
  22. Zeptinune

    Zeptinune Notebook Evangelist

    Reputations:
    81
    Messages:
    310
    Likes Received:
    0
    Trophy Points:
    0
    Chances are people aren't going to hook it up to an external monitor to game if they have bought a mid-range card... which is what retailers are thinking, which is why they don't care, which is why you should just get a desktop if you want to game on a big screen and a laptop if you are happy with 1366x768. Retailers won't put GDDR5 on a card, not even 512MB, because as someone else explained they don't care, and it's been proven that it won't really help people at their native res anyway.

    Oh, and by the way, just for argument's sake, I think desktops still start at 1680x1050.
     
  23. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    +1

    This is so true, and I'm glad you pointed this out. If both desktops and laptops play the same video game, you want to know where you stand as far as requirements are concerned. Not many people know the difference between the two. It shouldn't be this confusing. Great point, Kevin.
     
  24. Deks

    Deks Notebook Prophet

    Reputations:
    1,272
    Messages:
    5,201
    Likes Received:
    2,073
    Trophy Points:
    331
    Nvidia (like any big corporation) is greedy and wants to get as much money as possible from the consumer by betting on their lack of knowledge.
    They got away with rebranding exactly the same GPU for two years, merely increasing its clocks and shrinking the manufacturing process.

    You think notebooks are incapable of having a GPU comparable to a desktop one in terms of capabilities?
    Wrong.
    They can; the manufacturers, on the other hand, are not interested in doing that.
    If they did, they'd be stuffing desktop power/capabilities into a laptop of the same hardware generation, and they'd be losing profits in the desktop segment, because people are shifting more and more to the mobile segment.

    They artificially inflate prices of mobile components intentionally because of various aspects: brand name and mobility.

    This is capitalist/consumer based society.
    The manufacturers aren't being 'innovative' with what they put out into the market.
    They already developed it some time ago and are waiting for the right time to put it out and sell it, because if they put out the most powerful/advanced hardware in their arsenal now, they'd be shooting themselves in the foot from a financial point of view.
    They make much more money/profit from selling minor revisions and speed bumps of existing tech instead of giving you actual innovative leaps in technology.

    Nvidia is generally lagging behind AMD in the GPU sector, almost the way AMD is lagging behind Intel in the CPU department.
    To my knowledge, AMD's mid-range offerings are actually better than Nvidia's, because their cards pack in far more shaders, and overall performance is higher despite the lower clocks.
    Which puts them relatively ahead of Nvidia's mid-range mobile offerings at the same (or a lower) price tag.

    But Nvidia can get away with rebranding and minor revisions because they are big, well recognized, and the most widespread.
    AMD did it as well with their 6xxx-series GPUs, for the most part, if I'm not mistaken.
     
  25. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    Are we saying that game publishers should start noting the mobile part requirements separately instead of leaving us to figure them out? I'm for that.

    Other than that, I don't really think we disagree on anything. Yes, mobile parts are significantly less powerful. This is not a sudden development.

    The only thing I definitely disagree with is taking a stance like: "trying to do consumer research to get a decent mobile GPU is pointless because no matter what desktops are much better and all mobile graphics aren't nearly as good"

    The problem I have isn't with the fact (mobile GPU < desktop GPU), it's with the logic:

    (A < B, I need something from category A, but screw it because I know B exists)


    ---

    Here is a link to a nice chart that basically shows you where many of these cards fall in terms of performance. It's not complete, but it gives you a more holistic perspective than NotebookCheck.

    http://www.tomshardware.com/reviews...rd-geforce-gtx-590-radeon-hd-6990,2879-7.html