The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    9700M GT vs 9800M GS?

    Discussion in 'Gaming (Software and Graphics Cards)' started by euisungkim, Jul 23, 2009.

  1. euisungkim

    euisungkim Notebook Deity

    Reputations:
    203
    Messages:
    714
    Likes Received:
    0
    Trophy Points:
    30
Okay, I'm buying a laptop right now with a price range of $655~$720.

I'm thinking of playing games and stuff. Will there be a HUGE difference in games?
     
  2. Gophn

    Gophn NBR Resident Assistant

    Reputations:
    4,843
    Messages:
    15,707
    Likes Received:
    3
    Trophy Points:
    456
    Welcome to the NBR forums. :)

    extremely huge difference.

    9700M GT is mid-range (128-bit memory interface)

    9800M GS is high-end (256-bit memory interface)

A high-end card will easily outperform a mid-range card, often by more than TWICE as much.

    P.S. you should really read the GPU Guide Sticky before posting.
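For context on why the bus width matters so much here: peak memory bandwidth is just bytes moved per transfer times the effective memory clock, so at the same memory speed a 256-bit bus moves exactly twice the data of a 128-bit one. A minimal sketch (the 1600 MHz figure is illustrative, not the exact spec of either card):

```python
def memory_bandwidth_gbps(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: (bytes per transfer) x (transfers per second)."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

# Illustrative effective memory clock, same for both buses:
print(memory_bandwidth_gbps(128, 1600))  # 25.6 GB/s on a 128-bit bus
print(memory_bandwidth_gbps(256, 1600))  # 51.2 GB/s on a 256-bit bus
```

Real cards pair the wider bus with faster GPUs too, so the in-game gap is not purely the bus, but the 2x bandwidth headroom is what lets the high-end card keep scaling at high resolutions.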
     
  3. euisungkim

    euisungkim Notebook Deity

    Reputations:
    203
    Messages:
    714
    Likes Received:
    0
    Trophy Points:
    30
Hello Gophn! Sorry about not reading the GPU Guide; it's my first time.
Anyway, thank you for the information.
So what you're saying is, I can get twice the FPS if I get the 9800M GS instead of the 9700M GT?
     
  4. Gophn

    Gophn NBR Resident Assistant

    Reputations:
    4,843
    Messages:
    15,707
    Likes Received:
    3
    Trophy Points:
    456
Correct.

The performance difference grows even larger as you increase the resolution, since mid-range cards cannot drive graphics at high resolutions as well as high-end cards can.
     
  5. euisungkim

    euisungkim Notebook Deity

    Reputations:
    203
    Messages:
    714
    Likes Received:
    0
    Trophy Points:
    30
How about the CPU? Does it make a difference?
One laptop I'm looking at is $700, and the other is $730.
The cheaper one has an Intel Core 2 Duo P7450 (2.13 GHz),
and the other has an Intel Core 2 Duo P8400 (2.26 GHz).



    edit: ASUS G50VT-X5 and ASUS G50VT-X1.
     
  6. Peter Bazooka

    Peter Bazooka Notebook Evangelist

    Reputations:
    109
    Messages:
    642
    Likes Received:
    1
    Trophy Points:
    31
The CPU makes only a little difference in most games, but can make a huge difference in a handful of them (Supreme Commander is one example). That's usually single vs. dual core or dual vs. quad core, though; pure clock speed is usually the most important factor. Until now I had never heard of the P7450, but looking at Intel's site there is almost no difference between the processors (same cache, same FSB, same TDP, same socket, both 45 nm), and in 99% of games the difference would be negligible. Unless there is a misprint, the P8400 has Intel® Virtualization Technology while the other doesn't, but I don't even know what the hell that is :)

In conclusion, I don't think it matters in this case; it just comes down to whether you have an extra $30 or not...
     
  7. Delta_CT

    Delta_CT Notebook Evangelist

    Reputations:
    102
    Messages:
    636
    Likes Received:
    0
    Trophy Points:
    30
GTA IV is supposedly another extremely CPU-intensive game.
     
  8. Red_Dragon

    Red_Dragon Notebook Nobel Laureate

    Reputations:
    2,017
    Messages:
    7,251
    Likes Received:
    0
    Trophy Points:
    205
I think that gap between 128-bit and 256-bit is closing. The 4860 is 128-bit, but they say its performance is better than the 4850's.
     
  9. Gophn

    Gophn NBR Resident Assistant

    Reputations:
    4,843
    Messages:
    15,707
    Likes Received:
    3
    Trophy Points:
    456
The performance gap is closing at average resolutions... not at high resolutions, though.
     
  10. neilnat

    neilnat Notebook Evangelist

    Reputations:
    255
    Messages:
    655
    Likes Received:
    0
    Trophy Points:
    30
    First, the gap is only closing if you compare new, best-in-class 128 bit cards with last-gen worst-in-class 256-bit cards. And, to repeat, only at lower resolutions.

    About the 4860... 2 things...

    (1) It hasn't been released yet after months of talk

(2) When GDDR5 is used on more mobile cards, there will still be a large gap between 128-bit and 256-bit; it's just that the new 128-bit cards will equal the old 256-bit cards...

    Theoretically, the 4860 should perform similar to and maybe a little better than a 4850. We'll see when it comes out.
     
  11. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
The HD 4860 is the only 128-bit card that can perform at 256-bit card levels, because GDDR5 is quad-pumped vs. GDDR3, which is double-pumped (i.e., 4 vs. 2 32-bit words per clock cycle), effectively doubling the bus width.
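The quad-pumped point can be checked with a little arithmetic: treat peak bandwidth as bus width in bytes times base clock times transfers per clock. With round illustrative clock figures (not real card specs), a 128-bit GDDR5 bus at the same base clock lands exactly where a 256-bit GDDR3 bus does:

```python
def bandwidth_gbps(bus_width_bits: int, base_clock_mhz: float, transfers_per_clock: int) -> float:
    """Peak bandwidth in GB/s from bus width, base memory clock, and pump factor."""
    return bus_width_bits / 8 * base_clock_mhz * transfers_per_clock * 1e6 / 1e9

gddr3_256bit = bandwidth_gbps(256, 1000, 2)  # double-pumped: 64.0 GB/s
gddr5_128bit = bandwidth_gbps(128, 1000, 4)  # quad-pumped:   64.0 GB/s
print(gddr3_256bit, gddr5_128bit)
```

That is, halving the bus width while doubling the transfers per clock is a wash for peak bandwidth, which is exactly why a GDDR5 128-bit card can behave like a GDDR3 256-bit one.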
     
  12. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
GDDR5 128-bit GPUs haven't yet appeared in the mobile world, though, IIRC, so expectations for them are based only on results we've seen in the desktop world. And as said before, they'll lose out at higher resolutions (although I do believe GDDR5 128-bit GPUs can push a relatively high resolution before hitting problems).

It all depends what resolution you're playing at, honestly. A 9700M GTS isn't that far ahead of, say, an HD 4670 at lower resolutions.
     
  13. KernalPanic

    KernalPanic White Knight

    Reputations:
    2,125
    Messages:
    1,934
    Likes Received:
    130
    Trophy Points:
    81
Well, people get so caught up in the 128-bit vs. 256-bit thing... this is simply the memory interface and how many lanes it has. Bigger is generally better, but it is NOT always what determines which card is faster.

Generally speaking, GPU makers make this decision for you, and in general the cards chosen for 256-bit memory interfaces are the ones that can actually USE them...

A narrow interface DOES bottleneck a powerful-enough card, and thus finding a wide interface is an indication the card is designed for more...

One example: the 8700M GT was 128-bit and eventually outshone its predecessor, the 7950 GTX (a 256-bit card), even in DX9 titles once they got shader-intensive.
Note, this was due to technology: the newer shaders in the 8700M GT, once they had proper drivers, blew the doors off the older-tech 7xxx-series shaders, even the best that series could offer.

The previous posters ARE correct that the 9700M GT is no match for the 9800M GS, however. Note, please, that the 9800M GS also has TWICE the shader units...
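The shader-count point compounds with bandwidth: theoretical shader throughput scales linearly with unit count at a fixed shader clock, so doubling the units roughly doubles the peak arithmetic rate. A hedged sketch (unit counts, clock, and the 2-FLOPs-per-clock multiply-add assumption are illustrative, and real games rarely scale perfectly):

```python
def peak_gflops(shader_units: int, shader_clock_mhz: float, flops_per_unit_per_clock: int = 2) -> float:
    """Theoretical peak GFLOPS: units x clock x FLOPs per unit per clock (2 for a multiply-add)."""
    return shader_units * shader_clock_mhz * flops_per_unit_per_clock / 1000

half = peak_gflops(32, 1250)  # e.g. 32 units at 1250 MHz -> 80.0 GFLOPS
full = peak_gflops(64, 1250)  # doubling the units doubles the peak -> 160.0 GFLOPS
print(half, full)
```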
     
  14. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
Yeah, it's like the misconception that "128-bit GPUs can't use more than 256 MB of VRAM". It's based on generalities about the GPUs that normally accompany certain bus widths. I mean, it's not "false", but it's not an absolute either.
     
  15. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
128-bit GPUs can, but rarely do, use more than 256 MB. The difference in performance between 256 MB and 1 GB is negligible in most situations; that's why the amount of memory isn't a significant factor in determining a GPU's strength.
     
  16. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
Well, experimentally they usually don't (like 99% of the time), so it's been agreed that they "can't", but not because of some magical reason associated with the bus width. It's because VRAM is most advantageous when gaming at high resolutions, something which mid-class GPUs (those with smaller bus widths) normally can't handle, period (due to GPU strength), hence the extra VRAM is "wasted".

VRAM should never be used as a determining factor amongst most GPUs (the rare exception being high-end GPUs, if you're an avid fan of high resolutions). I could stick 50 GB of VRAM on an HD 4500 and it'd still suck lol :p
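To see why resolution, not the bus width itself, is what makes extra VRAM useful, here is a rough back-of-the-envelope framebuffer estimate (32-bit color, front + back + depth buffers only; texture and AA sample storage, which dominate in practice, scale with resolution the same way):

```python
def framebuffer_mib(width: int, height: int, bytes_per_pixel: int = 4, buffers: int = 3) -> float:
    """Rough framebuffer footprint in MiB: front + back + depth buffers, no textures or AA."""
    return width * height * bytes_per_pixel * buffers / 1024**2

print(framebuffer_mib(1280, 800))   # roughly 11.7 MiB at a typical laptop resolution
print(framebuffer_mib(1920, 1200))  # roughly 26.4 MiB at a high resolution
```

A card too weak to render playably at 1920x1200 never gets to exercise that extra footprint, which is exactly why large VRAM on a mid-range GPU tends to sit idle.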
     
  17. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    That's what I said...
     
  18. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
'Course it is; I was just rehashing with more specific info, since you stuck to the premise. Don't get mad >.<
     
  19. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
I'm not mad, I was just confused, since I didn't state that "128-bit GPUs can't use more than 256 MB of VRAM" (is that the premise you were referring to?).
     
  20. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
Oh, I wasn't trying to contradict you, merely to supplement what you'd already said with a more in-depth explanation :D

Fact of the matter is that GPUs get better generation after generation. Even now, a 64-bit card with GDDR3 like the HD 4570 performs very well for its class.
     
  21. Red_Dragon

    Red_Dragon Notebook Nobel Laureate

    Reputations:
    2,017
    Messages:
    7,251
    Likes Received:
    0
    Trophy Points:
    205
    I see what you are saying, but this chart sort of contradicts that.

[IMG: 3DMark comparison chart, no longer available]

    It shows that at 1900 x 1200 it is still stronger.

Basically, it's showing around 9,000 3DMarks for the 4850 and something like 11,000 for the 4860. That's what I think I'm going to get to replace my 9800 GTS. I have been monitoring it closely.
     
  22. neilnat

    neilnat Notebook Evangelist

    Reputations:
    255
    Messages:
    655
    Likes Received:
    0
    Trophy Points:
    30
That chart contradicts nothing I'm trying to say. When I mention lower resolutions, I'm only talking about current 128-bit cards, and the 4860 is still MIA. As for the GDDR5 cards, read my point number 2 again: when GDDR5 actually hits, I agree it will effectively turn 128-bit cards into 256-bit ones, but it will also turn 256-bit cards into true monsters. My last sentence DID say that the 4860 is supposed to perform better than the 4850... personally, it wouldn't surprise me if it went either way once we see them in laptops. I still theorize that Nvidia meant the 9800 GTS to be better than the 9800 GT, since that is usually their naming convention.
     
  23. Melody

    Melody How's It Made Addict

    Reputations:
    3,635
    Messages:
    4,174
    Likes Received:
    419
    Trophy Points:
    151
The easiest way to look at it is through the desktop world. The GDDR5 128-bit HD 4770 can perform on par with the GDDR3 256-bit HD 4850, but it's still not able to compete with the GDDR5 256-bit HD 4870.

However, GDDR5 will allow for more powerful mid-range GPUs, and it costs less than increasing the bus size. By the time it's fully implemented, and if it's done well, "mid-range 128-bit GPUs" will be worlds ahead of where they are now :D