The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums would be preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    yet another GTX480M performance thread

    Discussion in 'Gaming (Software and Graphics Cards)' started by Marin85, Jun 12, 2010.

  1. Marin85

    Marin85 Notebook Consultant

    Reputations:
    49
    Messages:
    125
    Likes Received:
    0
    Trophy Points:
    30
    Hi,

    As I am sort of a fan of CUDA and GPGPU, I was wondering: what is the expected performance of the upcoming GTX 480M in terms of TFLOPS? A single Mobility Radeon HD 5870 supposedly hits a raw processing performance of 1.1-1.2 TFLOPS (according to Wikipedia), which is pretty impressive, and even more so in a CrossFire setup. Again according to Wikipedia, the GTX 480M reaches 'only' half of that, ~0.6 TFLOPS, but I have my doubts that anyone has already benchmarked this card so thoroughly... Can these figures be true? If they are, that is kind of sad, because then a single 5870 would yield the same throughput as two GTX 480M cards in SLI while being much more power-efficient at the same time. What is more, a CF setup would still consume less power and deliver almost double the raw processing power of an SLI setup...

    (All figures are for single-precision arithmetic.)
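    For what it's worth, those Wikipedia numbers are theoretical peaks, and you can reproduce them from the published shader counts and clocks. Here's a quick sketch; the formula is the standard shaders x shader clock x 2 FLOPs/cycle (one multiply-add per shader per cycle), and the specs are the commonly reported ones, not anything I've measured:

    ```python
    # Theoretical peak single-precision throughput, not a benchmark:
    # shaders * shader clock (GHz) * 2 FLOPs/cycle (one multiply-add).
    def peak_tflops(shaders, shader_clock_ghz):
        return shaders * shader_clock_ghz * 2 / 1000.0

    # Mobility Radeon HD 5870: 800 stream processors at 700 MHz
    print(peak_tflops(800, 0.700))   # ~1.12 TFLOPS

    # GTX 480M: 352 CUDA cores at an 850 MHz shader ("hot") clock
    print(peak_tflops(352, 0.850))   # ~0.60 TFLOPS
    ```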
     
  2. ziddy123

    ziddy123 Notebook Virtuoso

    Reputations:
    954
    Messages:
    2,805
    Likes Received:
    1
    Trophy Points:
    0
    It would seem the shader cores of Nvidia and ATi are not equivalent; they work differently.

    But it would seem ATi is better at DirectCompute. There are benchmarks showing even the desktop HD 5870 has more than 5x the DirectCompute power of the GTX 480, and that's with two different benchmark programs I've seen, both of them showing the ATi equivalents just crushing Nvidia. The gap is even bigger than the one between Nvidia and ATi in tessellation.
    - This applies to OpenCL also. ATi just demolishes Nvidia.

    Nvidia has shown they are better at Tessellation.

    But games/applications have yet to truly tap the power of GPGPU. When they do, I think ATi will shine.

    http://www.ngohq.com/home.php?page=Files&go=cat&dwn_cat_id=25&go=giveme&dwn_id=937

    http://www.tomshardware.com/reviews/radeon-hd-5870,2422-7.html
    - IMO Nvidia made a huge mistake. They thought tessellation would be the big DX11 feature; IMO it's not. I don't care about it, and I think few developers will either. It's DirectCompute that is the mother lode for DX11.

    Of course, the problem for ATi is that they always seem to guess right and provide relevant technology to improve everyone's gameplay, but then fail to get developers on board to really utilize their technological offerings. I think that's starting to change. As we can already see, per watt, AvP, BC2 and Dirt 2 run substantially better on ATi than on Nvidia. To get the same performance, you need a lot more Nvidia resources.
     
  3. mobius1aic

    mobius1aic Notebook Deity NBR Reviewer

    Reputations:
    240
    Messages:
    957
    Likes Received:
    0
    Trophy Points:
    30
    What ziddy123 said in his post reminds me of the ol' GeForce 7900 vs Radeon X1900 XT fight. While games of that era ran better on the 7900, games these days run better on the X1900 thanks to its very high pixel shader capability. Sure, both are far-outdated GPUs, but it shows ATi was thinking ahead.
     
  4. hakira

    hakira <3 xkcd

    Reputations:
    957
    Messages:
    1,286
    Likes Received:
    0
    Trophy Points:
    55
    Yeah, but playing today's games on either of those will still mean you're restricted to low details ;p Really, I think the difference lies in the drivers a lot of the time. ATI is notorious for poor driver support right after a new release, but after a year or so they seem to catch up and stabilize.
     
  5. ziddy123

    ziddy123 Notebook Virtuoso

    Reputations:
    954
    Messages:
    2,805
    Likes Received:
    1
    Trophy Points:
    0
    Wow, you really have no idea what you're talking about. Except for Metro 2033, both the HD 5870M and the GTX 480M can play current games at the highest details.
     
  6. EchoShade

    EchoShade Notebook Evangelist

    Reputations:
    97
    Messages:
    371
    Likes Received:
    0
    Trophy Points:
    30
    I think he was talking about the 7900 and the X1900 XT.
     
  7. Megol

    Megol Notebook Evangelist

    Reputations:
    114
    Messages:
    579
    Likes Received:
    80
    Trophy Points:
    41
    I think you should re-read the thread instead of insulting someone who's right. The post was about the GeForce 7900 and the Radeon X1900 XT.
     
  8. ziddy123

    ziddy123 Notebook Virtuoso

    Reputations:
    954
    Messages:
    2,805
    Likes Received:
    1
    Trophy Points:
    0
    You should re-read the thread before posting. This thread is about the GTX 480M vs the HD 5870 in terms of computational power and use of GPGPU/DirectCompute/OpenCL.

    The OP never mentions a 7900 or an X1900 XT, ever...
     
  9. Manic Penguins

    Manic Penguins [+[ ]=]

    Reputations:
    777
    Messages:
    1,493
    Likes Received:
    0
    Trophy Points:
    55
    Ziddy, if you're really that daft, here is the post hakira was referring to, the one directly above his.
     
  10. ziddy123

    ziddy123 Notebook Virtuoso

    Reputations:
    954
    Messages:
    2,805
    Likes Received:
    1
    Trophy Points:
    0
    I am not daft. A thread and a post are two distinct things.
     
  11. Botsu

    Botsu Notebook Evangelist

    Reputations:
    105
    Messages:
    624
    Likes Received:
    0
    Trophy Points:
    30
    Whatever. Your reply was beside the point of the person you quoted, so you fail. Is it that hard to say "whoops, I was wrong / read too fast"?
     
  12. sean473

    sean473 Notebook Prophet

    Reputations:
    613
    Messages:
    6,705
    Likes Received:
    0
    Trophy Points:
    0
    Calm down people and be more clear... anyways, back to topic. IMO the GTX 480M kills itself: lacking computing power and a massive TDP. Also, I still haven't seen any solid performance benchmarks for the 480M, but after my experiences with NVIDIA, I'm getting ATI even if it is 10% lousier.
     
  13. Magnus72

    Magnus72 Notebook Virtuoso

    Reputations:
    1,136
    Messages:
    2,903
    Likes Received:
    0
    Trophy Points:
    55
    Well, why not wait for some real benchmarks before saying the GTX 480M kills itself? So far I haven't seen any real benchmarks, so I'll just wait until some user gets hold of one of these and benchmarks it.
     
  14. EchoShade

    EchoShade Notebook Evangelist

    Reputations:
    97
    Messages:
    371
    Likes Received:
    0
    Trophy Points:
    30
    Well, 100 watts is pretty substantial. I can only imagine an hour of battery life max on that thing, considering that the 2-watt difference between an HDD and an SSD can make or break 20 minutes.
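    Rough arithmetic behind that guess; the battery size and the non-GPU draw here are my assumptions, not figures from any review:

    ```python
    # Hypothetical numbers for a big 17" gaming notebook: an ~87 Wh pack,
    # the GPU's 100 W TDP at full load, plus an assumed ~40 W for the rest.
    battery_wh = 87
    draw_w = 100 + 40
    print(battery_wh / draw_w * 60)   # ~37 minutes of gaming on battery
    ```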
     
  15. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    In a 17" machine, battery life is a distant secondary consideration, so I don't see why people are so uptight about the power consumption.
     
  16. 5150Joker

    5150Joker Tech|Inferno

    Reputations:
    4,974
    Messages:
    7,036
    Likes Received:
    113
    Trophy Points:
    231
    If 480M SLI's GPU score (note I said GPU, since Vantage's overall score is still heavily influenced by the CPU) can beat my score (see sig) by 20-30%, I'll be impressed. I somehow doubt that will happen, though... very much so.
     
  17. hakira

    hakira <3 xkcd

    Reputations:
    957
    Messages:
    1,286
    Likes Received:
    0
    Trophy Points:
    55
    Aside from the obvious huge drain on battery life, power means heat, and heat means bad. Hell, even a Sager with a 5870 can get up to 80-85C right now while not being the quietest thing out there, so what will double the power draw create?

    I think what people are missing is that when it comes down to it, will you give a **** which is better in a year? If you intend to hang on to whatever you buy for 2+ years, the answer is no: you'll want whatever is the most stable and efficient card that can still run games decently without frying eggs. If you intend to buy a new lappy every year or so like some people here do, the answer is yes, you want a 5870, because nobody is going to want a first-gen Fermi with all its inevitable problems and exorbitant cost when you go to resell your lappy. DX11 and all the DirectCompute/OpenCL stuff is only just making it into bleeding-edge games now; by the time it's all common and mainstream, both the 480M and the 5870 will struggle to run things.

    All that aside, I think Nvidia's "huge mistake" is pricing Fermi so high. If it costs even $100 more than a 5870 (it'll be closer to $300 more), you are quite literally paying $20-60 for every 1 FPS gained. I doubt they'll even sell enough 480s to cover production and marketing costs, and they'll either be forced to reduce the price or do what Nvidia does best and just rebadge Fermi over and over again for the next 4 years to save money.
     
  18. Quadzilla

    Quadzilla The eye is watching you

    Reputations:
    7,074
    Messages:
    8,376
    Likes Received:
    94
    Trophy Points:
    216
  19. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    To get the best, you will always pay a massive premium. TDP figures are misleading since they are measured differently by ATI and nVidia; ATI's figure, rated the way nVidia rates theirs, would be closer to 75-80W vs. the GTX 480M's 100W. But cooling is another topic altogether...
     
  20. f4ding

    f4ding Laptop Owner

    Reputations:
    261
    Messages:
    2,085
    Likes Received:
    0
    Trophy Points:
    55
    It depends on whether you're a developer or an end-user. As an end-user running programs that take advantage of the GPU, you're better off with Nvidia GPUs. Stop worrying about TFLOPS: while Nvidia GPUs lose on raw FLOPS, their memory operations and the like are better than ATI/AMD's implementation. Not to mention drivers are generally better on the Nvidia side. Add a mature CUDA SDK, and Nvidia-optimized programs are everywhere; F@H is one easy example.

    If you're a developer, sure, go ahead and get ATI/AMD cards, as OpenCL is maturing now. You can get better performance from ATI/AMD, but you have to optimize for it yourself (for example, ATI/AMD is better at FLOPS but slower at memory access, so make your program access memory as little as possible; optimize your kernel to do that). But note that with Nvidia you get both CUDA (more advanced capabilities than OpenCL right now) and OpenCL.
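    As a concrete illustration of the kind of kernel-level tuning f4ding is talking about, here is a minimal OpenCL sketch. I'm using Python's pyopencl wrapper for the host side and SAXPY as a stand-in workload; both are my choices, not something from the thread. The point is that the kernel touches each element of x and y exactly once, which is the memory-access-minimizing shape you'd aim for, and the same kernel source runs unchanged on both ATI/AMD and Nvidia devices:

    ```python
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # SAXPY: y = a*x + y. Each work-item reads x once and reads/writes y
    # once, so global memory traffic is already minimal for this operation.
    kernel_src = """
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y)
    {
        int i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }
    """
    program = cl.Program(ctx, kernel_src).build()

    n = 1 << 20
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    mf = cl.mem_flags
    x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
    y_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=y)

    program.saxpy(queue, (n,), None, np.float32(2.0), x_buf, y_buf)
    cl.enqueue_copy(queue, y, y_buf)  # copy the result back to the host
    ```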
     
  21. anothergeek

    anothergeek Equivocally Nerdy

    Reputations:
    668
    Messages:
    1,874
    Likes Received:
    0
    Trophy Points:
    55
    These are high-end cards, and max settings need to be taken into consideration. You'll be running nothing but 1080p with the 480M, and that's where the memory bandwidth advantage takes the lead. I'm much more interested in Extreme Vantage scores, for example; all we see are Performance-preset scores. Vista is dead, Vantage is already aging, and 1280x1024 is no longer enough. That's why CPUs have a bit too much bearing on scores once you get to CF 5870. If ATi is one step ahead of Nvidia (and they're not), then I'm way ahead of the curve.
     
  22. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    The memory bandwidth advantage of the 480M is only 20%, though, which isn't much to speak of. Based on what I've seen of Fermi, it's not really going to increase the 480M's lead at 1080p much. In fact, in most of the benchmarks I've seen, ATI's cards actually gain on higher-bandwidth Fermi cards as the resolution goes up, e.g. with the desktop 5870 vs the 480.
     
  23. anothergeek

    anothergeek Equivocally Nerdy

    Reputations:
    668
    Messages:
    1,874
    Likes Received:
    0
    Trophy Points:
    55
    Except the Mobility 5870 isn't a desktop 5870; it's really a desktop 5770. ATi's Cypress architecture may handle increased resolutions relatively well, but don't forget the mobile 5870 is making do with a 128-bit bus.

    The reason the 480M has only 20% more bandwidth is its significantly lower memory clock (600 MHz vs 1000 MHz). Nvidia dropped clock speeds heavily to fit within the power constraints. This card could be an overclocker's dream. It's like a Lamborghini running on 8 cylinders...
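    For reference, the 20% figure falls straight out of those bus widths and memory clocks, since GDDR5 moves 4 bits per pin per clock; a quick check:

    ```python
    # Peak memory bandwidth in GB/s: bus width (bits) * memory clock (MHz)
    # * 4 transfers/clock (GDDR5), divided by 8 bits/byte and 1000 MB/GB.
    def gddr5_bandwidth_gbs(bus_bits, mem_clock_mhz):
        return bus_bits * mem_clock_mhz * 4 / 8 / 1000

    gtx480m = gddr5_bandwidth_gbs(256, 600)    # 76.8 GB/s
    hd5870m = gddr5_bandwidth_gbs(128, 1000)   # 64.0 GB/s
    print(gtx480m / hd5870m)                   # 1.2 -> the 20% advantage
    ```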
     
  24. Althernai

    Althernai Notebook Virtuoso

    Reputations:
    919
    Messages:
    2,233
    Likes Received:
    98
    Trophy Points:
    66
    It could, but it is also quite possible that overclocking it will be difficult. Even if we ignore the heat, where is the extra energy going to come from? Are the already hefty power bricks going to be overspecced? How much can the MXM slot sustain (100W is already outside the specifications)?
     
  25. hakira

    hakira <3 xkcd

    Reputations:
    957
    Messages:
    1,286
    Likes Received:
    0
    Trophy Points:
    55
    from notebookcheck

    So it leaves me a little confused here... is 100W the MINIMUM power required, with it spiking to 130W under load? And people say ATI isn't quite accurate with their TDP calcs, heh. The way I'm reading this, the 480M uses exactly 2x the power of a 5870 no matter what you are doing; you would need an adapter capable of feeding SLI/CF to run even a single 480M. Unless they intend on shipping 300-350W PSUs with 480Ms, you won't be overclocking/overvolting this thing anytime soon.
     
  26. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    The example I put forward was to demonstrate that an advantage in memory bandwidth doesn't necessarily translate into better performance scaling at high resolutions. Sure, because those two desktop cards are very different, it's not proof that this will be the case for the 480M, but it does show that you shouldn't be too quick to make assumptions like yours.

    As for the memory clocks, you might be able to get a solid boost, but I've read that Nvidia has had difficulty with GDDR5; consider that their highest-end card, the desktop GTX 480, still only runs its memory at 924MHz, while AMD has been reaching speeds of 1200MHz.

    As such, if we take overclocking into account as well, it's possible that the bandwidth advantage might rise to as much as 50%. However, once again, I'm doubtful how much of a performance gain you will see without a shader overclock, and I'm not sure how much you'll be able to overclock the 480M given its thermal limitations.

    Besides that, there are already benchmarks suggesting the 480M doesn't have much of a lead over the 5870 at high settings and resolutions. I'm holding out until I see the two cards tested in exactly the same setup, though.
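    One way to arrive at that ~50% ceiling (my reconstruction; the post doesn't spell it out): assume each card's memory could be pushed to the fastest GDDR5 speed its vendor has demonstrated, i.e. the 924 MHz and 1200 MHz mentioned above:

    ```python
    def gddr5_bandwidth_gbs(bus_bits, mem_clock_mhz):
        # bus width * clock * 4 transfers/clock (GDDR5), in GB/s
        return bus_bits * mem_clock_mhz * 4 / 8 / 1000

    # Hypothetical best-case memory overclocks, per the vendor speeds above
    gtx480m_oc = gddr5_bandwidth_gbs(256, 924)    # ~118.3 GB/s
    hd5870m_oc = gddr5_bandwidth_gbs(128, 1200)   # ~76.8 GB/s
    print(gtx480m_oc / hd5870m_oc)                # ~1.54 -> roughly 50%
    ```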
     
  27. notyou

    notyou Notebook Deity

    Reputations:
    652
    Messages:
    1,562
    Likes Received:
    0
    Trophy Points:
    55
    It likely won't get much of a boost, because Nvidia's GDDR5 controller is crap. That's why they're forced to run it at such a low speed.