Hi,
As something of a fan of CUDA and GPGPU, I was wondering what the expected performance of the upcoming GTX 480M is in terms of TFLOPS. A single Mobility Radeon 5870 supposedly hits a raw processing performance of 1.1-1.2 TFLOPS (according to Wikipedia), which is pretty impressive, and even more so in a CrossFire setup. Again according to Wikipedia, the GTX 480M reaches 'only' half of that, ~0.6 TFLOPS, but I have my doubts that anyone has already benchmarked this card so thoroughly... Can these figures be true? If they are, that's kind of sad, because then a single 5870 yields the same throughput as two GTX 480M cards in SLI, while being much more power-efficient at the same time. What's more, a CrossFire setup would still consume less power and deliver almost double the raw processing power of an SLI setup...
(All figures are for single-precision arithmetic.)
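Those figures do line up with the usual back-of-the-envelope peak: ALUs × shader clock × 2 (one fused multiply-add, i.e. 2 flops, per ALU per cycle). A quick sanity check in Python, assuming the commonly quoted shader counts and clocks (352 CUDA cores at 850 MHz for the 480M, 800 stream processors at 700 MHz for the Mobility 5870 - spec-sheet numbers, not measurements):

```python
# Theoretical peak single-precision throughput: ALUs * shader clock * 2 (FMA = 2 flops/cycle).
# The specs plugged in below are commonly quoted figures, not measured results.

def peak_tflops(alus, shader_clock_mhz):
    """Peak single-precision TFLOPS, assuming one FMA (2 flops) per ALU per cycle."""
    return alus * shader_clock_mhz * 1e6 * 2 / 1e12

gtx_480m = peak_tflops(352, 850)   # ~0.60 TFLOPS
mob_5870 = peak_tflops(800, 700)   # ~1.12 TFLOPS

print(f"GTX 480M:      {gtx_480m:.2f} TFLOPS")
print(f"Mobility 5870: {mob_5870:.2f} TFLOPS")
```

So the ~0.6 vs ~1.1-1.2 TFLOPS numbers are just the theoretical peaks, not the result of anyone actually benchmarking the card.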
-
It would seem the shader cores of Nvidia and ATi are not equivalent; they work differently.
But ATi appears to be better at DirectCompute: some benchmarks show even the desktop HD 5870 with more than 5x the DirectCompute performance of the GTX 480. And that's across two different benchmark programs I've seen, both of them showing the ATi equivalents just crushing Nvidia. A wider gap than Nvidia's tessellation lead over ATi.
- This applies to OpenCL as well. ATI just demolishes Nvidia.
Nvidia has shown they are better at Tessellation.
But games/applications have yet to truly take advantage of the power of GPGPU. When they do, I think ATi will shine.
http://www.ngohq.com/home.php?page=Files&go=cat&dwn_cat_id=25&go=giveme&dwn_id=937
http://www.tomshardware.com/reviews/radeon-hd-5870,2422-7.html
- IMO Nvidia made a huge mistake. They thought Tessellation would be *the* DX11 feature; IMO it's not. I don't care about it, and I think few developers will either. It's DirectCompute that is the mother lode of DX11.
-
mobius1aic (NBR Reviewer):
What ziddy123 quoted in his post reminds me of the ol' GeForce 7900 vs Radeon 1900 XT fight. While games of that era ran better on the 7900, games these days run better on the Radeon 1900 thanks to its very high pixel shader capability. Sure, both are far-outdated GPUs, but it shows ATi was thinking ahead.
-
Yeah, but playing today's games on either of those will still mean you're restricted to low details ;p Really, I think the difference lies in the drivers a lot of the time. ATI is notorious for poor driver support right after a new release, but after a year or so they seem to catch up and stabilize.
-
-
I think he was talking about the 7900 and the 1900 XT.
-
-
The OP never ever mentions a 7900 or a 1900 XT, ever... -
-
-
Whatever. Your reply was beside the point of the person you quoted, so you fail. Is it that hard to say "whoops, I was wrong / read too fast"?
-
Calm down, people, and be more clear... Anyway, back on topic: IMO the GTX 480M kills itself - lacking compute power and a massive TDP. Also, I still haven't seen any solid performance benchmarks of the 480M... but after my experiences with Nvidia, I'm getting ATI even if it is 10% lousier.
-
Well, why not wait for some real benchmarks before saying the GTX 480M kills itself? So far I haven't seen any real benchmarks, so I'll just wait until some user gets hold of one of these and benchmarks it.
-
Well, 100 watts is pretty substantial. I can only imagine an hour of battery life max on that thing, considering that a 2-watt difference between an HDD and an SSD can make or break 20 minutes.
-
In a 17" machine, battery life is a distant secondary consideration so I don't see why people are so uptight about the power consumption.
-
If 480M SLI's GPU score (note I said GPU score, since Vantage's overall score is still heavily influenced by the CPU) can beat my score (see sig) by 20-30%, I'll be impressed. I somehow doubt that will happen, though... very much so.
-
I think what people are missing is that, when it comes down to it, will you give a **** which is better in a year? If you intend to hang on to whatever you buy for 2+ years, the answer is no; you'll want whatever is the most stable and efficient card that can still run games decently without frying eggs. If you intend to buy a new lappy every year or so like some people here do, the answer is yes, you want a 5870, because nobody is going to want a first-gen Fermi with all its inevitable problems and exorbitant cost when you go to resell your lappy. DX11 and all the DirectCompute/OpenCL stuff is only just making it into bleeding-edge games now; by the time it's all common and mainstream, both the 480 and the 5870 will struggle to run things.
All that aside, I think Nvidia's "huge mistake" is pricing Fermi so high. If it costs even $100 more than a 5870 (it'll be closer to $300), you are quite literally paying $20-60 for a 1 FPS increase. I doubt they'll even sell enough 480s to cover production and marketing costs, and they'll either be forced to reduce the price or do what Nvidia does best: just rebadge Fermi over and over again for the next 4 years to save money. -
Right now the 480M is around $600ish more than the same machine with a 5870, making it an even better bargain lol...
I'm still anxious/interested to see the final scores of the card in otherwise equal machines, but the pricing is just stupid even if you have money to burn...
Example of a barebones with a 480m vs a barebones with a 5870...
http://rjtech.com/shop/index.php?dispatch=products.view&product_id=29905 480m
http://rjtech.com/shop/index.php?dispatch=products.view&product_id=29836 5870m -
To get the best, you will always pay a massive premium. The TDP figures are misleading, since they are measured differently by ATI and Nvidia; ATI's figure expressed on Nvidia's rating scale would be closer to 75-80 W vs. the GTX 480M's 100 W. But cooling is another topic altogether...
-
If you're a developer, sure, go ahead and get ATI/AMD cards, as OpenCL is maturing now. You can get better performance from ATI/AMD, but you have to optimize for it yourself (for example, ATI/AMD is better at flops but slower at memory access, so make your program touch memory as little as possible - optimize your kernel around that). But note that with Nvidia you get both CUDA (more advanced capabilities than OpenCL right now) and OpenCL. -
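To make that memory-access advice concrete, here's a minimal Python sketch of the same idea - the one-element list `out` stands in for a global-memory location and a local variable stands in for a register; the access counts are illustrative, not a real GPU model:

```python
# Sketch of "touch global memory as little as possible", in plain Python.
# out[0] plays the role of a global-memory slot; acc plays the role of a register.

def dot_naive(a, b, out):
    """Writes the running sum back to 'global memory' every iteration."""
    accesses = 0
    out[0] = 0.0
    for x, y in zip(a, b):
        out[0] = out[0] + x * y   # one global read + one global write per step
        accesses += 2
    return accesses               # total 'global memory' accesses in the loop

def dot_local(a, b, out):
    """Accumulates in a local variable, writes to 'global memory' once at the end."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y              # no global traffic inside the loop
    out[0] = acc                  # a single global write
    return 1
```

On a GPU the same shape shows up in an OpenCL or CUDA kernel that keeps partial sums in private/local memory and writes the result out once - same answer, a fraction of the memory traffic.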
These are high-end cards, and max settings need to be taken into consideration. You'll be running nothing but 1080p with the 480M, and that's where the memory bandwidth advantage takes the lead. I'm much more interested in Extreme Vantage scores, for example; all we see are Performance scores. Vista is dead and Vantage is already aging - 1280x1024 is no longer enough, and that's why CPUs have too much of a bearing on scores once you get to CF 5870. If ATi is one step ahead of Nvidia (and they're not), I'm way ahead of the curve.
-
The memory bandwidth advantage of the 480M is only 20%, though, which isn't that much to speak of. Based on what I've seen of Fermi, it's not really going to increase the 480M's lead at 1080p much. In fact, in most of the benchmarks I've seen, ATI's cards actually gain on the higher-bandwidth Fermi cards as the resolution goes up, e.g. the desktop 5870 vs the 480.
-
Except the Mobility 5870 isn't a desktop 5870 - it's really a desktop 5770. ATi's Cypress architecture may handle increased resolutions relatively well, but don't forget the mobile 5870 is making do on a 128-bit bus.
The reason the 480M has only 20% more bandwidth is its significantly lower memory clock (600 MHz vs the Mobility 5870's 1000 MHz). Nvidia dropped clock speeds sharply to fit the power constraints, so this card could be an overclocker's dream. It's like a Lamborghini running on 8 cylinders... -
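The 20% figure falls out of the standard GDDR5 bandwidth formula: bus width in bytes × memory clock × 4 transfers per clock. A quick Python check, assuming a 256-bit bus at 600 MHz for the 480M and a 128-bit bus at 1000 MHz for the Mobility 5870 (the commonly quoted specs, not measured values):

```python
# Peak GDDR5 bandwidth = (bus width in bytes) * memory clock * 4 (quad-pumped).
# Bus widths and clocks below are commonly quoted specs, not measurements.

def gddr5_bandwidth_gbs(bus_bits, mem_clock_mhz):
    """Peak memory bandwidth in GB/s for GDDR5 (4 transfers per clock)."""
    return bus_bits / 8 * mem_clock_mhz * 1e6 * 4 / 1e9

gtx_480m = gddr5_bandwidth_gbs(256, 600)    # 76.8 GB/s
mob_5870 = gddr5_bandwidth_gbs(128, 1000)   # 64.0 GB/s
print(f"480M advantage: {gtx_480m / mob_5870 - 1:.0%}")  # prints "480M advantage: 20%"
```

The same formula also shows the overclocking headroom argument: push the 480M's 600 MHz memory clock toward the desktop GTX 480's level and its bandwidth lead grows well past 20%.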
-
from notebookcheck
-
As for the memory clocks, you might be able to get a solid boost, but I've read that Nvidia has had difficulty with GDDR5 - consider that their highest-end card, the desktop GTX 480, still only runs its memory at 924 MHz, while AMD has been getting speeds of 1200 MHz.
As such, if we take overclocking into account as well, it's possible that the advantage might rise to as much as 50%. However, once again I'm doubtful as to how much of a performance gain you will see without a shader overclock, and I'm not sure how much you'll be able to overclock the 480M given thermal limitations.
Besides that, there are already benchmarks suggesting the 480M doesn't have much of a lead over the 5870 at high settings and resolutions. I'm holding out until I see the two cards tested in the exact same setup, though. -
yet another GTX480M performance thread
Discussion in 'Gaming (Software and Graphics Cards)' started by Marin85, Jun 12, 2010.