Most of the evidence I've seen suggests the MR 5870 isn't bottlenecked by memory bandwidth, so I don't see the basis for this assumption.
You can't simply compare different architectures based solely on the number of shaders. If that were the case, the GTX 480 would be twice as fast as the GTX 285, and it simply isn't - it's more like 50% faster.
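A quick sanity check of that, using the published shader counts (240 on the GTX 285, 480 on the GTX 480) and the rough real-world gap; a minimal sketch, nothing more:

```python
# Naive shader-count scaling vs. the observed gap cited above.
gtx285_shaders = 240
gtx480_shaders = 480

naive = gtx480_shaders / gtx285_shaders  # 2.0x, if shader count alone decided it
observed = 1.5                           # roughly what reviews show
print(f"naive scaling: {naive:.1f}x, observed: ~{observed:.1f}x")
```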
-
The low-end Fermi chips are... well, they are trimmed-down versions of the bigger chip. It's the same architecture, but they make the chip smaller so that they can get higher yields and thus make them cheaper. The same applies to the mobile versions, the difference being that a mobile version is a specially trimmed-down chip.
But overall you are right; it looks like the 480M is probably going to have the same core as the GTS 430.
EDIT:
GeForce GTS 430
192 shader cores, 48 TMUs, 24 ROPs, and a 192-bit memory bus
675 MHz core / 1350 MHz shader clock
GDDR5 @ 1440 MHz.
Performance ~HD 5770, which is about 40% of GTX 480 performance. This backs up my initial calculations.
-
Well, if the 2GB GDDR5 claim holds true, then the chip would have to have a 256-bit or 128-bit memory interface. But if the GTX 480M were that, then it should perform similarly to a desktop GTX 260 (192-core version) in DX9 and DX10 games.
That said, it's an 8500 stock Vantage GPU score or bust for this card. I know gaming performance is more important, but this GPU needs the raw power to answer for the one-year hiatus (really three years if you count the rebadging) in the high-end notebook market.
-
192 cores is a letdown. My dual 8800s have 192 cores.
-
I guess that makes my girlfriend a letdown too; she only has two frontal playthings when I prefer three (just like in the movie Total Recall).
-
Why would it have to be 256-bit or 128-bit?
-
I wonder what the overclockability will be compared to the 5870, which does get a decent 10-20% OC.
-
If the 480M isn't at least close to 50% faster than the MR 5870, then I think the 480M will be a big failure. The higher power consumption and cost have to translate into drastically better performance than the Mobility 5870.
-
There's no way it's going to be 50% faster... lol.
-
Well, how much faster is MR 5870 CF vs. one 5870?
-
I haven't seen anything that says this 480 is dual-core or SLI. Did I miss something?
-
Memory chips are only made in certain sizes and bit widths. To get 2GB, the bus would basically have to be a multiple of 128 bits, since you are not going to have chips big enough for anything narrower. This means 128 or 256 realistically, though technically it could be larger.
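A minimal sketch of that constraint, assuming standard 32-bit-wide GDDR5 chips and the common per-chip densities of the era (128 MB and 256 MB; treat both figures as assumptions):

```python
# Bus width = 32 bits per chip x number of chips; capacity = chips x density.
chip_width_bits = 32
densities_mb = (128, 256)  # common GDDR5 chip sizes circa 2010 (assumption)

for n_chips in (4, 8, 16):
    bus_bits = chip_width_bits * n_chips
    totals = [n_chips * d for d in densities_mb]
    print(f"{n_chips} chips -> {bus_bits}-bit bus, possible totals: {totals} MB")
# 2048 MB needs 8x256MB or 16x128MB chips; clamshell mode can halve the
# effective width per chip, which is how 2GB lands on a 256-bit or 128-bit bus.
```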
-
I'll be getting the W870CU very soon, and it will have the 5870M. I can't justify waiting for the 480M; even if it is 10-15% faster, it just isn't worth the extra money, heat, or energy consumption. Nvidia is definitely going in the wrong direction.
-
Well, you never know; Nvidia might just surprise us with a 50% performance boost or something.
-
It won't be a 50% boost.
-
I agree, I don't see that happening; there would have been a lot more leaked info if that were the case. The lack of info leads me to believe they are not gonna be that much better.
-
If it's a 50% boost, the W870CU will have a big advantage over the G73. I think even a 30% boost would be good.
-
I hope it is, just to light a fire under ATI's feet and maybe get the 59xx series out for the Capella platform. It would be a nice upgrade path...
-
Also, the 2GB frame buffer is a complete waste - just a marketing gimmick to get people to spend more money while consuming even more power. Power they could probably put to better use upping the core speed instead, which would actually increase frame rates.
-
Why did someone say this would perform as well as two 5870s? Is there any evidence to prove this, or is this a dual-core GPU (it's not)? Or is that just VERY wishful thinking...?
-
Whoever said it was very wishful indeed.
-
It's just speculation, but given the 2GB GDDR5, the huge power draw, and the dual DVI ports on the new notebook with the 480M, I won't be surprised if the 480M turns out to be a dual-core GPU.
-
As high as 100W is, it's still much too low for the TDP of a dual-GPU Fermi. Perhaps it could be right if each of the two cores is very weak, but then why would Nvidia make it two cores instead of just one with twice the shaders?
I'd say the 2GB GDDR5 is either a mistake or a gimmick, and the dual DVI port is a feature to make this card a little more unique.
-
I wonder if it will even fit into the D900F without a new heatsink...
-
A dual-core mobile GPU would be ridiculous. Also, 100W would be way too low; it would have to be something like 150W to compete with SLI/CrossFire solutions.
The 2GB GDDR5 really makes no sense to me, considering the higher price of GDDR5 memory and especially of the bigger modules. What I mean is that they have two options here: putting eight 256MB modules on top of the card, which would be very expensive, or putting eight 128MB modules on top and eight more under the card. However, in the latter configuration they would need a second copper heatsink under the card, which is probably the reason why the W881 is thicker. But it does not seem logical to make such big adjustments just for one single card. And without a heatsink - if the memory modules under the card are just protected by thermal pads and not connected to any copper - we will see them failing a lot.
-
If my previous speculation is correct, and the 480M is just 10-15% faster than the 5870, then it makes sense for Nvidia to add the 2GB GDDR5 ... they need to somehow make this card sell even though it's not really competitive.
-
I just got some free time, so I've looked again over previous generations of cards and tried to see how they were scaled down for the notebook market.
I am pretty sure now that the notebook version of Fermi will give about 45% of the GTX 480's performance, i.e. about 9000 points in Vantage at stock settings. Given that ATI cards usually score higher in Vantage than Nvidia cards of the same real-world performance, I think this will make the GTX 480M about 20% faster than the ATI Mobility 5870.
In terms of shaders, I still think the core will have around 192-224 shaders, but now I am more inclined towards a 224-shader core.
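For what it's worth, the baselines implied by those numbers, derived purely from the percentages in this post rather than from any benchmark database:

```python
# Implied Vantage GPU scores, working backwards from the estimates above.
gtx480m_score = 9000                    # estimated 480M score
desktop_gtx480 = gtx480m_score / 0.45   # ~20000, implied by the "45%" figure
mr5870 = gtx480m_score / 1.20           # ~7500, implied by the "20% faster" figure
print(f"implied desktop GTX 480: {desktop_gtx480:.0f}")
print(f"implied Mobility 5870:   {mr5870:.0f}")
```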
EDIT:
I've tried to do some estimations based on the current 3x0M series to see what would happen if Nvidia came out with a 380M. I am pretty sure that this time my estimations are very accurate. A 380M would have 192 shaders, consume 65-70W, and give an estimated Vantage P score of around 11500 (w/o PhysX). It would definitely kill the ATI 5870 but would only have DirectX 10.1. This actually points to the fact that the Fermi architecture gives less performance per watt than the old G92b tech, which is a well-known fact.
I think it's time for me to stop these pointless calculations.
-
I guess Fermi really is a big disappointment in terms of performance per shader after all... As to the GTX 380M, that's what I meant: imagine a GTX 280M with 50% more shaders; that thing would rock, but of course it would also lack DX11. That the GTX 480M is at best only about 20% faster is not that much of a problem; more performance is more performance. The problem is that the price/performance ratio is just so bad - the higher price is not justified by so little improvement over the MR 5870.
-
The GTX 380M, if it exists, would I think be a die-shrunk GTX 260.
-
The Go 6800, the 7800/7950 GTX, and the 8800 GTX were all about as fast as their desktop counterparts. There has not been a generational leap in GPU power for the laptop world since going from the 7900 to the 8800 series. The G92-based 8800 cards are still the fastest we have, and they compete with the MR 5870. I am expecting the 480 to be twice as fast as G92 SLI, or it will be another disappointment.
-
Prepare to be disappointed.
-
Well, a chip faster than 8800M SLI is possible. A chip faster than GTX 280M SLI is probably not.
On the desktop side, I know that 9800 GTX cards in SLI are faster than a GTX 260, so expecting something faster than that from a roughly GTX 260-class mobile chip is not a feasible wish.
-
Nvidia has kept G92 around for quite a long time, but remember that the G80 was essentially revolutionary; it's no wonder that Nvidia is still using what is essentially the same chip, even in their desktop GTS 250. Also, keep in mind that the fastest laptop G92, the GTX 285M, has twice the shaders (128 vs 64) and around 20% faster clock speeds than the slowest one, the 8800M GTS, so there's quite a lot of variance among the G92 chips.
I would guess that you're looking for performance much better than your 8800M GTX SLI config - it should beat your SLI by something like 50-100%, depending on how well the SLI configuration scales in specific games.
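To put a number on that variance, using only the shader counts and the ~20% clock figure from above (theoretical throughput, not measured performance):

```python
# Shader throughput scales roughly with shader_count * shader_clock.
slowest = 64 * 1.0   # 8800M GTS: 64 shaders, normalized clock
fastest = 128 * 1.2  # GTX 285M: 128 shaders, ~20% higher clocks
print(f"theoretical spread within mobile G92: {fastest / slowest:.1f}x")  # ~2.4x
```
-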
You may disagree with me, but if you look at performance per watt and die size, the G92b architecture on 40nm would be superior to the current Fermi architecture as well as to that of the 5000 series. People keep complaining about Nvidia recycling old tech, but as long as it delivers performance, why complain?
-
The GTS 250 1GB is indeed slightly better than the HD 4850 in performance per watt, with both being 55nm at pretty much the same die size (see here (die size) and here/here (power consumption)), so perhaps you're right that a G92 could be more efficient at 40nm than AMD's 5000 series. Even were that the case, though, it wouldn't be by much. Mind you, the HD 5000 is somewhat disadvantaged in this regard by the addition of new features, particularly DirectX 11.
One thing to take away from those figures is that the GT200 is actually a reasonably efficient GPU for its fab process, despite what one might think. In particular, the GTX 260 Core 216 is probably the single most efficient (in performance per watt) GPU ever made on a fab process of 65nm or larger. However, the fact that the gains going from 65nm to 55nm for GT200 were minimal is proof enough that die shrinks aren't quite a magic bullet, and enough for me to say that overall the HD 5000 series still holds the efficiency crown.
Nonetheless, considering the enormous die size of GT200, it's amazing that its power consumption is actually relatively low. Still, the die size came with an even bigger disadvantage: cost.
-
Larry@LPC-Digital Company Representative
I think they passed over the 380M for the 480M.
-
I would agree with this, but only if we're talking about DX10 gaming. In DX11, the 5k series would be much more efficient due to the built-in tessellators carried over from the 4k series - part of the reason why ATI was able to come out with the 5k series long before Nvidia could respond.
-
Sure, you will need a new VGA heatsink and maybe even a new AC adapter to support the TDP required by both the VGA (100W) and the CPU, e.g. a 130W i7-980X. The D900F is designed to support up to 300W of total TDP.
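As a quick check of that power budget, using only the TDP figures quoted above (the leftover has to cover the rest of the system):

```python
# Rough D900F power budget from the quoted TDPs.
gpu_tdp = 100         # claimed GTX 480M TDP
cpu_tdp = 130         # i7-980X TDP
chassis_budget = 300  # total TDP the D900F is said to support
headroom = chassis_budget - gpu_tdp - cpu_tdp
print(f"left for board, RAM, drives, screen: {headroom} W")  # 70 W
```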
-
Taking SLI scaling and support hassles out of multi-GPU setups is welcome, but if the GTX 480M only has 192 cores, the same as 8800M GTX SLI, there may not be a performance increase that warrants upgrading the laptop for G92 multi-GPU owners like myself (D900C, M17x, M1730, etc.).
The 6800, 7800, and 8800 were all revolutionary leaps because they rivaled their desktop counterparts. We need to see this again with the 480 Fermi.
-
Mobile GPUs have never been and will never be "revolutionary", because they always trickle down from true revolutions in the desktop parts.
Nvidia's G80 was a true revolution, and when that performance translated into G92 and then came to laptops, it was huge. The advent of RV770 from AMD was quite a big advancement as well (though mostly in terms of cost), and again that trickled down into laptops.
Fermi just isn't as big a step forward as G80, G92, or RV770 was. At the moment, in desktop GPUs it's only 10-15% faster than the HD 5870, and the power consumption makes it harder still for this to translate into mobile parts.
-
Yes, I was talking about DX10. Hopefully Nvidia can pull themselves together and come up with something good; otherwise their future is not very bright.
-
The D900F will be seeing this, and with a cooling modification, last I heard... a rather drastic one.
-
A drastic one? Like water or liquid nitrogen cooling with a built-in mini-freezer?
-
A slide-out drink holder too!
-
The D900F is already offered with a 100W GPU - the FX 3800M - so I don't see why it would need a new adapter.
I also don't see why you'd need a drastic cooling modification, since it's doing a great job there - I'm idling at 42°C and getting nowhere over 65°C at load with the 280M. But the heatsink will probably have to be modified to accommodate it (like for the 5870 I'm trying now).
-
I'm still wondering how the 2048MB of GDDR5 memory is divided. I mean, 8x256MB GDDR5 on the top side of the card would be pretty pricey; GDDR5 is expensive anyway, and 256MB modules will be considerably more expensive than 128MB ones. But there are only two other options. One is 8x128MB on top and 8x128MB underneath, which would be risky, since they will most likely not put a second copper heatsink under the GPU and the underside modules would then run very hot. The other is 16x128MB on top, but where would the space for that come from? The card would look quite different from other GPUs and would need a completely new heatsink.
But if the GPU really is that much more expensive, then they probably went for 8x256MB GDDR5 modules, and that's one reason for the high price. It would be cool if they made a 1GB GDDR5 version for less money...
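All three layouts land on the same 2048 MB; a tiny sketch just to make the trade-offs explicit (the top/bottom module counts are my reading of the post, not from any teardown):

```python
# (modules on top, module size in MB, modules underneath)
layouts = {
    "8 x 256MB, all on top":            (8, 256, 0),
    "8 x 128MB top + 8 x 128MB bottom": (8, 128, 8),
    "16 x 128MB, all on top":           (16, 128, 0),
}
for name, (top, size_mb, bottom) in layouts.items():
    total = (top + bottom) * size_mb
    print(f"{name}: {total} MB, {bottom} modules to cool on the underside")
```
-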
Most laptops already have an optical drive.
-
Is it just me, or did Clevo take down the W881CU page?
-
They took it down two days ago.
-
So is this still vaporware, or does EVERYONE here still think it's going to be "50-100% faster than the 5870"?