I don't quite get it:
How do they make 128-bit with 3GB?
Each VRAM chip is 32-bit
32bit x 6 = 192bit
256MB x 6 = 1.5GB
512MB x 6 = 3GB
32bit x 4 = 128bit
256MB x 4 = 1GB
512MB x 4 = 2GB
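A quick sketch of that chip math, assuming each GDDR5 chip contributes a 32-bit slice of the bus and either 256MB or 512MB of capacity:
Code:
# Same arithmetic as above: bus width and capacity both scale with the chip count.
def memory_config(num_chips, mb_per_chip):
    bus_width_bits = 32 * num_chips               # each chip assumed 32 bits wide
    capacity_gb = num_chips * mb_per_chip / 1024
    return bus_width_bits, capacity_gb

for chips in (4, 6):
    for size_mb in (256, 512):
        bus, cap = memory_config(chips, size_mb)
        print(f"{chips} chips x {size_mb}MB -> {bus}-bit, {cap:.1f}GB")
# 4 chips -> 128-bit with 1.0GB or 2.0GB; 6 chips -> 192-bit with 1.5GB or 3.0GB.
# So with uniform 32-bit chips, a 3GB card implies a 192-bit bus, not 128-bit.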
-
-
I did the math wrong.
This is interesting and stupid at the same time, because the 660M might crush the 670M. If true it confirms my initial statement, which was that Nvidia shouldn't have rebranded the 570M in the first place. -
Well, the 570M (and by extension the 670M) is very underclocked by default, so the 660M should have little problem getting near or even above stock 570M performance.
-
So Eurocom will have a 1.5GB 192-bit 660M, and Asus will offer two versions: a 3GB 192-bit 660M and a 2GB 128-bit 660M.
The 670M config costs $260 more than the 660M. If a 192-bit 660M beats the 670M, then shouldn't it cost more than the 670M?
Bandwidths:
GTX 675M/GTX 580M 256bit GDDR5: 96GB/s
GTX 660M 192bit GDDR5: 96GB/s
GTX 670M/570M 192bit GDDR5: 72GB/s
GTX 660M 128bit GDDR5: 64GB/s
GTX 560M 192bit GDDR5: 60GB/s
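Those bandwidth figures all follow from bus width times effective memory data rate. A rough sketch below; the per-pin data rates are my own guesses back-calculated from the numbers above, not confirmed specs:
Code:
# bandwidth (GB/s) = effective data rate (Gbps per pin) * bus width (bits) / 8
configs = [
    ("GTX 675M/580M",    256, 3.0),
    ("GTX 660M 192-bit", 192, 4.0),
    ("GTX 670M/570M",    192, 3.0),
    ("GTX 660M 128-bit", 128, 4.0),
    ("GTX 560M",         192, 2.5),
]
for name, bus_bits, gbps_per_pin in configs:
    print(f"{name}: {gbps_per_pin * bus_bits / 8:.0f} GB/s")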
Stupid question: Is it Nvidia who decides what memory configurations the GPUs should have, or is it up to OEMs? I see EVGA, MSI, ZOTAC etc. rebuild desktop GPUs like the 7970/680 and add more RAM if it's low, and stuff like that. Can OEMs do the same with notebook GPUs? -
To the best of my knowledge, with notebooks, the GPU makers give OEMs hardware configurations from which to choose, and then they place orders.
Once in hand, they can adjust the clocks to fit their machine's thermal envelopes.
p.s. - if Eurocom has a 192-bit 660M, that's only because Clevo is supplying them to all of its resellers. That company always wants to pretend like they're on some exclusive stuff, but they never have anything Clevo doesn't hand out to everyone at the same time. -
The 660M won't beat the 670M. From a company point of view, it is impossible to charge less for a better product; you won't profit, simple as that. (This is IF the 660M is cheaper than the 670M.)
-
dunno, just sounds more reasonable that way -
-
How is a 660M going to beat a 670M? It has 384 Kepler cores, which are about 1/3 as powerful as the Fermi ones, of which the 670M has 336. The 660M's higher clock makes up for most of that gap, but it won't be enough to catch the 670M.
-
-
Meaker@Sager Company Representative
GTX470M > GTX480M -
-
AlwaysSearching Notebook Evangelist
nVidia still shows specs as 128-bit. -
TheBluePill Notebook Nobel Laureate
If the die is the same size on all of the chips, the cost to produce them would be the same. There are X number of dies on an X-sized wafer. Now, the higher-end parts can suffer from yield issues because the odds of a defect go up with the number of transistors...
So in a nutshell, the cost isn't necessarily in adding or subtracting cores; it costs the same to produce a, say, 200x200mm die regardless of the core count. The difference is in how many of the produced parts are not defective or meet spec.
But.. it makes me wonder how nVidia and ATI produce. -
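A toy model of that per-die cost argument; the wafer cost, defect density and die areas below are made-up numbers just to illustrate the point, using the simple Poisson yield approximation:
Code:
import math

WAFER_COST = 5000.0                      # assumed cost per 300mm wafer, USD
WAFER_AREA = math.pi * (300 / 2) ** 2    # mm^2, ignoring edge loss
DEFECT_DENSITY = 0.002                   # assumed defects per mm^2

def cost_per_good_die(die_area_mm2):
    dies_per_wafer = WAFER_AREA // die_area_mm2
    yield_fraction = math.exp(-DEFECT_DENSITY * die_area_mm2)   # Poisson yield model
    return WAFER_COST / (dies_per_wafer * yield_fraction)

print(cost_per_good_die(120))   # smaller die: more dies per wafer, higher yield
print(cost_per_good_die(300))   # bigger die: fewer dies, lower yield, pricier per good die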
Meaker@Sager Company Representative
Die harvesting is all about the yield bell curve and extending your parameters to get as many chips into products as possible.
Though I would like to point out the 470M and 480M used different cores. -
//Abandon all hope, ye who enter beyond this point
Disclaimer: These are the ramblings of a bored young man. Proceed with caution. I do not know if it works like this. I like to play with math and numbers. Think of this post as something the guy from the movie "A Beautiful Mind" would do, except my mind isn't beautiful. Take it with a truckload of salt.
EDIT: I also think this comparison is the wrong way to do it, but I had already written so many lines that I just went ahead and posted it anyway. Based on this, it looks like the 192-bit 660M still cannot surpass or match the 570M. I would also like to know what exactly makes this wrong, since I don't get it.
GT 640M = GT 555M in performance
GT 640M:
384 CUDA cores @ 625MHz
384 x 625 MHz = 240 000
GT 555M:
144 CUDA cores @ 709 MHz
144 x 709 MHz = 102 096
102096/240000 = 0.425
The Fermi part scores 42.5% of the Kepler part but performs the same.
GT 650M = GTX 560M in performance
GT 650M:
384 CUDA cores @ 850MHz
384 x 850 = 326400
GTX 560M
192 CUDA cores @ 775MHz
192 x 775MHz = 148 800
148800/326400 = 0.455
The Fermi part scores 45.5% of the Kepler part but performs the same.
Average: (45.5 + 42.5)/2= 44%
Performance of a 128bit GTX 660M?
GTX 660M:
384 CUDA cores @ 835MHz
384 x 835MHz = 320 640
GTX 570M
336 CUDA cores @ 575MHz
336 x 575MHz = 193 200
GTX 670M
336 CUDA cores @ 598MHz
336 x 598MHz = 200 928
Let's make sure these calculations reflect reality to at least some degree. Difference between the 570M and 670M:
193200/200928 = 0.96 = 96%
The 570M performs at 96% of the 670M; in other words, the 670M performs about 4% better than the 570M.
Sounds plausible
Difference between 570M and 660M:
193200/320640 = 0.60 = 60%
Difference between 670M and 660M:
200928/320640 = 0.626 = 63%
Now, what we learned from the 640M/555M and 650M/560M comparisons is that for two such GPUs to match in performance, the Fermi score needs to be about 44% of the Kepler score. Right now we are above that, which means the 660M won't cut it in raw performance looking at cores alone.
The GTX 560M saw 15% better gaming performance when going from 128-bit to 192-bit. The memory bandwidth increased by 50%.
As mentioned previously, a 128-bit 660M has a bandwidth of 64GB/s while a 192-bit 660M would have 96GB/s. That is exactly a 50% increase, just like the 560M saw.
So let's adjust the "performance" of the 660M:
We previously saw 320640; going from 128-bit to 192-bit will give us 15% better performance:
320640 x 1.15 = 368736
Comparison between 570M and 192bit 660M:
193200/368736 = 0.524 = 52%
Comparison between 670M and 192bit 660M:
200928/368736 = 0.545 = 55%
So in conclusion: a 192-bit 660M still can't match a 570M, since the ratio would still need to drop to 44% for parity, another 8 points down. -
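For anyone who wants to poke at the numbers, here is the whole back-of-envelope comparison above as a short script. It only encodes the assumptions from this post (performance ~ cores x clock, plus a flat 15% for the 192-bit bandwidth bump), so treat it as a check of the arithmetic, not a real performance model:
Code:
# "Score" is just CUDA cores x core clock (MHz), per the comparison above.
def score(cores, clock_mhz):
    return cores * clock_mhz

gtx_660m_128 = score(384, 835)        # 320640
gtx_660m_192 = gtx_660m_128 * 1.15    # 368736, assuming +15% from the wider bus
gtx_570m = score(336, 575)            # 193200
gtx_670m = score(336, 598)            # 200928

# Ratio at which Fermi matched Kepler in the 640M/555M and 650M/560M pairs.
parity = (score(144, 709) / score(384, 625) + score(192, 775) / score(384, 850)) / 2

print(round(parity, 3))                    # ~0.44
print(round(gtx_570m / gtx_660m_192, 3))   # ~0.52 -> still above the parity ratio
print(round(gtx_670m / gtx_660m_192, 3))   # ~0.55 -> so by this logic the 660M falls short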
-
There were so many numbers that I got lost in my own madness. Thanks for pointing it out.
Anandtech also commented that the 650M on paper should surpass the 660M, but there had to be some other factors involved.
+rep btw -
-
Right, you're right. You saw it on the Nvidia 660M product page, maybe? I only compared cores and GPU clock above, not memory, which makes it a little more valid. Or maybe that's what makes it wrong, lol. But I can't point out exactly what and why.
-
I think the amount of GPU Boost applied should be taken into consideration, up to the specified TDP or the thermal limit.
-
-
From the Nvidia site for 650M:
GPU Engine Specs:
CUDA Cores: 384
Graphics Clock: 850 MHz with DDR3 / 735 MHz with GDDR5
Texture Fill Rate: up to 27.2 billion/sec
So the GDDR5 version is being limited to a 735 MHz core clock to make sure it's slower than the 660M. Looks like 650M owners might be able to OC up to 660M speeds without too much trouble (assuming the cooling can handle it). -
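Side note: the "up to 27.2 billion/sec" fill rate lines up with the 850MHz DDR3 clock if GK107 has 32 texture units (my assumption, not stated in the spec snippet above):
Code:
tmus = 32              # assumed TMU count for GK107
print(tmus * 0.850)    # 27.2 GT/s at the 850MHz (DDR3) clock
print(tmus * 0.735)    # ~23.5 GT/s at the 735MHz (GDDR5) clock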
GeForce GT 650M - GeForce
This already shows that the 660m has the extra bit of power. -
-
No 192bit for Asus after all.
-
Does anybody know where the first place to find information on the TDPs will be?
In particular, I'm looking at the TDP for the GK107 (GTX 660M).
I read through the Kepler whitepaper, but it only gives TDPs for the desktop chips. Nothing yet about the mobile chips. -
Meaker@Sager Company Representative
Why? The TDP does not mean real power consumption.
-
But it's nice to know, Meaker. How does one calculate/find out the TDP?
There is an early leak from SemiAccurate that suggested 40-45W for the GTX 660M. God knows if that is accurate.
-
TDP is important for GPU Boost. If the 640M has 25W TDP, 650M has 35W TDP, for example, then 650M will boost for longer and at higher clock speeds.
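A toy illustration of that point: a boost scheme keeps raising the clock while the (estimated) board power stays under the TDP cap, so a higher cap means more boost headroom. The watt figures below are made up, and this is not Nvidia's actual algorithm:
Code:
def max_boost_clock(base_mhz, tdp_w, base_power_w=20.0, watts_per_100mhz=5.0):
    clock, power = base_mhz, base_power_w
    while power + watts_per_100mhz <= tdp_w:   # stop before exceeding the TDP cap
        clock += 100
        power += watts_per_100mhz
    return clock

print(max_boost_clock(625, 25))   # lower cap -> lower sustained boost clock
print(max_boost_clock(735, 35))   # higher cap -> more boost headroom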
-
Meaker@Sager Company Representative
Desktop cards let you adjust that.....
-
Yes, that's the reason the 650M has a higher TDP: it reaches higher clocks than the 640M. I am not sure if Nvidia has built a turbo design that throttles down after a while because of heat, though. I hope not.
With desktop cards you can pretty much do whatever you want as long as you have proper cooling. Notebooks have a very small overclocking limit due to the small and compact design. Oh god how I wish this could come soon -
TheBluePill Notebook Nobel Laureate
-
Question is: Will the TDP be configurable? Arrandale was fun...
-
The reason I'm asking is that for the laptop I'm waiting to be refreshed with Kepler cores, TDP is the limiting factor. So if I know the TDP of the 660M, then I'll know whether there's a good chance it will be in the refresh.
So again, any information on where the first place will be to have that information? -
-
m14x r1, being upgraded to r2. But don't pay attention to that. I just want to know where the latest info on TDPs will be posted
-
There is no definite TDP, but there are some estimates:
Next-generation high-end graphics card full specs and performance leaked (AMD Radeon HD 7970M vs NVIDIA GeForce GTX 680M) - Dell Alienware forum -
Aren't you forgetting something? The new Kepler GPUs all have improved AA features that will maintain AA quality with lower performance loss, called FXAA and TXAA. They can also work together with traditional AA, making games look even better. FXAA is available regardless of the GPU brand, but TXAA is restricted to nVIDIA GPUs as of yet and needs to be implemented in the games. The improved features of the 600 series are hardware related.
"For example if a game is performing poorly because 2X, or 4X, or 8X AA is reducing performance by a great deal, just turn on FXAA instead and get 4X AA image quality at no performance hit".
"The first slide above shows you how NVIDIA has positioned the performance and image quality of TXAA. With TXAA 1 enabled NVIDIA claims you'll get better the quality of 8X MSAA (or slightly better) but at the same performance level of 2X MSAA".
This is shown to actually be true or at least within the performance promised by nVIDIA.
Source
AA vs. FXAA in The Elder Scrolls V: Skyrim -
TheBluePill Notebook Nobel Laureate
-
FXAA can be set to either ON or OFF in the nVIDIA Control Panel.
TXAA is based on temporal super-sampling, but without the requirements of classic super-sampling. Here's a tech demo, although you should take it with a grain of salt: NVIDIA TXAA Techdemo - YouTube
As far as I know, there are no TXAA options in the nVIDIA Control Panel, so since the game must support it for it to work I assume you'll have to enable it through the game's graphics settings. -
Correct. Only FXAA is something you can enable through the Control Panel. TXAA is brand new and has not been incorporated into games yet, which means we must wait for updates to the games already out and for developers to incorporate it into upcoming games.
TXAA is just software-based btw, which means that in theory Fermi should be able to use it too. But who knows if Nvidia will allow it for Fermi. It will probably be something exclusive to Kepler to get more sales -
Yes, the technology is software related, but the hardware in the Kepler series is optimized for it. Maybe it would be more correct to say it's optimized software... I don't know
-
Yeah, I`m not doubting that Kepler is optimized for that at all.
Pretty awesome technology btw -
Yes, I also think it would be possible to use it with the Fermi GPUs, but since they are not optimized for it I don't know if the performance would be the same. But... The most reasonable guess would be that nVIDIA tries to convince buyers that they need to get the new cards to get the new tech, like you said
SHOW ME THE MONEY!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-
Any information yet on the thermals for the GTX 660M (not just Kepler generally)?
-
-
What was the supposed release date of that 660M card again, and aside from the slightly higher clock, why go for it rather than a 550M?
The issue: I was looking into a new 9130 but was a little disappointed to see you couldn't step the card down to a 650M, which I prefer to the 670M (rebadged 580, I believe). My question about the 650M refers to the new 6165 notebook. It's almost a $500 difference, lighter, still sports a 1080p matte screen, is upgradable to the 3rd-gen processor... blah blah, but:
1) Can the 550M GDDR5 handle 1080p gaming? And,
2) Although no photos are available yet, considering the old 5165 model pictures, is overheating going to be an issue with this laptop (OCing taken into consideration)?
Thanks -
Kingpinzero ROUND ONE,FIGHT! You Win!
The 670M is a rebadged 570M; the 675M is a rebadged 580M.
As to your question:
1) Nope. Those are mid-range cards that perform quite well at HD resolutions such as 1366x768 or 720p. Gaming at full HD will require setting games to their lowest settings, and even that doesn't mean they will run at playable speeds.
The 550M and 555M are ideal for 768p/720p panels, as that's the sweet spot for their segment.
2) Heat is always an issue, but in these notebooks it's not a big problem. The laptop gets warm but not incandescent; it's tolerable.
My personal advice is to forget about the 550M, since it's old/slow tech, and go for the 650M, especially if it's GDDR5. Those new entry-level Kepler cards clearly have a performance boost compared to the Fermi 550M/555M, and based on what you want it to do (gaming at 1080p, I suppose) it's well worth the investment.
But be sure not to get the DDR3 version regardless of your choice of GPU.