The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    NVIDIA Geforce GTX 660M Release Information + CUDA Core Count

    Discussion in 'Gaming (Software and Graphics Cards)' started by yknyong1, Mar 22, 2012.

  1. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    I don't quite get it:

    How do they make 128-bit with 3GB?
    Each VRAM chip is 32-bit.

    32bit x 6 = 192bit
    256MB x 6 = 1.5GB
    512MB x 6 = 3GB

    32bit x 4 = 128bit
    256MB x 4 = 1GB
    512MB x 4 = 2GB
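
    A minimal sketch of that arithmetic, for what it's worth (assuming each GDDR5 chip sits on its own 32-bit channel and comes in the 256MB/512MB capacities listed above):

    Code:
# Rough sketch: bus width and capacity from a count of 32-bit GDDR5 chips.
# Assumes one chip per 32-bit channel (no clamshell or mixed configurations).

def memory_config(num_chips, chip_capacity_mb):
    bus_width_bits = num_chips * 32              # each chip adds a 32-bit channel
    capacity_gb = num_chips * chip_capacity_mb / 1024
    return bus_width_bits, capacity_gb

for chips, cap in [(6, 256), (6, 512), (4, 256), (4, 512)]:
    bus, total = memory_config(chips, cap)
    print(f"{chips} x {cap}MB -> {bus}-bit, {total:g}GB")
# 6 x 256MB -> 192-bit, 1.5GB
# 6 x 512MB -> 192-bit, 3GB
# 4 x 256MB -> 128-bit, 1GB
# 4 x 512MB -> 128-bit, 2GB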
     
  2. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    I did the math wrong.

    This is interesting and stupid at the same time, because the 660M might crush the 670M. If true, it confirms my initial statement, which was that Nvidia shouldn't have rebranded the 570M in the first place.
     
  3. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,087
    Trophy Points:
    431
    Well, the 570M (and by extension the 670M) is quite underclocked, so the 660M should have little problem getting near, or even above, stock 570M performance.
     
  4. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    So Eurocom will have a 1.5GB 192-bit 660M, and Asus will offer two versions: a 3GB 192-bit 660M and a 2GB 128-bit 660M.
    The 670M config costs $260 more than the 660M. If a 192-bit 660M beats the 670M, shouldn't it cost more than the 670M?

    Bandwidths:
    GTX 675M/GTX 580M 256bit GDDR5: 96GB/s
    GTX 660M 192bit GDDR5: 96GB/s
    GTX 670M/570M 192bit GDDR5: 72GB/s
    GTX 660M 128bit GDDR5: 64GB/s
    GTX 560M 192bit GDDR5: 60GB/s

    Stupid question: Is it Nvidia who decides what memory configurations the GPUs should have, or is it up to OEMs? I see EVGA, MSI, ZOTAC etc. rebuild desktop GPUs like the 7970/680 and add more RAM if it's low, and stuff like that. Can OEMs do the same with notebook GPUs?
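
    Those bandwidth figures fall straight out of bus width times effective data rate. A small sketch of that relation (the per-card data rates below are back-calculated from the list above, not taken from official specs):

    Code:
# Bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s).
# Data rates here are back-calculated from the figures above, not from specs.

def bandwidth_gbs(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

cards = [
    ("GTX 675M/580M, 256-bit", 256, 3.0),   # -> 96 GB/s
    ("GTX 660M, 192-bit",      192, 4.0),   # -> 96 GB/s
    ("GTX 670M/570M, 192-bit", 192, 3.0),   # -> 72 GB/s
    ("GTX 660M, 128-bit",      128, 4.0),   # -> 64 GB/s
    ("GTX 560M, 192-bit",      192, 2.5),   # -> 60 GB/s
]
for name, bus, rate in cards:
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")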
     
  5. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    To the best of my knowledge, with notebooks, the GPU makers give OEMs hardware configurations from which to choose, and then they place orders.

    Once in hand, they can adjust the clocks to fit their machine's thermal envelopes.

    p.s. - if Eurocom has a 192-bit 660M, that's only because Clevo is supplying them to all of its resellers. That company always wants to pretend like they're on some exclusive stuff, but they never have anything Clevo doesn't hand out to everyone at the same time.
     
  6. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    The 660M won't beat the 670M. From a company point of view, it is impossible to charge less for a better product; you won't profit, simple. (This is IF the 660M is cheaper than the 670M.)
     
  7. awakeN

    awakeN Notebook Deity

    Reputations:
    616
    Messages:
    1,067
    Likes Received:
    4
    Trophy Points:
    56
    Shouldn't pricing really be about the cost of manufacturing as opposed to the performance of the card? The 28nm tech should be cheaper to manufacture than the old 40nm.

    Dunno, just sounds more reasonable that way.
     
  8. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Why? Developing new tech (in our case 28nm) costs something (a LOT more than the mass-production stage), and they MUST break even on that cost (microeconomics) before they reach the profit region (or pass a feasibility analysis)... These companies do NOT produce tech for the sake of the population; they do it for profit...
     
  9. GTRagnarok

    GTRagnarok Notebook Evangelist

    Reputations:
    556
    Messages:
    542
    Likes Received:
    45
    Trophy Points:
    41
    How is a 660M going to beat a 670M? It has 384 Kepler cores, which are each about 1/3 as powerful as the Fermi cores, of which the 670M has 336. The 660M's higher clock makes up for most of that gap, but it won't be enough to catch the 670M.
     
  10. pterodactilo

    pterodactilo Notebook Consultant

    Reputations:
    11
    Messages:
    225
    Likes Received:
    7
    Trophy Points:
    31
    I've read somewhere that the ratio is more like 1/2. I wonder why the 640M, 650M and 660M all have the same number of cores. Probably more cores would be too much for a 128-bit GDDR5/DDR3 memory interface.
     
  11. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Playing devil's advocate here:

    GTX470M > GTX480M
     
  12. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    I am not talking about performance... manufacturing cost is an entirely different concept, and the 480M cost more than the 470M (more CUDA cores).
     
  13. AlwaysSearching

    AlwaysSearching Notebook Evangelist

    Reputations:
    164
    Messages:
    334
    Likes Received:
    15
    Trophy Points:
    31
    When did they announce 660M 192-bit?

    nVidia still shows specs as 128-bit.
     
  14. TheBluePill

    TheBluePill Notebook Nobel Laureate

    Reputations:
    636
    Messages:
    889
    Likes Received:
    0
    Trophy Points:
    30
    I have to wonder: just how many different dies does nVidia actually stock? I know in the CPU world they will essentially produce the same chip, simply disable bits and pieces of it for the lower-end parts, and bin all of them by their performance (or quality).

    If the die is the same size on all of the chips, the cost to produce them would be the same. There are X number of dies on an X-sized wafer. Now... the higher-end parts can suffer from yield issues, because the odds of a defect go up with the number of transistors...

    So in a nutshell, the cost isn't necessarily in adding or subtracting cores; it costs the same to produce a, say, 200x200mm die regardless of the core count. The difference is in how many of the produced parts are not defective or meet spec.

    But... it makes me wonder how nVidia and ATI produce.
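
    For what it's worth, the usual back-of-the-envelope version of that argument looks something like the sketch below. The wafer cost, die sizes and defect density are made-up illustrative numbers, not anything from NVIDIA or a foundry:

    Code:
import math

# Toy cost-per-good-die model: the wafer costs the same either way, but bigger
# dies mean fewer dies per wafer AND lower yield, so cost per *good* die rises.
# All constants below are illustrative assumptions.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation: wafer area over die area, minus an edge-loss term.
    return int(math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2, defects_per_cm2=0.4):
    # Simple Poisson yield model: larger dies are more likely to catch a defect.
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2, wafer_cost_usd=5000):
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2)
    return wafer_cost_usd / good_dies

for area in (120, 220, 350):   # roughly small / mid / large GPU dies
    print(f"{area} mm^2: ~${cost_per_good_die(area):.0f} per good die")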
     
  15. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Well now here is an interesting question....

    Die harvesting is all about the yield bell curve and widening your parameters to get as many dies into products as possible.

    Though I would like to point out the 470M and 480M used different cores.
     
  16. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    //Abandon all hope, ye who enter beyond this point

    Disclaimer: These are the ramblings of a bored young man. Proceed with caution. I do not know if it works like this; I like to play with math and numbers. Think of this post as something the guy from the movie "A Beautiful Mind" would do, except my mind isn't beautiful. Take it with a truckload of salt :p

    EDIT: I also think this comparison is the wrong way to do it, but I had already written so many lines that I went ahead and posted it anyway. :p Based on this, it looked like a 192-bit 660M still cannot surpass or match the 570M. I would also like to know what exactly makes this wrong, since I don't get it. :p

    GT 640M = GT 555M in performance

    GT 640M:
    384 CUDA cores @ 625MHz
    384 x 625 MHz = 240 000

    GT 555M:
    144 CUDA cores @ 709 MHz
    144 x 709 MHz = 102 096

    102 096 / 240 000 = 0.425
    The Fermi part has 42.5% of the Kepler's cores × clock but performs the same.

    GT 650M = GTX 560M in performance

    GT 650M:
    384 CUDA cores @ 850MHz
    384 x 850 = 326400

    GTX 560M
    192 CUDA cores @ 775MHz
    192 x 775MHz = 148 800

    148800/326400 = 0.455
    The Fermi part has 45.5% of the Kepler's cores × clock but performs the same.

    Average: (45.5 + 42.5)/2= 44%


    Performance of a 128bit GTX 660M?

    GTX 660M:
    384 CUDA cores @ 835MHz
    384 x 835MHz = 320 640

    GTX 570M
    336 CUDA cores @ 575MHz
    336 x 575MHz = 193 200

    GTX 670M
    336 CUDA cores @ 598MHz
    336 x 598MHz = 200 928

    Let's make sure these calculations reflect reality to at least some degree. Difference between the 570M and 670M:
    193 200 / 200 928 = 0.96 = 96%
    The 570M performs at 96% of the 670M, or the 670M performs 4% better than the 570M.
    Sounds plausible.

    Difference between 570M and 660M:
    193200/320640 = 0.60 = 60%

    Difference between 670M and 660M:
    200928/320640 = 0.626 = 63%

    Now, what we learned from the 640M vs 555M and 650M vs 560M comparisons is that for two GPUs to match in performance, the Fermi/Kepler ratio needs to be about 44%. Right now we are above that, which means the 660M won't cut it in terms of raw performance, looking at cores alone.

    The GTX 560M saw 15% better gaming performance when going from 128-bit to 192-bit; the memory bandwidth increased by 50%.
    As mentioned previously, a 128-bit 660M has a bandwidth of 64GB/s while a 192-bit 660M would have 96GB/s. That is exactly an increase of 50%, just like the 560M saw.

    So lets adjust the "performance" of the 660M:
    We previously saw 320640, going from 128bit to 192bit will give us 15% better performance:
    320640 x 1.15 = 368736

    Comparison between 570M and 192bit 660M:
    193200/368736 = 0.524 = 52%

    Comparison between 670M and 192bit 660M:
    200928/368736 = 0.545 = 55%

    So in conclusion: a 192-bit 660M still can't match a 570M, since the ratio would still have to come down to 44%, another 8 percentage points.
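
    Cloudfire's heuristic above boils down to a few lines. Here is the same back-of-the-envelope calculation as a sketch, using only the core counts, clocks and the ~15% bandwidth bump from his post, so it inherits all of the same assumptions:

    Code:
# 'Cores x clock' heuristic from the post above. Kepler and Fermi cores are not
# directly comparable; the 0.44 factor is the empirical Fermi/Kepler ratio
# derived above from the 640M/555M and 650M/560M pairs, nothing more.

def raw(cores, clock_mhz):
    return cores * clock_mhz

kepler_660m = raw(384, 835)   # 320,640
fermi_570m  = raw(336, 575)   # 193,200
fermi_670m  = raw(336, 598)   # 200,928

EQUAL_PERF_RATIO = 0.44       # Fermi/Kepler ratio at (roughly) equal performance

for fermi_name, fermi in [("570M", fermi_570m), ("670M", fermi_670m)]:
    for label, kepler in [("128-bit", kepler_660m),
                          ("192-bit, +15% for bandwidth", kepler_660m * 1.15)]:
        ratio = fermi / kepler
        verdict = "660M ahead" if ratio < EQUAL_PERF_RATIO else "660M falls short"
        print(f"{fermi_name} vs 660M {label}: ratio {ratio:.2f} -> {verdict}")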
     
  17. rorkas

    rorkas Notebook Consultant

    Reputations:
    118
    Messages:
    280
    Likes Received:
    0
    Trophy Points:
    30
    Cloudfire, according to your estimates 660M loses even to 650M. I think you are overthinking this ;)
     
  18. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    There were so many numbers that I got lost in my own madness. Thanks for pointing it out. :D
    Anandtech also commented that, on specs, the 650M should surpass the 660M, so there has to be something else involved.

    +rep btw
     
  19. rorkas

    rorkas Notebook Consultant

    Reputations:
    118
    Messages:
    280
    Likes Received:
    0
    Trophy Points:
    30
    If I remember correctly (I'm reasonably sure I read it somewhere) DDR5 650M will be 750 MHz, while 850MHz was for DDR3 only. Not sure what prevents one from OCing DDR5 though.
     
  20. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Right, you're right. You saw it on the Nvidia 660M product page, maybe? I only compared cores and GPU clock above, not memory, which makes it a little more valid. Or maybe that's exactly what makes it wrong, lol. But I can't point out exactly what and why. :)
     
  21. yknyong1

    yknyong1 Radiance with Radeon

    Reputations:
    1,191
    Messages:
    2,095
    Likes Received:
    8
    Trophy Points:
    56
    I think the amount of GPU Boost applied should be taken into consideration, up to the specified TDP or thermal limit.
     
  22. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Who said that quality is not a part of the manufacturing cost? Recently I contacted Nvidia about Quadro/Tesla compared to GTX, and the gist of the answer was that the quality of the end product (the manufacturing process) is entirely different: Tesla/Quadro products last much longer than a GTX of the same specifications under heavy load. That is why they cost 5 times the price. Nobody making corporate decisions is stupid; they pay 5 times the price because they NEED the GPU clusters running 24/7 (not our casual 2 hours of gaming before shutting the computer down). I also gained a lot more information, but it is confidential and I was asked not to disclose it (still not privileged information though, very simple).
     
  23. lordbaldric

    lordbaldric Notebook Consultant

    Reputations:
    2
    Messages:
    203
    Likes Received:
    6
    Trophy Points:
    31
    From the Nvidia site for 650M:

    GPU Engine Specs:
    CUDA Cores: 384
    Graphics Clock (MHz): 850 MHz with DDR3 / 735 MHz with GDDR5
    Texture Fill Rate (billion/sec): Up to 27.2


    So the GDDR5 version is being limited to a 735MHz core speed to make sure it's slower than the 660M. Looks like 650M owners might be able to OC up to 660M speeds without too much trouble (assuming the cooling can handle it).
     
  24. nissangtr786

    nissangtr786 Notebook Deity

    Reputations:
    85
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    0
  25. yknyong1

    yknyong1 Radiance with Radeon

    Reputations:
    1,191
    Messages:
    2,095
    Likes Received:
    8
    Trophy Points:
    56
    Expected given that the core clock is raised by 100MHz.
     
  26. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    No 192bit for Asus after all.

     
  27. maliusmaximus

    maliusmaximus Notebook Enthusiast

    Reputations:
    0
    Messages:
    28
    Likes Received:
    0
    Trophy Points:
    5
    Does anybody know where the first place to find information on the TDPs will be?
    In particular, I'm looking for the TDP of the GK107 (GTX 660M).
    I read through the Kepler white paper, but it only gives TDPs for the desktop chips. Nothing yet about the mobile chips.
     
  28. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Why? The TDP does not mean real power consumption.
     
  29. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    But it's nice to know, Meaker :). How does one calculate/find out the TDP?

    There is an early leak from SemiAccurate that suggested 40-45W for the GTX 660M. God knows if that is accurate.
     
  30. yknyong1

    yknyong1 Radiance with Radeon

    Reputations:
    1,191
    Messages:
    2,095
    Likes Received:
    8
    Trophy Points:
    56
    TDP is important for GPU Boost. If the 640M has 25W TDP, 650M has 35W TDP, for example, then 650M will boost for longer and at higher clock speeds.
     
  31. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Desktop cards let you adjust that.....
     
  32. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Yes, that's the reason the 650M has a higher TDP: it reaches higher clocks than the 640M. I am not sure whether Nvidia has built a boost design that throttles down after a while because of heat, though. I hope not.

    With desktop cards you can pretty much do whatever you want as long as you have proper cooling. Notebooks have a very small overclocking limit due to the small and compact design. Oh god how I wish this could come soon :)
     
  33. TheBluePill

    TheBluePill Notebook Nobel Laureate

    Reputations:
    636
    Messages:
    889
    Likes Received:
    0
    Trophy Points:
    30
    That is a good point I think a lot of people overlook.
     
  34. yknyong1

    yknyong1 Radiance with Radeon

    Reputations:
    1,191
    Messages:
    2,095
    Likes Received:
    8
    Trophy Points:
    56
    Question is: Will the TDP be configurable? Arrandale was fun...
     
  35. maliusmaximus

    maliusmaximus Notebook Enthusiast

    Reputations:
    0
    Messages:
    28
    Likes Received:
    0
    Trophy Points:
    5
    The reason I'm asking is that for the laptop I'm waiting to be refreshed with Kepler, TDP is the limiting factor. So if I know the TDP of the 660M, then I know whether there's a good chance it will be in the refresh.
    So again, any idea where that information will be posted first?
     
  36. yknyong1

    yknyong1 Radiance with Radeon

    Reputations:
    1,191
    Messages:
    2,095
    Likes Received:
    8
    Trophy Points:
    56
    What laptop is that?
     
  37. maliusmaximus

    maliusmaximus Notebook Enthusiast

    Reputations:
    0
    Messages:
    28
    Likes Received:
    0
    Trophy Points:
    5
    m14x r1, being upgraded to r2. But don't pay attention to that. I just want to know where the latest info on TDPs will be posted
     
  38. yknyong1

    yknyong1 Radiance with Radeon

    Reputations:
    1,191
    Messages:
    2,095
    Likes Received:
    8
    Trophy Points:
    56
  39. StigtriX

    StigtriX Notebook Enthusiast

    Reputations:
    22
    Messages:
    37
    Likes Received:
    0
    Trophy Points:
    15
    Aren't you forgetting something? The new Kepler GPUs all have improved AA features that will maintain AA quality with lower performance loss, called FXAA and TXAA. They can also work together with traditional AA, making games look even better. FXAA is available regardless of the GPU brand, but TXAA is restricted to nVIDIA GPUs as of yet and needs to be implemented in the games. The improved features of the 600 series are hardware related.

    "For example if a game is performing poorly because 2X, or 4X, or 8X AA is reducing performance by a great deal, just turn on FXAA instead and get 4X AA image quality at no performance hit".

    "The first slide above shows you how NVIDIA has positioned the performance and image quality of TXAA. With TXAA 1 enabled NVIDIA claims you'll get better the quality of 8X MSAA (or slightly better) but at the same performance level of 2X MSAA".

    This is shown to actually be true or at least within the performance promised by nVIDIA.

    Source
    AA vs. FXAA in The Elder Scrolls V: Skyrim
     
  40. TheBluePill

    TheBluePill Notebook Nobel Laureate

    Reputations:
    636
    Messages:
    889
    Likes Received:
    0
    Trophy Points:
    30
    I am curious if this will be an "Always On" Feature of 600 Kepler parts?
     
  41. maxheap

    maxheap caparison horus :)

    Reputations:
    1,244
    Messages:
    3,294
    Likes Received:
    191
    Trophy Points:
    131
    Nvidia FXAA is an always-on feature; no idea about TXAA, as only Kepler will support it. I just tried FXAA and the results are good (adaptive V-sync especially is really nice).
     
  42. StigtriX

    StigtriX Notebook Enthusiast

    Reputations:
    22
    Messages:
    37
    Likes Received:
    0
    Trophy Points:
    15
    FXAA can be set to either ON or OFF in the nVIDIA Control Panel.

    TXAA is based on temporal super-sampling, but without the requirements of classic super-sampling. Here's a tech demo, although you should take it with a grain of salt: NVIDIA TXAA Techdemo - YouTube
    As far as I know, there are no TXAA options in the nVIDIA Control Panel, so since the game must support it for it to work I assume you'll have to enable it through the game's graphics settings.
     
  43. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Correct. Only FXAA is something you can enable through the Control Panel. TXAA is brand new and has not been incorporated into games yet, which means we must wait for updates to the games already out and for developers to incorporate it in upcoming games.

    TXAA is just software based, btw, which means that in theory Fermi should be able to use it too. But who knows if Nvidia will allow it for Fermi. It will probably be something exclusive to Kepler to get more sales.
     
  44. StigtriX

    StigtriX Notebook Enthusiast

    Reputations:
    22
    Messages:
    37
    Likes Received:
    0
    Trophy Points:
    15
    Yes, the technology is software related, but the hardware in the Kepler series is optimized for it. Maybe it would be more correct to say it's optimized software... I don't know :p
     
  45. Cloudfire

    Cloudfire (Really odd person)

    Reputations:
    7,279
    Messages:
    10,304
    Likes Received:
    2,878
    Trophy Points:
    581
    Yeah, I'm not doubting that Kepler is optimized for that at all. :)

    Pretty awesome technology btw
     
  46. StigtriX

    StigtriX Notebook Enthusiast

    Reputations:
    22
    Messages:
    37
    Likes Received:
    0
    Trophy Points:
    15
    Yes, I also think it would be possible to use it with the Fermi GPUs, but since they are not optimized for it I don't know if the performance would be the same. But... The most reasonable guess would be that nVIDIA tries to convince buyers that they need to get the new cards to get the new tech, like you said ;)

    SHOW ME THE MONEY!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
     
  47. maliusmaximus

    maliusmaximus Notebook Enthusiast

    Reputations:
    0
    Messages:
    28
    Likes Received:
    0
    Trophy Points:
    5
    Any information yet on the thermals for the GTX 660M (not just Kepler generally)?
     
  48. Red Line

    Red Line Notebook Deity

    Reputations:
    1,109
    Messages:
    1,289
    Likes Received:
    141
    Trophy Points:
    81
    Source. So no problem using this feature on Fermi GPUs.
     
  49. kinki

    kinki Notebook Guru

    Reputations:
    0
    Messages:
    53
    Likes Received:
    0
    Trophy Points:
    15
    What was the supposed release date of that 660M card again, and aside from the slightly higher clock, why go for it rather than a 550M?

    The issue: I was looking into a new 9130 but was a little disappointed to see you couldn't step the card down to a 650M, which I prefer to the 670M (a rebadged 580, I believe). My question about the 650M refers to the new 6165 notebook. It's almost a $500 difference, lighter, still sports a 1080p matte screen, is upgradable to the 3rd gen processor... blah blah, but:

    1) Can the 550M GDDR5 handle 1080p gaming? And,

    2) Although no photos are available yet, considering the old 5165 model pictures, is overheating going to be an issue with this laptop (OCing taken into consideration)?

    Thanks
     
  50. Kingpinzero

    Kingpinzero ROUND ONE,FIGHT! You Win!

    Reputations:
    1,439
    Messages:
    2,332
    Likes Received:
    6
    Trophy Points:
    55
    The 670M is a rebadged 570M; the 675M is a rebadged 580M.
    As to your question:

    1) Nope. Those cards are mid-range cards that perform quite well at HD+ resolutions such as 1366x768 or 720p. Gaming at full HD will require setting games to their lowest settings, and even that doesn't mean they will run at a playable speed.
    The 550M and 555M are ideal for 768p/720p panels, as that's the sweet spot for their segment.

    2) Heat is always an issue, but in these notebooks it's not that big a problem. The laptop gets warm but not incandescent; it's tolerable.

    My personal advice is to forget about the 550M, since it's older/slower tech, and go for the 650M, especially if it's GDDR5. The new entry-level Kepler cards clearly have a boost in performance compared to the Fermi 550M/555M, and based on what you want to do (gaming at 1080p, I suppose) it's well worth the investment.
    But be sure not to get the DDR3 version, regardless of your choice of GPU.
     