I wasn't sure where else to ask this question as it relates to both desktop and notebook, so... Why did nVIDIA not stick with the 200 or 9000 series and build upon it? Instead, it looks like the newer Fermi chips are crippled with fewer texture units and ROPs. For example:
GTX 580: 512 shaders, 64 texture units, 48 ROPs.
GTX 280: 240 shaders, 80 texture units, 32 ROPs.
Shouldn't the GTX 580 have 512 shaders, 128 texture units, and 64 ROPs if it were related to the 200/9000 series?
I hope you understand my question.
-
Anthony@MALIBAL Company Representative
You can't directly compare them because of the big change in architecture and the shrink in die size. For the same reason, you can't directly compare, say, Intel Sandy Bridge to the Core series or Pentiums. It really is an apples-to-oranges thing.
If you check actual performance, the 580 isn't "crippled"; it's nearly 5x faster than the 280, which is huge. The 200 series wasn't Fermi and just doesn't even come close in performance. Also, if you follow Nvidia naming schemes, they typically increment the scheme after every new release. Last year was the 4xx series, while this year is the 5xx series. It makes it easier to tell the generations apart. When they do things like the 485M, it just muddies the water.
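To put some rough numbers on that, here's a quick back-of-the-envelope sketch of theoretical fill rates. The core clocks (772 MHz for the GTX 580, 602 MHz for the GTX 280) are the reference values as I recall them, not something from the spec lines above, so treat the output as approximate:

```python
# Rough theoretical fill rates from unit counts x core clock.
# Clocks are assumed reference values for each card (approximate).

cards = {
    # name: (texture units, ROPs, core clock in MHz)
    "GTX 580 (Fermi)": (64, 48, 772),
    "GTX 280 (GT200)": (80, 32, 602),
}

for name, (tmus, rops, core_mhz) in cards.items():
    texel_rate = tmus * core_mhz / 1000.0  # gigatexels/s
    pixel_rate = rops * core_mhz / 1000.0  # gigapixels/s
    print(f"{name}: ~{texel_rate:.1f} GT/s texture fill, "
          f"~{pixel_rate:.1f} GP/s pixel fill")
```

Even on paper, the 580's 64 texture units keep pace with the 280's 80 because of the higher clock, and its pixel fill rate is nearly double; and that's before you count the much faster shaders and the per-unit efficiency gains in Fermi.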