The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved to NotebookTalk.net after the shutdown.

    5870 vs 460m. Some data from notebookcheck.

    Discussion in 'Gaming (Software and Graphics Cards)' started by Lieto, Mar 4, 2011.

  1. Lieto

    Lieto Notebook Deity

    Reputations:
    108
    Messages:
    896
    Likes Received:
    3
    Trophy Points:
    31
    I see a lot of people choosing between these two in "what should I buy" threads, and many actually choose the 460M. Why? Is there some stability problem with the 5870? Because its performance is around 10-25% higher.

    Let's say Metro 2033 won't even run smoothly on "high" with the 460M; it will with the 5870. I am close to saying that the 5870 is the GPU king for its price at the moment. It can run EVERY game on high and it's fairly cheap. Am I missing something?

    Mafia 2, Ultra settings:
    460M - ~33 FPS (33/32.9)
    5870M - 41.1 FPS
    +19.71% to the 5870M

    Starcraft 2, Ultra settings:
    460M - ~34 FPS (31.6/36)
    5870M - 37.2 FPS
    +8.6% to the 5870M

    Metro 2033, Ultra settings:
    460M - ~9 FPS (9/9.6)
    5870M - 10.6 FPS
    +15% to the 5870M
    *neither is playable

    Metro 2033, High settings:
    460M - ~27 FPS
    5870M - 33 FPS
    +20% to the 5870M

    CoD MW2, Ultra settings:
    460M - ~44 FPS (41.3/47)
    5870M - ~50 FPS (48.7/50.7/51.1)
    +12% to the 5870M

    Battlefield: Bad Company 2, Ultra settings:
    460M - ~30 FPS (29/30)
    5870M - ~32 FPS (30.9/32.7/33.1)
    +6.25% to the 5870M

    Anno 1404, Ultra settings:
    460M - 44 FPS
    5870M - 81 FPS
    +45.8% to the 5870M

    Sims 3, High:
    460M - 61 FPS
    5870M - 81 FPS
    +24.7% to the 5870M

    Crysis GPU Benchmark, Ultra:
    460M - ~13 FPS (12.9/13)
    5870M - 17.1 FPS
    +23.97% to the 5870M

    DIRT 2, Ultra settings:
    460M - ~42 FPS (41.6/42.9)
    5870M - ~31 FPS (29.6/30.9/33.8)
    +26% to the 460M

    Based on the benchmarks at http://www.notebookcheck.net/Computer-Games-on-Laptop-Graphic-Cards.13849.0.html
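
    For anyone who wants to check the math, here is a quick Python sketch (my own, not from notebookcheck) that recomputes the deltas from the raw FPS numbers above, within rounding of the figures quoted. Note the convention: the gap is divided by the faster card's FPS, not the 460M's, so the figures read lower than a plain "% faster than the 460M" number would.

        # Recompute the deltas from the raw FPS numbers quoted above.
        # Convention as in the post: divide the gap by the *faster* card's FPS.
        benchmarks = {
            "Mafia 2 Ultra":     (33.0, 41.1),  # (460M, 5870M)
            "Starcraft 2 Ultra": (34.0, 37.2),
            "Anno 1404 Ultra":   (44.0, 81.0),
            "Sims 3 High":       (61.0, 81.0),
            "DIRT 2 Ultra":      (42.0, 31.0),
        }

        for game, (gtx460m, hd5870m) in benchmarks.items():
            gap = abs(hd5870m - gtx460m)
            pct = 100.0 * gap / max(gtx460m, hd5870m)  # gap vs. the faster card
            winner = "5870M" if hd5870m > gtx460m else "460M"
            print(f"{game}: +{pct:.1f}% to the {winner}")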
     
  2. fzhfzh

    fzhfzh Notebook Deity

    Reputations:
    289
    Messages:
    1,588
    Likes Received:
    0
    Trophy Points:
    55
    1. CUDA
    2. PhysX
    3. My laptop had no 5870M option
    4. Fewer problems (ATI cards often have issues with new games that only get fixed in the next driver update)
    5. Better picture quality (I can see a very big difference between ATI and Nvidia cards, especially with lighting; light spots rendered by Nvidia cards look much more natural and smoother).
     
  3. Botsu

    Botsu Notebook Evangelist

    Reputations:
    105
    Messages:
    624
    Likes Received:
    0
    Trophy Points:
    30
    It's great that you can, because even tech websites couldn't without Nvidia and external observers pointing it out to them on screenshots taken under specific circumstances.
     
  4. fzhfzh

    fzhfzh Notebook Deity

    Reputations:
    289
    Messages:
    1,588
    Likes Received:
    0
    Trophy Points:
    55
    Yes, I can. In games like Crysis, Dragon Age, etc., the light-spot reflections on characters look very pixelated, as though it's a picture defect, while they look fine when viewed on my desktop with a 560 Ti. It might have something to do with AA.
     
  5. Phistachio

    Phistachio A. Scriabin

    Reputations:
    1,930
    Messages:
    2,588
    Likes Received:
    145
    Trophy Points:
    81
    CUDA and PhysX = ATi EyeSpeed

    4. I actually had a couple of BSODs on my other, Nvidia laptop; no BSODs on my ATi laptop.

    5. Nope, it has been proven and shown that ATi has better picture quality. I have compared my other laptop, with Nvidia, against my ATi one; ATi wins by miles.
     
  6. Nick

    Nick Professor Carnista

    Reputations:
    3,870
    Messages:
    4,089
    Likes Received:
    652
    Trophy Points:
    181
    I've always found ATi/AMD has worse video drivers, which can be a pain to update. That said, I've never received a BSOD from either ATi/AMD or Nvidia drivers.


    4. I don't think a few BSODs on one Nvidia laptop make ATi/AMD better. There are many other things that can cause a BSOD, like an incorrectly installed video driver, for one.

    5. Where has it been proven that either makes games look better?
     
  7. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    The 460M vs Mobility 5870 horse has been beaten to death, several times over. Why restart the debate, when both chips are already phased out?
     
  8. City Pig

    City Pig Notebook Virtuoso

    Reputations:
    483
    Messages:
    2,322
    Likes Received:
    0
    Trophy Points:
    55
    Because they're both still very common? (Well, the 460M at least.)
     
  9. Richteralan

    Richteralan Notebook Evangelist

    Reputations:
    20
    Messages:
    416
    Likes Received:
    11
    Trophy Points:
    31
    People have been saying this for 10 years.

    First of all, what is good picture quality?

    Better contrast?
    Better saturation?
    A better LCD panel?
    Brighter?
    No artificial color boost?
    etc...

    In my experience with this subject over the last 10 years, ATI tends to provide better contrast and saturation, making the picture punchier at default settings than NVIDIA's.

    After you calibrate both cards, nobody can tell the difference.

    I really wish this ATI vs. NVIDIA picture-quality topic would die.
     
  10. Lieto

    Lieto Notebook Deity

    Reputations:
    108
    Messages:
    896
    Likes Received:
    3
    Trophy Points:
    31
    Sorry, I am new :)) And yes, the cards are still really common.
     
  11. masterchef341

    masterchef341 The guy from The Notebook

    Reputations:
    3,047
    Messages:
    8,636
    Likes Received:
    4
    Trophy Points:
    206
    I'm not English, but that's bollocks. Both are fine. PS: you need a source to make a claim like this, and an nVidia blog post is not sufficient.
     
  12. GapItLykAMaori

    GapItLykAMaori Notebook Evangelist

    Reputations:
    28
    Messages:
    385
    Likes Received:
    0
    Trophy Points:
    30
    I'm pretty sure the percentage increases are wrong; e.g. 61 FPS on the 460M vs. 81 FPS on the 5870 should be +32.8%, not 24.7%.
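
    (For clarity: the two figures divide by different baselines. Worked out, (81 - 61) / 61 ≈ 32.8% is the 5870's gain relative to the 460M, while (81 - 61) / 81 ≈ 24.7% is the same gap expressed relative to the 5870M, which is the convention the opening post used.)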
     
  13. daranik

    daranik Notebook Deity

    Reputations:
    57
    Messages:
    865
    Likes Received:
    0
    Trophy Points:
    30
    Do CUDA and PhysX really do much? I mean, sure, there was Mirror's Edge, but since then I don't think I've seen a shining example to buy into PhysX; it's not like BF3 or Crysis 2 support it heavily, or even at all!
     
  14. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    Since physics runs on the CPU 99% of the time, it's absolutely not relevant.
     
  15. fzhfzh

    fzhfzh Notebook Deity

    Reputations:
    289
    Messages:
    1,588
    Likes Received:
    0
    Trophy Points:
    55
    CUDA is quite huge for people doing video rendering and picture editing, at least until Autodesk/Adobe/etc. start to support OpenCL.
     
  16. jenesuispasbavard

    jenesuispasbavard Notebook Evangelist

    Reputations:
    51
    Messages:
    412
    Likes Received:
    2
    Trophy Points:
    31
    I have a GTX 560 Ti in my desktop and a Radeon HD 6650M in my notebook and I see absolutely no difference in visual quality on the _same_ monitor.

    And yes, until OpenCL comes around, CUDA will always be my preference for things like SETI@Home.
     
  17. Amnesiac

    Amnesiac 404

    Reputations:
    1,312
    Messages:
    3,433
    Likes Received:
    20
    Trophy Points:
    106
    You did test with an external monitor and a DVI connection, right?... You can't honestly tell me that you compared both cards' color quality without accounting for the characteristics of the screens involved...
     
  18. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,088
    Trophy Points:
    431
    The HD 5870M is a better card, but that does not stop the 460M from being a great alternative.
     
  19. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    Correct me if I'm wrong, but my understanding from reading a lot of articles and benchmarks on Nvidia and ATI cards is that even though ATI cards reach a higher maximum framerate than their Nvidia counterparts, Nvidia cards sustain a more consistent average framerate and a higher minimum framerate than the competing ATI card. If this is true, then Nvidia's cards have the power where it's needed, because in the end it doesn't matter if you can hit 100 FPS when you still keep dropping below 20 FPS on a frequent basis.
     
  20. darth voldemort

    darth voldemort Notebook Evangelist

    Reputations:
    17
    Messages:
    300
    Likes Received:
    0
    Trophy Points:
    30
    On the contrary (just my opinion), when I had an Nvidia card my frames would go all over the frikkin' place! Like crazy! I still don't know why, but it might have been what is called throttling, though I didn't know throttling happened on the GT 240M. When I got my 5870M, I was amazed that the range of framerates in all games was about half as wide. In other words, I am claiming that Nvidia cards do not have better frame ranges than ATI, in my experience.
     
  21. xxERIKxx

    xxERIKxx Notebook Deity

    Reputations:
    159
    Messages:
    1,488
    Likes Received:
    0
    Trophy Points:
    55
    In my opinion, minimum FPS is usually impacted most by the CPU.
     
  22. GapItLykAMaori

    GapItLykAMaori Notebook Evangelist

    Reputations:
    28
    Messages:
    385
    Likes Received:
    0
    Trophy Points:
    30
    You're right, they don't; the simple fact is that both sides get huge FPS dips. The only way to counter this is by overclocking your CPU (which will only help to a certain extent) or getting a more powerful GPU. A card like the 5870M is considered mid-range and just doesn't cut it if you don't want dips.
     
  23. darth voldemort

    darth voldemort Notebook Evangelist

    Reputations:
    17
    Messages:
    300
    Likes Received:
    0
    Trophy Points:
    30
    Same exact computer; I just changed the GPU.
     
  24. darth voldemort

    darth voldemort Notebook Evangelist

    Reputations:
    17
    Messages:
    300
    Likes Received:
    0
    Trophy Points:
    30
    I actually heard that RAM can cause FPS dips, but it was the same CPU. I know the 5870M is only mid-range since it's a laptop card, but wouldn't you still get dips on a desktop 6990? I thought dips were caused by the rendering of bullets or of a whole new scene, such as outside versus inside.
     
  25. Hedonist

    Hedonist Notebook Evangelist

    Reputations:
    30
    Messages:
    403
    Likes Received:
    0
    Trophy Points:
    30
    This is true. Love what you want to love, right? :)

    I have top-of-the-line cards from both Nvidia and ATI... I love them both. But my ATI is newer; therefore, I conclude, I love ATI more. Aww... just kidding, I love those two.
     
  26. rschauby

    rschauby Superfluously Redundant

    Reputations:
    865
    Messages:
    1,560
    Likes Received:
    0
    Trophy Points:
    55
    I own a laptop with a 5870M so by that fact alone it is FAR superior to the 460M.
     
  27. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    When I refer to "dips" in frame rates, I'm talking about when the CPU/GPU is trying to hold a consistent framerate, say 30 FPS, since 30 FPS is about the minimum for smooth, fluid movement. When a huge firefight breaks out, or a lot of graphics and detail have to be calculated on screen every second while you swing your first-person view around, the CPU/GPU has a hard time drawing 30 frames each second, takes a little longer per frame, and the framerate drops to maybe 14 frames calculated and drawn per second. You can use these situations as a benchmark to see how much your CPU/GPU can handle before it struggles with a given scene.
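
    A rough sketch of what measuring those dips can look like in practice: given per-frame render times from a benchmark run (the numbers below are invented for illustration), compute the instantaneous framerate and flag frames that fall under a playability floor. In Python:

        # Flag framerate "dips" from per-frame render times (milliseconds).
        # The frame times below are invented illustration data.
        frame_times_ms = [33, 34, 33, 71, 69, 35, 33, 32, 30, 74, 33]

        FLOOR_FPS = 20  # below this, motion stops feeling fluid

        for i, ms in enumerate(frame_times_ms):
            fps = 1000.0 / ms
            if fps < FLOOR_FPS:
                print(f"frame {i}: {fps:.1f} FPS  <-- dip")

        avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
        min_fps = 1000.0 / max(frame_times_ms)
        print(f"average: {avg_fps:.1f} FPS, minimum: {min_fps:.1f} FPS")

    This is also why an average FPS figure can hide exactly the dips being described here; the minimum (or per-frame) numbers tell the real story.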
     
  28. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    Well, because ATI GPUs are "fixed function" while Nvidia's GPUs are "programmable", thereby tapping the Nvidia GPU's potential as a more cooperative coprocessor and giving the system more effective parallel processing.

    Through the use of CUDA and PhysX, software developers can offload calculations from the CPU and lighten its burden by transferring them to the more powerful Nvidia GPU. This uses the same principle as supercomputers and parallel processing.

    We no longer measure a system's power in bits, as in 16, 32, or 64 bits, but in how many cores it has. Intel and AMD have implemented the principles of supercomputing and parallel processing in their high-end CPUs by adding multiple cores (dual, quad, etc.). These cores are akin to many tiny CPUs running and cooperating in parallel, the same way supercomputers use many CPUs at once.

    By making the GPU programmable and able to do, essentially, what the CPU is supposed to be doing, you divide the job and take stress off the CPU, making the system run more efficiently and get things done faster, like two workers cooperating on a job instead of one worker bearing the whole load while the other slacks. Nvidia GTX 460, man! FTW!
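
    To make the offload idea concrete, here is a minimal GPGPU sketch. It assumes an NVIDIA GPU, the CUDA toolkit, and the third-party Python package pycuda, none of which the post itself names, so treat it as illustrative only: the CPU hands an array to the GPU, which processes it with one thread per element, the simple data-parallel pattern being described.

        # Minimal CPU-to-GPU offload sketch (Python + pycuda; assumes an
        # NVIDIA GPU and the CUDA toolkit are installed).
        import numpy as np
        import pycuda.autoinit              # creates a CUDA context on import
        import pycuda.driver as drv
        from pycuda.compiler import SourceModule

        # One GPU thread per array element: massively parallel simple work.
        mod = SourceModule("""
        __global__ void scale(float *out, const float *in, float factor, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                out[i] = in[i] * factor;
        }
        """)
        scale = mod.get_function("scale")

        n = 1 << 20
        data = np.random.randn(n).astype(np.float32)
        out = np.empty_like(data)

        # drv.In/drv.Out handle the CPU<->GPU copies around the kernel launch.
        scale(drv.Out(out), drv.In(data), np.float32(2.0), np.int32(n),
              block=(256, 1, 1), grid=((n + 255) // 256, 1))

        assert np.allclose(out, data * 2.0)  # the CPU checks the GPU's work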
     
  29. Phistachio

    Phistachio A. Scriabin

    Reputations:
    1,930
    Messages:
    2,588
    Likes Received:
    145
    Trophy Points:
    81
    You are kinda outdated :p

    By your logic, ATi is also programmable. Look at AMD Stream (= CUDA) and AMD EyeSpeed (= PhysX). When I installed AMD Stream, I actually stopped having drops during bullet collisions, etc., and I got a HUGE increase in FPS. So apparently the GPU and CPU share the work once AMD Stream is installed, because there were physics "moments" where the GPU had some sort of "spikes" in usage.

    So yeah, overall, the 5870M is still better than a GTX 460M :)
     
  30. TomJG90

    TomJG90 Notebook Evangelist

    Reputations:
    46
    Messages:
    425
    Likes Received:
    0
    Trophy Points:
    30
    Pretty much the same thing here... :D
     
  31. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
    According to this article from notebookreview.com:

    PAX East 2011: Alienware's M17x Notebook Steals the Show

    Alienware's new M17x equipped with the new AMD 6870 (which is essentially a 5870) is considered their low-end SKU, compared to the mid-range SKU that comes equipped with the Nvidia GTX 460M, so that in itself speaks to where these GPUs rank.

    In fact, if we compare the 3DMark06 benchmarks between the 5870 and the 460M on notebookcheck.com, the 5870's average score of 12779 is beaten by the 460M's average score of 13196.

    Finally, concerning Nvidia's CUDA and AMD's Stream: Nvidia's CUDA is more widely adopted by the development community than the newer, less-used AMD Stream. As we all know, it is not the superior version that matters, but the version with more support.
     
  32. Mr_Mysterious

    Mr_Mysterious Like...duuuuuude

    Reputations:
    1,552
    Messages:
    2,383
    Likes Received:
    15
    Trophy Points:
    56
    Are you serious?????

    Mr. Mysterious
     
  33. xxERIKxx

    xxERIKxx Notebook Deity

    Reputations:
    159
    Messages:
    1,488
    Likes Received:
    0
    Trophy Points:
    55
    Um, just because the M17x is cheaper with the 6870 than with the 460M doesn't mean the 460M is better. In the M15x, the GTX 260M was more expensive than the 5850, yet it was behind it in performance and lacked DX11.

    And who cares about 3DMark06? What CPU were they using? If you compare the 5870 and the 460M in 3DMark11, the 5870 scores much higher on average.
     
  34. Harleyquin07

    Harleyquin07 エミヤ

    Reputations:
    603
    Messages:
    3,376
    Likes Received:
    78
    Trophy Points:
    116
    The Notebookreview main article also states that the top-end model for the announced Alienware is a 6970 with 2GB of VRAM, although that does not seem like much, as it's still only comparable to the current top-end 485M from Nvidia.
     
  35. alexUW

    alexUW Notebook Virtuoso

    Reputations:
    1,524
    Messages:
    2,666
    Likes Received:
    2
    Trophy Points:
    56
    5870 vs. GTX 460m ...... Who cares ... ?

    Both are good cards!
     
  36. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,088
    Trophy Points:
    431
    They offer the GTX 460M as an "upgrade" more as a marketing stunt than because it is better. That, and the fact that Nvidia charges more for their GTX. Both cards are on a similar level, but the HD 5870M is faster overall, except in DX11, where both cards are rather slow in the end.

    3DMark06 scores are useless by themselves, because the CPU score generally adds too much. If you go by the individual SM2.0 and SM3.0 scores, you get a better picture. As for Vantage, the GPU score also reflects the extra muscle of the HD 5870/HD 6870.

    It has been debated to death already.

    As for CUDA and Stream: CUDA is more widely adopted, but that does not mean Stream is useless or that it does not bring similar benefits. In the end, AMD's marketing and strategy with Stream have not matched Nvidia's popularity. That hardly matters when you are focusing on a gaming GPU, but if you heavily use CUDA apps, then of course an Nvidia offering will be the better choice.
     
  37. Mr_Mysterious

    Mr_Mysterious Like...duuuuuude

    Reputations:
    1,552
    Messages:
    2,383
    Likes Received:
    15
    Trophy Points:
    56
    Can anyone explain how someone can think that a 5870m is MID-RANGE???

    Mr. Mysterious
     
  38. xxERIKxx

    xxERIKxx Notebook Deity

    Reputations:
    159
    Messages:
    1,488
    Likes Received:
    0
    Trophy Points:
    55
    Compared to desktop GPUs, the Mobility 5870 is considered mid-range.
     
  39. Mr_Mysterious

    Mr_Mysterious Like...duuuuuude

    Reputations:
    1,552
    Messages:
    2,383
    Likes Received:
    15
    Trophy Points:
    56
    Oh well we're not in a desktop forum now, are we? :p

    Mr. Mysterious
     
  40. jacob808

    jacob808 Notebook Deity

    Reputations:
    52
    Messages:
    1,002
    Likes Received:
    0
    Trophy Points:
    55
     
  41. Mr_Mysterious

    Mr_Mysterious Like...duuuuuude

    Reputations:
    1,552
    Messages:
    2,383
    Likes Received:
    15
    Trophy Points:
    56
     
  42. oldstyle

    oldstyle Notebook Consultant

    Reputations:
    36
    Messages:
    100
    Likes Received:
    0
    Trophy Points:
    15
    By the time CUDA becomes relevant, these cards will be has-beens. And talking about console games, games coded for one specific platform and hardware, has zero to do with PC games.

    Comments like the ones ryzeki made are just ignorant.
     
  43. GapItLykAMaori

    GapItLykAMaori Notebook Evangelist

    Reputations:
    28
    Messages:
    385
    Likes Received:
    0
    Trophy Points:
    30
    I am dead serious; 5770-level performance is not that extreme...
     
  44. ttnuagmada

    ttnuagmada Notebook Evangelist

    Reputations:
    50
    Messages:
    382
    Likes Received:
    0
    Trophy Points:
    30
    The R1 M17x had so many problems with Nvidia drivers that they opted to start selling it with 4870s, even though the motherboard was Nvidia. People keep perpetuating the "ATI has crappy drivers" lie even though it has been false for a decade.
     
  45. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,088
    Trophy Points:
    431
    Seriously, would you mind telling me how you would SELL a customer a weaker, more expensive card? Don't you think it would sit a bit odd to offer the stronger GPU for LESS money? That's the problem boutiques have with Nvidia at the moment.

    An HD 5870M being faster and cheaper does not help sell a 460M as anything other than a forced "upgrade". It is not an upgrade at all: a few extra options with overall weaker performance.

    Then you must be well aware that going with proprietary technology, when there are open standards, is usually the road to failure in the end, right? Both Nvidia and ATI GPUs are capable of GPGPU functions. Stop talking as if it's an exclusive feature of Nvidia.

    As GPGPU functions become more developed, OpenCL and other standards will take over, and proprietary technologies will hardly be used.

    As for the not-so-relevant Sega/Sony comment... Sega failed against Sony not only because of price, but because Sega continuously released upgrades and add-ons for its consoles, plus extra consoles; the Saturn was hardly in shape for the battle. Additionally, Sony welcomed developers with open arms, making it cheaper to develop for them, without restrictive contracts akin to Nintendo's back then.

    One of the main differences between your example and the current situation is the fact that both companies have had GPGPU capabilities for a while. In fact, the very first company to develop unified programmable shaders was ATi, with its Xenos GPU for consoles and the R520 core as its PC counterpart. Since then, both companies have had unified programmable shaders capable of performing different kinds of tasks.


    Mind telling me how MY comment was just ignorant?
     
  46. Yiddo

    Yiddo Believe, Achieve, Receive

    Reputations:
    1,086
    Messages:
    4,643
    Likes Received:
    1
    Trophy Points:
    105
    I have always been an Nvidia fanboy and was easily bought by the whole PhysX thing; that was up until I had the chance to pick up my G73JH on the cheap, with my first ATI card in it.

    I personally would rather have had the Nvidia, because you always stick to what you know works for you. There is also the argument of temps against power: the power the 5870 throws out is second to none, but it comes at the price of temperatures near the boiling point of water.

    When it comes down to it, both have their pros and cons and they end up cancelling each other out, because they are both very, very good cards. I will admit that having the JH has warmed me towards ATI, but would I still rather have the GTX 460M, knowing it is maybe less powerful? YES.

    Why? There is no logical reason, only that it sounds cool. Remember the Voodoo3 3000? Sounds badass.

    At the end of the day, none of this makes much difference; as long as it's got GDDR5 stamped on the label, you can do no wrong.
     
  47. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,088
    Trophy Points:
    431
    Temps are wholly dependent on the laptop maker. My MSI keeps a heavily overclocked HD 5870M under 80°C, even at 900/1035 clocks.

    As for memory, remember that memory type is not the only thing that matters... memory type, speed, and bus width together determine the memory bandwidth, which can help, or can become the bottleneck, at certain resolutions. Sadly, you can get something like 128-bit GDDR5 at average clock speeds vs. 192-bit GDDR5 at even lower clock speeds.
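
    To put rough numbers on that, using the commonly listed reference specs (treat the exact clocks as an assumption on my part): bandwidth = bus width in bytes × effective data rate. The Mobility HD 5870 pairs a 128-bit (16-byte) bus with 4.0 GT/s effective GDDR5, giving 16 × 4.0 = 64 GB/s, while the GTX 460M pairs a 192-bit (24-byte) bus with 2.5 GT/s effective GDDR5, giving 24 × 2.5 = 60 GB/s. The narrower but faster-clocked bus actually wins here.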

    I won't argue about names or anything like that, and it is completely true that one buys whatever works for them. No one is denying that the 460M is capable; that's not the discussion. Hell, I was actually expecting more out of Nvidia when ATI released the HD 5870, mainly because on paper (higher specs than the GTX 280M all around) the 460M alone should have been more than enough to handle ATi, but after researching and seeing it in action, I guess I was disappointed.

    Hell, I have seen and experienced every single ATI vs. Nvidia war since the unexpectedly powerful ATi Radeon 9700 Pro. I don't have a particular preference, as in "I will never buy Nvidia" or anything stupid like that; I always go for the best bang for the buck I can find. Hell, my previous laptop had a GTX 260M and I loved it.

    This competition is good. Nvidia's 8000 series was phenomenal, and it pushed ATI to create the HD 4000/HD 5000 series to compete and finally bring out a monster in the form of the HD 6970M. The only thing Nvidia needs right now is proper pricing, because the GPU and the performance are there.
     
  48. Yiddo

    Yiddo Believe, Achieve, Receive

    Reputations:
    1,086
    Messages:
    4,643
    Likes Received:
    1
    Trophy Points:
    105
    I totally agree with this, and it's pretty clear, because doubling the bus width of slower RAM means nothing, as shown by the 5870's small GDDR5 bus being a total powerhouse. Many people, however, do not look at the most important things, like pixel/texture fill rate and bandwidth. For instance, the 3GB of VRAM on the GT 445M, which can only handle DDR3: madness for the buyer, but good marketing, because DDR3 is cheap like a budgie.

    I wish I had been able to try out the 8800 GTX, as I was unfortunately bought by the 8700M GTs in SLI (a marketing ploy), but even with all the hardship the 8700M took, it was still a great card and did me well in SLI. The 6970M looks like it will destroy all, although it's about time mobility cards were hitting the 100 GB/s bandwidth mark.
     
  49. mushishi

    mushishi Notebook Consultant

    Reputations:
    137
    Messages:
    238
    Likes Received:
    0
    Trophy Points:
    0
    GPGPU and OpenCL are fun and all, but they are nowhere close to a CPU.

    The CPU has nothing to fear from the GPU ever becoming that popular. The GPU is limited in what it is popularly used for... a GPU can only calculate simple things, and that is its massive constraint. GPUs can't even be used for proper graphics rendering because of their simplicity.

    Look at any article comparing CPU vs. GPU rendering. The difference in the quality and complexity of what a CPU can do is astounding. It makes you wonder how GPGPU ever became so popular.

    This also includes encoding. GPU reviews often benchmark GPU encoding, for both Nvidia and AMD. But frankly, if you want quality and the complexity of using filters, etc., nothing compares to the CPU. Even the lowest settings for x264 will produce higher quality than the highest settings for CUDA encoding.

    So for the 460M and HD 5870M, I would not even consider GPGPU, Stream, or CUDA as a factor in buying.

    UNLESS you are in research and have tons upon tons of simple calculations to crunch, CUDA/Stream/OpenCL really shouldn't be a factor in your GPU buying choice. For Photoshop, rendering, etc., a powerful Sandy Bridge will take care of it easily.

    Lastly, CUDA will never be popular. CUDA will never be mainstream. It never became mainstream or popular, and with OpenCL now here, it never will. Nvidia would be smart to drop it altogether and focus all their attention on GPGPU via OpenCL instead. There are just too many AMD graphics cards out there, slowly making their way into professional use, for CUDA to ever become mainstream. Especially since, as things stand, for many applications the CPU is still the reigning king for final production, where quality is more important than speed.
     
  50. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,088
    Trophy Points:
    431
    Amen.

    Now I must get back to work!
     