I see a lot of people choosing between these two in "what should I buy" threads, and many actually choose the 460M. Why? Is there some stability problem with the 5870? Because its performance is around 10-25% higher.
Let's say Metro 2033 won't even run smoothly on "high" with the 460M; it will with the 5870. I am close to saying the 5870 is the GPU king for its price at the moment. It can run EVERY game on high and it's fairly cheap. Am I missing something?
Mafia 2 Ultra settings:
460M - ~33 FPS (33/32.9)
5870M - 41.1 FPS
+19.71% to the 5870M
Starcraft 2 Ultra settings:
460M - ~34 FPS (31.6/36)
5870M - 37.2 FPS
+8.6% to the 5870M
Metro 2033 Ultra settings:
460M - ~9 FPS (9/9.6)
5870M - 10.6 FPS
+15% to the 5870M
*neither is playable
Metro 2033 High settings:
460M - ~27 FPS
5870M - 33 FPS
+20% to the 5870M
CoD MW2 Ultra settings:
460M - ~44 FPS (41.3/47)
5870M - ~50 FPS (48.7/50.7/51.1)
+12% to the 5870M
Battlefield: Bad Company 2 Ultra settings:
460M - ~30 FPS (29/30)
5870M - ~32 FPS (30.9/32.7/33.1)
+6.25% to the 5870M
Anno 1404 Ultra settings:
460M - 44 FPS
5870M - 81 FPS
+45.8% to the 5870M
SIMS 3 High:
460M - 61 FPS
5870M - 81 FPS
+24.7% to the 5870M
Crysis GPU Benchmark Ultra:
460M - ~13 FPS (12.9/13)
5870M - 17.1 FPS
+23.97% to the 5870M
DIRT 2 Ultra Settings:
460M - ~42 FPS (41.6/42.9)
5870M - ~31 FPS (29.6/30.9/33.8)
+26% to the 460M
Based on benchmarks from http://www.notebookcheck.net/Computer-Games-on-Laptop-Graphic-Cards.13849.0.html
-
1. CUDA
2. PhysX
3. No 5870M (Mobility Radeon) option for the laptop
4. Fewer problems (ATI cards often have issues with new games that only get fixed in the next driver update)
5. Better picture quality (I can see a very big difference between ATI and Nvidia cards, especially for lighting; light spots rendered by Nvidia cards look much more natural and smoother). -
-
4. I actually had a couple of BSODs on my other Nvidia laptop; no BSODs on my ATi laptop.
5. Nope, it's been proven and shown that ATi has better picture quality. I have compared my other (Nvidia) laptop with my ATi one; ATi wins by miles. -
I've always found ATi/AMD has worse video drivers, which can be a pain to update, though I've never received a BSOD from either ATi/AMD or Nvidia drivers.
5. Where has it been proven that either makes games look better? -
The 460M vs Mobility 5870 horse has been beaten to death, several times over. Why restart the debate, when both chips are already phased out?
-
first of all, what is good picture quality?
better contrast?
better saturation?
better LCD panel?
brighter?
no artificial color boost?
etc...
In my experience with this subject over the last 10 years, ATI tends to provide better contrast and saturation, making the picture punchier at default settings than NVIDIA.
After you calibrate both cards, nobody can tell the difference.
I really wish this topic about ATI vs. NVIDIA picture quality would die. -
And yes, these cards are still really common.
-
masterchef341 The guy from The Notebook
I'm not English, but that's bollocks. Both are fine. PS: you need a source to make a claim like this, and an nVidia blog post is not sufficient.
-
GapItLykAMaori Notebook Evangelist
I'm pretty sure the percentage increases are wrong, e.g. 61 FPS for the 460M vs 81 FPS for the 5870M should be +32.8%, not 24.7%.
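For reference, here is a quick sketch of where the two figures come from (plain Python, just illustrating the arithmetic with the Sims 3 numbers from the first post): the OP divided the gap by the higher result, while the usual convention divides by the card you are comparing against.

```python
# Two ways of expressing the same FPS gap, using the Sims 3 numbers from the
# first post (61 FPS on the 460M, 81 FPS on the 5870M).
fps_460m, fps_5870m = 61, 81
diff = fps_5870m - fps_460m

gain_over_460m = diff / fps_460m * 100    # ~32.8%: "the 5870M is 32.8% faster"
share_of_5870m = diff / fps_5870m * 100   # ~24.7%: the figure listed in the OP

print(f"{gain_over_460m:.1f}% vs {share_of_5870m:.1f}%")  # 32.8% vs 24.7%
```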
-
Do CUDA and PhysX really do much? I mean, sure, there was Mirror's Edge, but since then I don't think I've seen a shining example to buy into PhysX; it's not like BF3 or Crysis 2 support it heavily, or even at all!
-
Since physics runs on the CPU 99% of the time, it's absolutely not relevant.
-
jenesuispasbavard Notebook Evangelist
I have a GTX 560 Ti in my desktop and a Radeon HD 6650M in my notebook and I see absolutely no difference in visual quality on the _same_ monitor.
And yes, until OpenCL comes around, CUDA will always be my preference for things like SETI@Home. -
-
HD5870m is a better card, but that does not stop the 460m from being a great alternative.
-
Correct me if I'm wrong, but my understanding from reading a lot of articles and benchmarks on Nvidia and ATI cards is that even though the ATI cards reach a higher maximum frame rate than their Nvidia counterparts, the Nvidia cards sustain a more consistent average frame rate and have a higher minimum frame rate than the competing card from ATI. If this is true, then Nvidia's cards have the power where it's needed, because in the end it doesn't matter if you can hit 100 frames per second if you still keep dropping below 20 fps on a frequent basis.
-
darth voldemort Notebook Evangelist
-
Minimum fps is usually impacted by the CPU the most in my opinion.
-
I have both Nvidia and ATI top-of-the-line cards... I love them both. But my ATI is newer, therefore I conclude I love ATI more. Aww... just kidding, I love those two. -
I own a laptop with a 5870M so by that fact alone it is FAR superior to the 460M.
-
Through the use of CUDA and PhysX, software developers can offload calculations from the CPU and lighten its burden by transferring them to the more powerful Nvidia GPU. This uses the same principle as supercomputers and parallel processing.
We no longer measure a system's power by bits, as in 16, 32 or 64 bits, but by how many cores it has. Intel and AMD have implemented the principle of supercomputing and parallel processing in their high-end CPUs by adding multiple cores (dual, quad, etc.). These cores are akin to many tiny CPUs running and cooperating in parallel, the same way supercomputers use many CPUs at once.
By making the GPU programmable and able to essentially do what the CPU is supposed to be doing, it divides the job and takes stress off the CPU, making the system run more efficiently and get things done faster, similar to two workers cooperating on a job instead of one worker bearing all the load while the other slacks. Nvidia GTX 460 man! FTW! -
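To make the offloading idea concrete, here is a minimal, hypothetical sketch using Numba's CUDA bindings in Python (it assumes the numba package and a CUDA-capable Nvidia GPU with working drivers; it illustrates the general GPGPU principle, not how PhysX itself is implemented):

```python
# Hypothetical sketch: offload a data-parallel calculation from the CPU to an
# Nvidia GPU with Numba's CUDA support. Assumes numba + a CUDA-capable GPU.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)              # global index of this GPU thread
    if i < out.size:              # guard: the launch grid may exceed the data size
        out[i] = x[i] + y[i]      # each thread handles one element

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

d_x = cuda.to_device(x)           # copy inputs to GPU memory
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](d_x, d_y, d_out)  # the work runs on the GPU

out = d_out.copy_to_host()        # fetch the result; the CPU was free in the meantime
```

The CPU only prepares the data and launches the kernel; the per-element math runs on the GPU's many cores in parallel, which is the division of labor described above.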
By your logic, ATi is also programmable. Look at AMD Stream (=CUDA) and AMD EyeSpeed (=PhysX). When I installed AMD Stream, I actually stopped having drops during bullet collisions, etc., and I got a HUGE increase in FPS. So apparently there is some sharing between GPU and CPU once AMD Stream is installed, because there were physics "moments" where the GPU had some sort of "spikes" in usage.
So yeah, overall the 5870M is still better than a GTX 460M. -
-
According to this article from notebookreview.com:
PAX East 2011: Alienware's M17x Notebook Steals the Show
Alienware's new M17x equipped with the new AMD 6870M (which is essentially a 5870) is considered their low-end SKU, compared to their mid-range SKU that comes equipped with the Nvidia GTX 460M, so that in itself speaks to where these GPUs rank.
In fact, if we compare the 3DMark06 benchmarks between the 5870 and 460M on notebookcheck.com, the 5870's average score of 12779 is beaten by the 460M's average score of 13196.
Finally, concerning Nvidia's CUDA and AMD's Stream: CUDA is more widely adopted by the development community than the newer, less-used AMD Stream. As we all know, it does not matter which version is superior; which version has more support is what actually counts. -
Who cares about 3DMark06? And what CPU were they using? If you compare the 5870 and 460M in 3DMark 11, the 5870 scores much higher on average. -
The NotebookReview article also states that the top-end model for the announced Alienware is a 6970M with 2GB VRAM, although that does not seem like much of a lead, as it's still only comparable to Nvidia's current top-end 485M.
-
5870 vs. GTX 460m ...... Who cares ... ?
Both are good cards! -
3DMark06 scores are useless by themselves because the CPU score generally adds too much. If you go by the individual SM2.0 and SM3.0 scores, you get a better picture. As for Vantage, the GPU score also reflects the extra muscle of the HD5870/HD6870.
It has been debated to death already.
As for CUDA and Stream, CUDA is more widely adopted, but that does not mean Stream is useless or that it does not bring similar benefits. In the end, AMD's marketing and strategy with Stream don't match Nvidia's popularity. That hardly matters when you are focusing on a gaming GPU, but if you heavily use CUDA apps etc., then of course an Nvidia offering will be the best choice. -
Mr_Mysterious Like...duuuuuude
Can anyone explain how someone can think that a 5870m is MID-RANGE???
Mr. Mysterious -
Compared to desktop GPUs, the Mobility 5870 is considered mid-range.
-
Mr_Mysterious Like...duuuuuude
Oh well we're not in a desktop forum now, are we?
Mr. Mysterious -
Mr_Mysterious Like...duuuuuude
jacob808 said: ↑On the contrary, CUDA means a lot toward enhanced gaming performance in the future, when the technology gets more "mature" and developers learn how to take advantage of it...
Are you serious?????
Mr. Mysterious -
When CUDA becomes relevant, these cards will be has-beens. Talking about console games and games that are coded for a specific platform/hardware has zero to do with PC games.
Comments like ryzeki made are just ignorant. -
GapItLykAMaori Notebook Evangelist
mrmysterious66 said: ↑Are you serious?????
Mr. Mysterious -
The R1 M17xs had so many problems with Nvidia drivers that they opted to start selling them with 4870s, even though the mobo was Nvidia. People keep perpetuating the "ATI has crappy drivers" lie even though it has been false for a decade.
-
jacob808 said: ↑and what's the opposite of upgrade? need I say more?
An HD5870M being faster, and cheaper, does not help sell a 460M other than as a forced "upgrade". It is not an upgrade at all; you get a few extra options with overall weaker performance.
jacob808 said: ↑On the contrary, CUDA means a lot toward enhanced gaming performance in the future, when the technology gets more "mature" and developers learn how to take advantage of it by implementing it more in gaming software.
Having the GPU compute more complex tasks that would traditionally be done on the CPU, giving the CPU more headroom for other tasks such as AI, and tapping into the GPU's unused potential would theoretically give us huge advances when the whole system is firing on all pistons.
Ironically, an example of this was the console hardware war between Sega's Saturn and Sony's PlayStation. The Saturn's advanced architecture favored the principle of parallel processing by using 3 powerful CPUs, as opposed to the PlayStation's 1 giant CPU working in conjunction with its GPU. The PlayStation won the war not because it had the superior hardware, but because it had more support from the development community; yet as the years went on, the Saturn's exclusive videogame software leapfrogged software produced on the PlayStation, thanks to programming advances made by Sega's in-house development teams that could not be duplicated on the PlayStation hardware. It's very similar to CUDA, which has the potential to transform the Nvidia GPU into another monster CPU to complement the real CPU, especially if the CPU is a quad-core, effectively making it a supercomputer with 5 CPUs working in parallel.
As GPGPU functions become more developed, OpenCL and other standards will take over, and proprietary technologies will hardly be used.
As for the not-so-relevant Sega/Sony comment... Sega failed vs. Sony not only because of price, but because Sega continuously released upgrades and add-ons for their consoles, along with extra consoles. The Saturn was hardly in shape for the battle. Additionally, Sony welcomed developers with open arms, making it cheaper to develop for them, without restrictive contracts akin to Nintendo's back then.
One of the main differences between your example and the current situation is the fact that both companies have had GPGPU capabilities for a while. In fact, the very first company to develop programmable shaders was ATi, with its Xenos GPU for consoles and the R520 core counterpart for PC. Since then, both companies have had unified programmable shaders and became capable of performing different tasks. -
I have always been an Nvidia fanboy and was easily bought by the whole PhysX thing; that was up until I had the chance of picking up my G73JH on the cheap, with my first ATI card in it.
I personally would rather have had the Nvidia, because you always stick to what you know works for you. There is also the argument of temps versus power: the power the 5870 throws out is second to none, but it comes at the price of temperatures near the boiling point of water.
When it comes down to it, both have their pros and cons and end up cancelling each other out, because they are both very, very good cards. I will admit that having the JH has warmed me towards ATI, but would I still rather have the GTX 460M knowing it is maybe less powerful? YES.
Why? There is no logical reason, only that it sounds cool. Remember the Voodoo3 3000? Sounds badass.
At the end of the day, all of this makes no difference; as long as it's got GDDR5 stamped on the label, you can do no wrong. -
Dallers said: ↑Also there is an argument of the temps against the power... At the end of the day all of this makes no difference; as long as it's got GDDR5 stamped on the label, you can do no wrong.
Temps are wholly dependent on the laptop maker. My MSI cools a heavily overclocked HD5870M to under 80C, even at 900/1035 clocks.
As for memory, remember that memory type is not the only thing that matters... memory type, speed and bus width together determine the memory bandwidth, which can help or become the bottleneck at certain resolutions. Sadly, you can get something like 128-bit GDDR5 at average clock speeds vs. 192-bit GDDR5 at even lower clock speeds.
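To put rough numbers on that point, here is a small sketch (Python, with made-up clock figures rather than the real 460M/5870M specifications) showing how a wider bus can out-deliver a higher memory clock:

```python
# Rough memory-bandwidth arithmetic for two illustrative GDDR5 configurations.
# The clock figures are hypothetical, not the actual 460M / Mobility 5870 specs.
def bandwidth_gb_s(bus_width_bits: int, effective_rate_mt_s: float) -> float:
    """Bandwidth (GB/s) = bus width in bytes * effective transfer rate."""
    return bus_width_bits / 8 * effective_rate_mt_s * 1e6 / 1e9

print(bandwidth_gb_s(128, 3600))  # 128-bit GDDR5 @ 3600 MT/s -> 57.6 GB/s
print(bandwidth_gb_s(192, 3000))  # 192-bit GDDR5 @ 3000 MT/s -> 72.0 GB/s (wider bus wins)
```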
I won't argue about names or anything like that, and it is completely true that one buys whatever works for them. No one is denying that the 460M is capable; that's not the discussion. Hell, I was actually expecting more out of Nvidia when ATI released the HD5870, mainly because on paper (higher specs than the GTX 280M all around) the 460M alone should have been more than enough to handle ATi, but after researching and seeing it in action, I guess I got disappointed.
Hell, I have seen and experienced every single ATI vs. Nvidia war ever since the unexpected, overly powerful ATi Radeon 9700 Pro. I don't have a particular preference, as in "I will never buy Nvidia" or anything stupid like that; I always go for the best bang for the buck I can find. Hell, my previous laptop was a GTX 260M and I loved it.
These competitions are good. Nvidia's 8000 series was phenomenal, and it helped ATI create the HD4000/HD5000 series to compete, finally bringing a monster in the form of the HD6970M. The only thing Nvidia needs right now is proper pricing, because the GPU and the performance are there. -
GPGPU and OpenCL are fun and all, but they are nowhere close to a CPU.
The CPU has nothing to worry about from the GPU ever becoming that popular. GPUs are limited in what they are popularized to do... a GPU can only calculate simple things, and that is its massive constraint. They can't even be used for graphics rendering properly because of their simplicity.
Look at any article on CPU vs. GPU rendering. The difference in quality and complexity of what a CPU can do is astounding. It makes you wonder how GPGPU ever became so popular.
This also includes encoding. GPU reviews often cover the performance of GPU encoding, for both Nvidia and AMD. But frankly, if you want quality and the complexity of using filters etc., nothing compares to the CPU. Even the lowest settings for x264 will have higher quality than the highest settings for CUDA encoding.
So for the 460M and HD5870M, I would not even consider GPGPU, Stream or CUDA as a factor in buying.
UNLESS you are in research and have tons upon tons of simple calculations to be made for number crunching, CUDA/Stream/OpenCL really shouldn't be a factor in your GPU buying choice. For Photoshop, rendering etc., a powerful Sandy Bridge will take care of that easily.
Lastly, CUDA will never be popular or mainstream. It never became mainstream, and with OpenCL now here, it never will. Nvidia would be smart to just drop it altogether and focus all their attention on GPGPU with OpenCL instead. There are just too many AMD graphics cards out there, slowly making their way into professional use, for it to ever become mainstream. Especially as is, for many applications the CPU is still the reigning king for final production, where quality is more important than speed. -
mushishi said: ↑GPGPU and OpenCL are fun and all, but they are nowhere close to a CPU...
Now I must get back to work!
5870 vs 460m. Some data from notebookcheck.
Discussion in 'Gaming (Software and Graphics Cards)' started by Lieto, Mar 4, 2011.