Nvidia 580M vs. Nvidia 485M
Discussion in 'Gaming (Software and Graphics Cards)' started by hellhoundpro, Jul 10, 2011.
-
Is it worth $200?
At this point I'm finding it hard to get gaming benchmarks for both, because they generally show up in different laptops and are tested on different games. I know the results from benchmarking software aren't that different, but I want to see the true performance difference in games.
Anyone got links or opinions to share?
-
-
The performance difference is only like 5-8%.
I would stick with the cheaper 485m. -
Meaker@Sager Company Representative
I know, a 6970M with an SSD is even better value
-
A 6990M at the same price as a 6970M is even better
-
I'd prefer the 485M and save that $200 for something else. The performance improvement is not as big as the jump from a 560M to a 485M, which costs $295 on some laptop models, and that's the more economical upgrade IMO.
-
But what about heat? And what about the power draw?
-
I own an X7200 with dual 485s. I am waiting until they release the 585M cards before I upgrade. If you are buying new, you can either wait, or for $200 more I would buy the better video cards. I look at it this way: you are paying oodles for the laptop anyway, so why would you skimp on the video cards? For me it's different, because it would cost me $1600 to upgrade. A very good reason to wait, lol. Also, the performance difference between the 485 and the 480 cards was very pronounced when they came out. Just my thoughts.
-
There will be no 585M.
-
Meaker@Sager Company Representative
-
Really? I think that because Nvidia isn't coming out with Kepler for desktops until next year, there will probably be at least one more mobile graphics card in the 5xx series, and it will have to have more CUDA cores.
-
Um.... no. The GTX 570 (480 cores) is the next chip in line, and there's no way that Nvidia can fit that 219W TDP into the mobile space.
It would be so severely downclocked, it'd be the 480M all over again. Maybe even worse. -
Meaker@Sager Company Representative
-
Maybe a dual-GPU solution, using something like a GTX 550 or something, I dunno, but since those Kepler cards won't be coming until late next year, they will need to cook up something.
-
Even the X7200 wouldn't be able to handle that.
-
Meaker@Sager Company Representative
The problem with creating a dual-chip solution: even IF you fit it onto an MXM card AND make it faster than the single chip, you get diminishing returns, so when you put two of those cards together you are effectively running quad SLI vs. dual SLI of the higher-end chip, and that's only ever going to end badly for the quad setup.
All existing chips are in use already, so I don't even know which one you would use anyway, as the current 580M already has twice the power of the 560M and you would need beefier chips than that to get ahead.
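A quick sketch of that diminishing-returns argument (the performance units and the 0.7 scaling factor are illustrative assumptions, not benchmarks):

```python
# Rough model of multi-GPU scaling: each chip beyond the first adds only
# a fraction of its standalone performance. All numbers are illustrative.

def effective_perf(chip_perf, num_chips, scaling=0.7):
    """Total throughput when each extra chip contributes only
    `scaling` times its standalone performance."""
    return chip_perf * (1 + scaling * (num_chips - 1))

# Hypothetical units: a 580M-class chip = 100, a 560M-class chip = 50
print(effective_perf(100, 2))  # dual SLI of the big chip   -> 170.0
print(effective_perf(50, 4))   # quad SLI of the small chip -> 155.0
```

Even granting the small chips twice the count, the quad setup still trails the dual setup once SLI scaling is factored in.
-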
Maybe they'll refresh the GF100 (480M) and make something like a 590M; I can't really see Nvidia not releasing another mobile Fermi GPU before Kepler arrives.
I could bet that we'll see another Nvidia Fermi card before the end of the year. -
Meaker@Sager Company Representative
They are not going to release a GPU that performs worse, and that is what a GF100 refresh would be.
That, or we get MXM-C. -
I disagree. I imagine they will at least clock it higher, like they did for the 580M. Due to the competitive nature of the market, they have to release new cards fairly often. I'm going to venture that they probably held back a little on the 580M, so the 585M will be about a 20% increase in performance when they release it. That would give them time to release their new architecture.
-
Actually, they didn't just overclock a 485M; they lowered the voltage from 1.00 V to 0.87 V, which means that if you could raise it back to 1.00 V you would really have something: major overclockability, something like 30-40%. Another forum member has his overclocked to 820 MHz on the core, and the GTX 560 Ti runs at 822 MHz, so basically it's running at the clock rate of the desktop card.
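For a back-of-envelope sense of what that overclock costs in power, dynamic power scales roughly with frequency times voltage squared. A sketch (the 620 MHz baseline is the 580M's stock core clock; treating the power as purely dynamic is a simplifying assumption):

```python
# Rough dynamic-power scaling: P is proportional to f * V^2.
# Baseline is the 580M's stock 620 MHz core at 0.87 V.

def relative_power(freq_mhz, volts, base_freq=620.0, base_volts=0.87):
    return (freq_mhz / base_freq) * (volts / base_volts) ** 2

print(relative_power(820, 1.00))  # ~1.75 -> the 820 MHz / 1.00 V overclock
                                  # draws roughly 75% more power than stock
```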
-
Meaker@Sager Company Representative
Yes, but the 560 Ti has quite a jump in power consumption too.
I doubt you want to be drawing 125 W or more from the slot. -
He told me he was pulling 224 watts when it was running like that, and the power brick is only 180 watts.
-
Meaker@Sager Company Representative
Which means the brick was maxed out, assuming it's 80% efficient.
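The arithmetic, taking the 80% efficiency figure as the assumption it is:

```python
# Wall draw times brick efficiency gives the power actually delivered.
wall_watts = 224
efficiency = 0.80                 # assumed brick efficiency
print(wall_watts * efficiency)    # 179.2 W, right at the 180 W rating
```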
-
Yeah, pretty much. I'm assuming he was reading the input wattage as opposed to the output.
-
Meaker@Sager Company Representative
If that was the output, his PSU would explode.
-
I know; I am running two 300 W power bricks with an adapter so they feed into my X7200.
-
Meaker@Sager Company Representative
It's very hard to get an even power draw across the two bricks, though.
-
These high-end mobile GPUs are already powerful enough.
What's the point in raising voltages for a higher OC and risking pushing the temperatures to dangerous levels?
I would sooner undervolt the gpu and OC by about 10 to 20% (which should be doable).
Lower temps, and still higher clocks. -
Meaker@Sager Company Representative
No way can cards at this level undervolt and OC that much.
Mobile cards are still a level of play behind desktops, and at FHD resolution with AA/AF that can make a difference. -
So why was I able to undervolt my 9600M GT GDDR3 (albeit a mid-range card) and OC it by about 20%?
I suspect high-end mobile GPUs would be able to do something similar.
As for the high-end mobile GPUs being a level of play behind desktops... well, OC-ing them by 30% won't really make that big of a difference to begin with, and you risk your laptop by overvolting them too much.
At 1920x1080, I highly doubt that one seriously needs AA that much. -
Meaker@Sager Company Representative
9600M GT is 4 generations old! From a completely different era.
Also I can really tell the difference!