I'm currently thinking of getting a Sager NP9150, and I'm split between getting the AMD or the nVidia. The AMD has more power, but from what I've heard AMD has more problems. The nVidia has no problems and better drivers, but is just a rebranded 580M. Which should I get? I will be doing some CAD work, so which one will be better suited for that? I know this has been asked before, but the latest review I can find is from mid-May, and they just shipped recently, so if anyone has any experience with them I would much appreciate the advice. Thanks for the help!
-
I believe all the new batch of 7970Ms are now working correctly, so you should be fine there.
I am unsure on CAD but for gaming I would definitely go 7970M over the 675M.
Hopefully my 7970M arrives soon and I can give you some feedback on the drivers. -
Dunno about CAD, but if you do any gaming the 7970m will blow the 580m away. I would go for the 7970m.
-
Karamazovmm Overthinking? Always!
If you are doing some CAD work you should be fine.
-
I am pretty sure CAD takes advantage of CUDA but not OpenCL. You want to compare the compute performance of the two. I think the 675m is better suited for that. However, only go for it if you are going to be doing CAD and nothing else. If you are going to be gaming, at all, get the 7970m.
-
What they said, but why not also consider the 680m?
-
The GTX 680M is the better choice, especially for gaming.
-
Meaker@Sager Company Representative
Except the 680M sucks at compute.
-
I see, I didn't realise the Kepler compute benchmarks were that bad until I googled it; I was always looking at the gaming benchmarks.
In that case, ignore my suggestion. I do believe I've heard the Quadro cards are the ones to look to for pure compute use? But they're a huge notch pricier -
Nvidia is trying to keep their gaming GPUs from competing with their workstation GPUs, so they reduced the compute performance of the GeForce series. They will probably come out with a newer Quadro series.
Edit: Even for gaming, unless you don't mind paying $300 more, the 7970M will keep you good for a couple of years. -
That's a really weird marketing strategy though, especially since the 7970M is already as good as or slightly better than the 680M, and cheaper to boot. Why not add something extra to tip people over the edge? At least they'd win over the engineers who love gaming.
-
Still, I will wait and buy the 680M.
I LOVE NVIDIA
and I had a bad experience with ATI GPUs. For me, smooth gaming is more important than 3DMark scores.
-
The "something extra" will add transistors, power consumption, heat and cost for very little gaming performance increase. It's not like Nvidia could have had the compute performance for free and just decided not to take it. The reason that the gtx 680 is slightly more efficient than the hd 7970 for gaming is because Nvidia crippled the compute performance while AMD actually increased it on their side.
That said, the 680m should still be just as fast if not faster than the 675m for compute. It won't be nearly as efficient, but it'll keep up just through sheer power. -
You can game on the Quadro as well. Because the CUDA compute performance of the GeForce was so good with Fermi, their sales of the Quadro (which gives them more profit) fell. That is why they WOULD WANT TO make the move (I am not sure; I don't have enough experience to be on a board making these decisions).
-
If desktop cards are any indication, as well as personal experience, expect a 30% hit in some apps; in FP64 even more.
I agree Fermi's CUDA performance was great, but it's awkward now that many apps are getting rid of CUDA. Adobe gave up CUDA exclusivity for OpenCL in CS6; SolidWorks, Maya, 3ds Max and others are transitioning away as well -
How is that picture relevant? The 680M is based off of the GTX 670, while the 580M is based off of the GTX 560 Ti. The 670 matches or even beats the GTX 570 (which, obviously, is faster than the 560 Ti) in most benchmarks.
-
It shows the performance variance of Fermi vs Kepler in some compute apps, OpenCL-based specifically (not games, CUDA, or Nvidia-optimized).
If the desktop 580 is 30% faster than its Kepler replacement (GTX 680), why should we expect anything different between a 580M/675M and a 680M (demonstrated numerous times in Photoshop and Premiere, for example)?
In the desktop area it's been common knowledge that the new Kepler core can't compete with its Fermi predecessors in many GPGPU apps. With all the confusion, buyers REALLY need to know what their software supports. -
SlickDude80 Notebook Prophet
Kepler is very, very weak in compute... Nvidia has pushed this off to Quadro. Kepler is a gaming card, and it's not really meant for pro work.
-
Did you even read what I wrote? Or look at the benchmarks? The 680M uses the core of a desktop card (the GTX 670) that's a tier above what the 580M is based off of.
Your reasoning is like saying "because the GTX 580 has better compute than the GTX 680, the GTX 560 must also have better compute than the GTX 680." It just doesn't make any sense. -
I actually have to laugh that people even consider the 675M, which is a mid-range GPU compared to the 7970M, and a power-hungry card at that. I wouldn't be surprised if 7970Ms in CrossFire took only 30-50W more than a single 675M, and 7970Ms in CrossFire are over 3x more powerful than a 675M.
Funny thing is, the GTX 680M may take fewer watts to run than a GTX 670M, let alone a 675M. -
No, I'm basing it entirely on the architectures. Fermi > Kepler in OpenCL and some other GPGPU applications.
OK, the 560M in my daughter's ASUS is old tech; her boyfriend's Alienware has a 660M.
The 660, by the numbers, should be faster. We tried this experiment for the forums at Creative COW, and the 560 performs renders in Photoshop CS6 roughly 40% faster than the newer card. It all comes down to which cores and architecture you use.
Look in your own link: the second graph shows the OpenCL performance of the 580 over the 680.
OK, another source, now that FP64 is getting popular:
http://vr-zone.com/articles/nvidia-kepler-gpus--back-to-games-away-from-compute-/15332.html -
If you read any articles about GPGPU, you'll see that Fermi is better at certain types of non-gaming tasks than Kepler. Of course, there's the relative performance difference between the cards themselves: it's likely that a 680M will outpace a 520M even in OpenCL, but my gut tells me that the GTX 675M will win over the 680M for floating point and OpenCL. KCTech is right.
Whether you need the extra performance afforded by Fermi in things like OpenCL is a whole other story; most people likely don't. The 680M is still a very respectable card outside of gaming as well; it's just that Fermi does some things better in GPGPU scenarios. Being 40% faster in tasks that only take a few minutes won't really make a difference, but if you spend hours rendering stuff, then the drop in gaming performance from staying with Fermi might be worth it for the gains in other tasks. -
SlickDude80 Notebook Prophet
-
You just don't get it. Here's another analogy: Sandy Bridge is a faster architecture than Nehalem, yet a Nehalem i7-920XM will still be faster than a Sandy Bridge i3-2330M. How can that be? Think about it, and apply that reasoning to the context of what this thread is about.
The 680M makes up for its lack of compute efficiency through brute force, through sheer CUDA core count. -
SlickDude80 Notebook Prophet
except that everyone now is moving away from CUDA and going OpenCL... the latest was Adobe CS6 -
As are Avid, SolidWorks, Maya, 3ds Max, Revit, MASSIVE, and that's just what I know of and use. CUDA has been dying in the pro app market for the last couple of years, as it's non-standard and Nvidia-only. In these apps, OpenCL and FP64 are what we care about. As I said a few posts up:
we understand your analogies, but for many pro apps it's flipped. Brute force isn't able to keep up due to other limiting factors (the current GeForce core). -
You don't get it either. Show me proof that the 680M is consistently slower than the 675M for compute. There are no benchmarks yet? Then show me proof that the GTX 670 is consistently slower than the 560 Ti for compute.
Or, you know, you could just look at the link I posted. -
SlickDude80 Notebook Prophet
Bro, it's irrelevant. CUDA is dead. Arguing over whether the 680M or 675M is faster in compute is pointless. I understand what you are saying, but as I said, it "doesn't matter" -
And..? "CUDA core" is just what Nvidia calls its processing units. All it means is that the card is capable of CUDA. Both Fermi and Kepler use CUDA cores; it doesn't mean they're restricted to CUDA only. It's like how AMD uses the term "module" for their CPUs... Just a name, nothing more.
-
SlickDude80 Notebook Prophet
We are talking pro apps because the OP wanted to do CAD work. I don't think you understand the context of this thread.
-
That actually makes sense
-
Meh, I just picked up on you saying "If OP wants to run professional apps, get quadro or get ATI...or get Fermi". If Kepler is out of the running, why would he get Fermi?
CUDA may or may not be dead, but my point is that if the OP is considering a 675M for its compute performance, then it's not wrong to consider the 680M as well. -
SlickDude80 Notebook Prophet
You're right, I shouldn't have typed that at all...
Let me fix it:
"If OP wants to run professional apps, get quadro or get ATI" -
OK, let's look at that link you yourself posted, for the LuxGPU performance we care about in pro apps.
At the BOTTOM we have the GTX 670 (7000); we go up a little more to the GTX 680 (7800). Now we go UP to the GTX 470 @ 8300, then UP to the 570 @ 9800, then UP to the 580 @ 11400. OK, then we get into AMD, which just kills it anyway.
When a 470, and not even a 560 Ti, beats the GTX 670, there's no hope.
-
-
Yes it does, but given the performance data KCTech mentioned (GTX 580 vs GTX 680 desktop), I'm not sure the increased CUDA core count will be enough. Only benchmarks will really tell, but I'd still say that in certain workload types the 675M has a shot at the 680M.
-
See my annoying post above: the 470 beats the GTX 680, and the 670 of course.
-
Meaker@Sager Company Representative
Kepler gets 1/24th of its FP32 shading power for compute, whereas Fermi gets 1/8th.
That would mean a Kepler chip needs 6 times as many shaders (1/3rd the rate per shader, half the clock per shader) to match the compute of Fermi.
Last time I checked, the 680M did not have over 2000 shaders. -
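Meaker's arithmetic above can be sketched as a quick back-of-the-envelope calculation. The FP64 fractions (1/24 for Kepler, 1/8 for Fermi) are the ones quoted in the post; the shader counts and clocks are ballpark figures for a 675M-class and 680M-class part, not exact specs:

```python
# Rough peak FP64 comparison using the ratios quoted above.
# Shader counts and clocks below are approximate, for illustration only.

def peak_fp64_gflops(shaders, shader_clock_ghz, fp64_fraction):
    # 2 ops/cycle per shader (fused multiply-add), scaled by the FP64 fraction
    return shaders * 2 * shader_clock_ghz * fp64_fraction

# Fermi-style 675M: shaders hot-clocked at ~2x core clock, FP64 = 1/8 of FP32
fermi = peak_fp64_gflops(384, 1.24, 1 / 8)

# Kepler-style 680M: shaders at core clock, FP64 = 1/24 of FP32
kepler = peak_fp64_gflops(1344, 0.72, 1 / 24)

print(f"Fermi-class:  {fermi:.0f} GFLOPS FP64")   # ~119
print(f"Kepler-class: {kepler:.0f} GFLOPS FP64")  # ~81
```

With these assumed figures, the Kepler part has about 3.5x the shaders yet still comes out behind in FP64, which is the "needs 6 times as many shaders" point: per shader at the same core clock, Fermi does (2x hot clock) x (1/8) = 1/4 of an FP64 op per FP32 slot, versus Kepler's 1 x (1/24) = 1/24, a 6x gap.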
Well, the 680 was never meant to be a GPGPU performer anyway, but a pure gaming GPU.
Doesn't look so promising:
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/17 -
Meaker@Sager Company Representative
Yes, the 680M and 680 are both middle-size Nvidia chips.
The GK110 may only go into Quadro and Tesla cards as it is, lol.
I'd hate to think of the clocks on a 7-billion-transistor chip in a notebook too ^-^ -
Really? You cherry-picked one benchmark and ignored all the ones where the GTX 670 performs favorably. The funny thing is, the GTX 470 actually beats the 560 Ti in Lux, so your last sentence is hilarious.
And like SlickDude said, it doesn't matter. The 7970M is far superior in compute anyway. Dunno why you're trying so hard to make your argument seem valid. -
Meaker@Sager Company Representative
How is it cherry picking when the review writer states the 680 gets a good result in only one test!?
-
So what have we all learned so far?
-
I need a drink and another large NAS
-
Karamazovmm Overthinking? Always!
I'll buy you a drink. Colorado Demoiselle, or do you want the Indica? -
Colorado, but I'm much more of a Crown and 7 girl.
-
Karamazovmm Overthinking? Always!
will surely add that to my must drink beers -
It's not beer. Crown Royal rye whiskey and 7-Up on the rocks.
-
Karamazovmm Overthinking? Always!
My Google skills are terrible. Is it a single or a mix? -
^ So are your flirting skills. JK JK. Anyway, would the OP be better off with the 675M, the 7970M, or the 680M? Hmmm?
-
For basic AutoCAD work it makes no difference at all; your biggest limitation is the first two cores of your CPU. For more advanced stuff such as SolidWorks or other big CAD apps, either the 7970M or the 675M, but ideally the 7970M or a Quadro; GeForce CAD drivers have been a nightmare lately.
Update: AMD 7970M vs nVidia 675M
Discussion in 'Hardware Components and Aftermarket Upgrades' started by monkhm, Jun 13, 2012.
