I'm fairly certain that ambient occlusion and TXAA are Nvidia-exclusive technologies, as is PhysX. I'll be glad to be proven wrong. Kind of sounds like we are going to be paying roughly $100 per technology, haha!
-
I wanted to see what effect it has on 3.0, and the other test would be 3DMark 11,
since this is where it counts most. This is the reason Vantage was a good bench: you can't readily fool it into thinking something different. Turn LOD or tessellation on and off and the score is still the same. Same as if you ran one core or eight cores: the GPU score is still the same, with no influence from tessellation being off or from more cores. -
-
Yes, and both HBAO and SSAO seem to work just fine in BF3 on my ASUS laptop w/ ATI 6570M!
-
As I haven't experienced AO before, I'd like to see it for myself on my 7970M.
Can anyone else confirm whether AMD users receive TXAA? -
Nvidia has the feature of driver-enabled AO, which forces it in any designated game, regardless of whether said game has an AO option in its graphics options menu.
-
OT:
Can anyone link me to a page explaining MLAA and SSAA?
Or does someone know the differences between these and the combination of FXAA + MSAA, i.e. TXAA?
All I get is the Master Locksmith Association Australia and the Sporting Shooters' Association Australia. -
-
Reading this thread from the beginning is like watching Groundhog Day.
I think I've read approximately 50 pages of the same people reiterating the same speculation and self-praise that they've already given us ten times in this thread.
I hope the 680M is awesome, so I can regret my 7970M purchase but have something stunning to upgrade to.
I'm getting tired of having to read 10+ pages every day looking for real info, only to read the same posts as the day before, reiterated in slightly different words by the same people. And I'm sure I'm not the only one.
Hats off to those of you who put in the time to give us real numbers as soon as humanly possible. Slickdude, I like you.
-Tristan -
Yooo vuman, how much did it cost to ship that Sager to Sydney? I'm there too.
-
I didn't order from Sager, I ordered from LogicalBlueOne. I didn't want to risk any warranty issues, so I went with a local reseller; plus they're great with communication and are cheaper than the three other resellers in AUS.
Shipping was $29.99 (3-5 business days). -
With respect to benchmarks, especially games, there needs to be a more comprehensive system of testing and scoring. For example, synthetic benchmarks like 3DMark 11/Vantage etc. should be weighted much less than actual games (which they typically are). However, even among games, SP titles should carry much less weight than popular AAA MP titles which hold hundreds of hours of replay.
For example, if a gamer finishes an 8-hour SP game, even if it's an awesome DX11 title, they likely won't go back to it, or at most once or twice, unless it has a huge modding community behind it. So if the Nvidia 680M gets 55 fps @ 1080p with ultra settings and the AMD 7970M gets 70 fps @ 1080p, the fact that it is an SP game should factor into the score it's given on a performance scale.
On the other hand, if a game like BF3 on ultra @ 1080p (which stresses even high-end desktop cards) gets 60 fps on the 680M in a heavy MP level (multiple FRAPS runs should be done for accuracy) vs 45 fps on the 7970M, then more weight should be placed on a benchmark like this, since it has a large community of players replaying the same title daily, unlike an SP game. This can extend to other MP titles as well; this is just one example. A rough sketch of this kind of weighting is below.
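A minimal illustration in Python: the weight values and the 3DMark scores here are hypothetical, while the fps figures are the ones from the examples above.

```python
# Hypothetical weighted comparison of two cards: synthetic benches count
# least, single-player games more, long-replay MP titles most.
# Weights and the 3DMark scores below are illustrative, not measured.
WEIGHTS = {"synthetic": 0.5, "sp_game": 1.0, "mp_game": 2.0}

results = [
    # (test, category, card_a, card_b)
    ("3DMark 11 GPU score",       "synthetic", 6900, 7000),
    ("SP title, 1080p ultra fps", "sp_game",   55,   70),
    ("BF3 MP, 1080p ultra fps",   "mp_game",   60,   45),
]

def weighted_ratio(results):
    """Weighted average of per-test ratios; >1.0 means card A is ahead."""
    num = sum(WEIGHTS[cat] * a / b for _, cat, a, b in results)
    den = sum(WEIGHTS[cat] for _, cat, _a, _b in results)
    return num / den

print(f"card A vs card B: {weighted_ratio(results):.2f}x")  # ~1.13x here
```

With these numbers the MP result dominates, so card A comes out ahead overall even though it loses the SP test.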
So before arguing about benchmarks, take this into consideration, and then decide which brand really provides the best performance and value for gaming. Of course, there are always feature support, third-party development support and drivers, which also make a huge impact. Most people here just simplify it to overall benchmark % + cost, and that is quite misleading. -
-
Since it's been happening often in this thread lately, I'll leave a general warning: do not bypass the language filter. It's there for a reason, and the forum rules explicitly say not to do it.
Now back to the speculation until we have solid facts. Hoping to see some thorough 680M testing soon, as I'll soon be in the market for a new notebook.
EDIT: since I saw someone ask the question: if you can't delete your own double post for some reason, report it and a mod will gladly remove it for you. -
-
I'm going to have to disagree, since not a lot of people are actually running 3DMark 11 at Extreme settings (1080p); everyone is basing it off the 720p preset.
If you get 60+ in all four tests in that version of the benchmark, then you're golden. I don't see that happening with this gen of mobile cards; maybe next gen, and that's a very big maybe. And that bench taxes your card far more than any game does at this current time. -
Indeed, the 3DMark 11 X score is a true test of a card's power. It's not a full-power stress test, but it shows how capable your card is overall.
-
-
There will always be differences in optimisation between different applications, but you still get a reasonably accurate idea of the raw potential of a graphics card from a benchmark result. -
Kingpinzero ROUND ONE,FIGHT! You Win!
^ I agree that you get a reasonable performance snapshot of the card, but that is really only valid between cards of the same brand.
When comparing these scores with a card in the same segment but from another manufacturer, people should consider that there are a few factors that can create a difference between them.
Speaking of points, a huge difference on the order of 1,000 points should be taken as a huge performance gap, but when you're in the 200-300 range it's not even worth having an argument (see the rough numbers below).
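To put those point gaps in perspective, here is the arithmetic, assuming a baseline GPU score of about 6,500, in line with figures quoted in this thread:

```python
# Rough percentage equivalents of 3DMark 11 point gaps, assuming a
# baseline GPU score of about 6,500 (an assumption, not a measurement).
baseline = 6500
for gap in (200, 300, 1000):
    print(f"{gap:>5} points ~ {gap / baseline:.1%}")
# 200 ~ 3.1%, 300 ~ 4.6%, 1000 ~ 15.4%
```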
What people mostly use these benchmarks for is to brag or flame; there are, thank god, some exceptions in the form of users who post them to help others get an idea.
Just to further explain my point, I've battled a few times against some HD 6990s that supposedly should compete with the GTX 580M (I own the 485M).
I was able to stay ahead of them in some benchmarks but was a bit behind (200ish points) in others.
Bottom line is that in game tests and benchmarks, both cards differ by maybe 1-2 fps, or they don't differ at all.
The same happened with the 6970 vs the 6990, IIRC.
Fact is, in real-life situations you can't be sure that the fabulous 10% (or more, as people say) advantage in benchmarks actually does anything ground-breaking.
In my book, as long as it stays above 55 fps, it's fine. Also, AMD has yet to release an official driver for the HD 7970M, so as I said there's much room for improvement. Nvidia's drivers were already quite mature thanks to the desktop Kepler cards.
I just want to put everything into context, guys; throwing percentages and numbers around doesn't help at all, and it can confuse people more than it helps them.
OK, so BF3 does 60 fps ultra/1080p on the 680M?
Who cares. I'll do Very High with FXAA/AF and 2x MSAA at over 75 fps and call it a day.
That's just an example, but that's how it should really work. -
Any news about the Samsung with the 7970M?
-
The Performance preset is not a very good indicator for graphics cards that are going to be used at 1080p with max settings. There is a reason most reviewer sites nowadays test the Extreme preset in 3DMark 11/Vantage and use extreme tessellation in the Heaven benchmark. Notebookcheck is a joke: they test GPUs at 1280x1024 in Heaven, and test games either at ridiculous settings like 720p with no extra goodies or suddenly at 1080p with every setting on. No middle ground to make the tests more interesting...
A simple example is Crysis 2. At 1024x768, which is about the same resolution as the 3DMark 11 Performance test, the 580M scores 121 fps while the 7970M scores 102 fps. But at 1080p the 580M scores 35 fps while the 7970M scores 55 fps...
You won't hit the GPU (speaking of the 680M/7970M) with a typical load at 720p. You won't load the memory and memory bus enough to see whether they will suffer. What's going to be interesting is seeing how the 680M/7970M perform at Extreme settings, but most importantly in games. Once you crank up to 1080p, 16x AF and 4x/8x MSAA, you will see the difference in hardware reflected in the FPS. -
SlickDude80 Notebook Prophet
From everything we have seen, if you can afford the extra $300 and it's going to let you sleep at night, then get the 680M. I can see that you are already leaning that way. Your laptop will be delayed till early-to-mid July.
As of right now, I'm seeing +/- 5% between the cards, and we have confirmed through many owners how well the 7970M overclocks. We don't know how well the 680M overclocks yet, and it bugs the crap out of me that they put 900 MHz VRAM on these cards, for 3.6 GHz effective.
Would the $300 be better spent on an SSD and some ice cream? -
We should parrot what the tech sites do, and come up with our own suite of standardized testing for GPUs.
I'm willing to help brainstorm. -
Oh yeah, I just watched a vid from SweClockers, and they were told by Nvidia that the turbo boost function has been taken off the 680M, and it runs at a set speed.
Nvidia demonstrerar Geforce GTX 680M, surfplattor med Windows 8 och Tegra - Computex 2012 - SweClockers.com
Here is a link to the segment. -
P.S. You say the 7970M is within +/- 5% of the 680M's performance. Is that with the 7970M at stock clocks or OC'd? -
Meaker@Sager Company Representative
It means that with AA and at higher resolutions, performance would start dropping off more rapidly.
-
P.S. I would really appreciate any recommendation on whether to choose the 7970M or the 680M.
Thanks in advance. Much appreciated. -
SlickDude80 Notebook Prophet
I see this as another 580M vs 6990M situation, like we saw last year. When both cards came out, the PR from both companies was mind-boggling. Each company claimed it had the best card and provided "benchmarks" to prove it.
The bottom line: the 580M proved to be slightly faster, around 5%, but some games like Metro 2033 and Crysis 2 favored the 6990M. Even now, someone on the M17x forum was blown away getting 50+ fps in The Witcher 2 with their 7970M, so I don't trust PR BS.
We are going to see the same situation here with the 680M vs 7970M. We have already seen that both the 680M and 7970M score similarly in the synthetic benches (with the 7970M having a small lead). This is a very good indication that they will score similarly in games, with some games favoring Nvidia and some favoring AMD.
And just because one bench shows a card getting better frames in one game, it doesn't reflect on performance as a whole. So if the 7970M gets faster frames in Crysis 2, that doesn't make it overall faster than the 680M, and vice versa.
And the reason I'm irked that they put 900 MHz VRAM on the 680M cards is that overclocking is what is going to make or break this for me. Nvidia put such low bandwidth on these cards on purpose (900 MHz, 3.6 GHz effective, 115 GB/s). It could be due to heat or power, but who knows. Once we start overclocking the 680M, we may find that the bandwidth, or lack thereof, holds the card back. It may not be an issue at stock clocks, but we will need more bandwidth as we push the core up. And Nvidia isn't going to put highly rated VRAM on these cards just to downclock it to 900 MHz; it doesn't make financial sense to do that.
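For reference, that 115 GB/s figure follows directly from the quoted memory clock and the 680M's 256-bit memory bus; a quick check in Python:

```python
# Memory bandwidth from the numbers quoted above: GDDR5 transfers
# 4 bits per pin per command clock, over a 256-bit bus.
mem_clock_mhz = 900                  # quoted VRAM clock
effective_mhz = mem_clock_mhz * 4    # 3600 MHz, i.e. 3.6 GHz effective
bus_width_bits = 256
bandwidth_gb_s = effective_mhz * 1e6 * bus_width_bits / 8 / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # 115.2 GB/s, matching the figure above
```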
I can go to 1 GHz/1.5 GHz on my 7970M at stock voltage. I saw someone yesterday hit 1060 MHz/1.6 GHz at 1.1 V to score over 7.1K in 3DMark 11 on a 7970M without an XM CPU (probably over 8K with tess off). Temps are great. So if the 680M won't overclock and I can already see the bandwidth limitation, this card is not for me, especially at a $300 premium.
EDIT: This is just my opinion and isn't intended to sway anyone trying to decide between the two cards. When I saw the initial Nvidia PR slides showing a 15-30% increase over the 7970M, I was really happy, and it was time to sell my 7970M and get a 680M. But realistically, cards that score identically in synthetic benches aren't going to magically give you +30% game performance, which is why I said +/- 5%. So, upon further logical thought, it doesn't make sense at this moment. If this card overclocks well, that may sway me back. As it stands right now, both cards are a wash: you are paying $100 for CUDA, $100 for PhysX and $100 for 3D, none of which I will ever use. -
1) The 7970M doesn't get too hot at stock, from what people have been saying. But what about when you overclock it like that? At stock voltage anyway, what kinds of temps are you getting?
2) I only read one thread, but people were saying that the 675M doesn't overclock much because it's already an overclocked 580M and runs really hot. Is this usually the way things are with high-end Nvidia cards? If so, that would really suck. I like the idea of a slightly faster card (and CUDA could be useful for some applications), but not when it costs $300 more and runs at 80°C+ all the time... -
SlickDude80 Notebook Prophet
So dropping to a more reasonable 950/1400, my temps don't crack 70°C.
The 675M is a 580M. It isn't faster or slower; they are the identical card. All current 580Ms can be BIOS-flashed to a 675M. -
Meaker@Sager Company Representative
2) What thread was that? Don't listen to any of them.
675M = 580M (same clocks, same OCing potential of around 25-30%). Temps are somewhat high, but at that point you are running a GTX 560 Ti in a notebook (same clocks).
The 675M and 680M are totally different though, so you can't draw conclusions about one based on the other. -
-
Can someone tell me what kind of min/avg fps you are getting in BF3 on ultra with the 7970M, please? Both stock and OC'd, single player and multiplayer.
-
Meaker@Sager Company Representative
People can expect all the temperatures they like. Until the chip is properly tested and the results released, we can't really know.
-
This has been posted like a hundred times by Slickdude and others... AFAIK it was always 40+ minimum (at stock clocks). -
There is a huge difference between resolutions, as well as between maps and MP vs SP.
Even the HD 7970 and GTX 680 fall below 40 sometimes, in 64-player Caspian Border when huge action is going on. I have tested BF3 with various cards and maps on the Ultra preset at 1920x1200. For example, the Metro map is a joke compared to Caspian Border.
To keep FPS above 60 in any situation, it's best to use the High preset and perhaps some lighter AA. -
-
SlickDude80 Notebook Prophet
With a few simple tweaks in BF3, you can be flying at 65+ fps. I think it is more important to know that the game is playable at Ultra, and then tweak a little. Use SMAA or FXAA, or bring the MSAA down to 2x; it really makes a difference. Also, a slight bump in clock speeds really helps.
-
BTW Slick, 7.1K with tess ON? CRAZY!!
-
SlickDude80 Notebook Prophet
I can't say I'm an Nvidia fan or an AMD fan; I get the best performance and do what makes sense to do.
Yup, 7.1K in 3DMark 11 with tessellation on, on a single 7970M OC'd to 1060 MHz/1.6 GHz. And he did it with a non-XM CPU, which is impressive. -
-
^^ Looking at these benches, it is pretty much impossible for the 680M to get 60+ fps in BF3 with the same settings, considering it is some 30% downclocked... yeah, Nvidia certainly bumped up some numbers.
-
Mmmm... if there's one question I have about the GTX 680M: I see that it's primarily in 17-inch notebooks, but why put it in the Clevo P150EM and not the MSI GT60? I was always under the impression that MSI's cooling was just as good as Sager/Clevo's, occasionally better. I was looking forward to a GT60 sometime around Black Friday/Christmas, but if it won't get the GTX 680M, I'm not sure what I'll do. I don't believe I'll have the money to buy it plus a separate GTX 680M, and then potentially void my warranty by installing it.