KF2 is also UE3
-
[Embedded video: https://www.youtube.com/embed/U7QHdh6iGXQ]
-
Sounds great to me! I manage ~120 FPS at 1058/6000 in Toxikk, so SLI should love it. Maybe I'll force it to 125 FPS and OC my cards a bit; hope I don't hit a CPU bottleneck.
-
Tested the 350.12 driver earlier in 3DMark Fire Strike at 1072/1490 and got the same score as with 344.75 (P6360). Experienced no throttling with this overclock on my GTX 780M. It is strange that all your AW models at 1045/1500, and some others, experienced throttling in Fire Strike with this driver. This suggests the driver behaves differently on different laptops, even within the same brand.
I have now also run the 3DMark 11 benchmark with driver 350.12, since many have complained about throttling problems when overclocking the GTX 780M. Ran at 1058/1485 (no throttling) on the GTX 780M and got a score of P9456. Does anyone have a comparable GTX 780M score (1059/1485) on other drivers?
It works fine on my AW17 R1, in any case.
http://www.3dmark.com/3dm11/9745494
-
^ It is possible that it simply hates SLI... remember the 880M had huge problems in SLI that didn't show up in single-GPU setups? And the 980M seems to be OK in single-GPU AW machines, much more so than in the SLI models.
-
Mmmm, forced good SLI into Killing Floor 2. Good, good. Some nice nVidia Inspector magic here.
Toxikk was dealt with via some good old NCP magic.
I feel good tonight. -
Here's an example video... [ LINK] -
@Mr. Fox Does this happen with power management forced to "Prefer maximum performance" in the global settings in NCP? And, just in case, with "PCI Express Link State Power Management" set to "Off" when plugged in, under the Windows power options?
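For anyone who would rather script that second setting than dig through Control Panel, here's a minimal sketch (assuming Windows with Python available and an elevated prompt; SCHEME_CURRENT, SUB_PCIEXPRESS and ASPM are the standard powercfg aliases, but double-check yours with powercfg /aliases):

```python
import subprocess

def disable_pcie_link_power_mgmt_on_ac() -> None:
    """Set 'PCI Express -> Link State Power Management' to Off while plugged in."""
    # Index 0 = Off, 1 = Moderate power savings, 2 = Maximum power savings
    subprocess.run(
        ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
         "SUB_PCIEXPRESS", "ASPM", "0"],
        check=True,
    )
    # Re-apply the active scheme so the change takes effect immediately
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

if __name__ == "__main__":
    disable_pcie_link_power_mgmt_on_ac()
```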
I can say without a shadow of a doubt, that is something I've never even *SEEN* before on this computer, far less with this driver package or any program. -
[Embedded video: https://www.youtube.com/embed/aPxSge4NMlQ]
-
Oh god no take it away *shudders at horrible throttle-monster*
-
Edit: And, yeah, this is very similar to how 980M SLI malfunctioned in my M18xR2, just not quite as severe as 980M SLI throttling was.
Strange (and most disturbing) that NVIDIA can screw up performance this badly with a simple driver update. -
Maybe this is the new direction from Nvidia... What will the next driver bring?
-
It is always good to complain in forums, because bad publicity sometimes produces greater accountability, if only because companies want to avoid more bad press. But it is also useful to poop on the carpet in the GeForce forums as much as possible and tell them how badly they suck in their own house. NVIDIA has a bad habit of only taking action to quell bad publicity and sweep the dirt under the rug, but they NEVER seem willing to admit their mistakes... they always seem to have a convenient excuse for their mistakes and/or bad judgment.
Official NVIDIA 350.12 WHQL Game Ready Display Driver Feedback Thread (Released 4/13/15)
-
Eh, same story on the desktop side, with nVidia no longer optimizing for Kepler cards (Source)
Now, before everyone calls me a mad (tinfoil) hatter, I encourage you to look at TechPowerUp's very nice performance summary charts and compare the 780/Titan against the reference 970 from when Maxwell was first released, and then 4 months later.
Specifically, lay these charts side by side:
1920x1080 Performance
2560x1600/1440 Performance
From the 970 review (Sep 2014):
At 1080p: 780 = 88% 970; Titan = 95% 970
At 1600p: 780 = 90% 970; Titan = 98% 970
Now skip forward 4 months to Jan 2015 and look at the 960 review:
At 1080p: 780 = 85% 970; Titan = 92% 970
At 1440p: 780 = 85% 970; Titan = 92% 970
So in just 4 short months, both the 780 and Titan lost 3 percentage points of relative performance against the 970 at 1080p, while the loss grows to 5/6 points at 1600/1440p.
To address the 800 lb gorilla in the room: yes, quite a few of the tested games changed and the 960 review added more games, but there are still 10 games common to the two reviews, so it's a reasonable sample size.
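If anyone wants to sanity-check the arithmetic, here's a minimal sketch in plain Python with the summary percentages quoted above hard-coded (the same subtraction applies to the 7970 GHz Edition figures further down):

```python
# Relative-performance figures read off TechPowerUp's summary charts,
# expressed as "percent of the reference GTX 970's performance".
# (early, later) = (Sep 2014 GTX 970 review, Jan 2015 GTX 960 review)
figures = {
    "GTX 780 @ 1080p":         (88, 85),
    "GTX Titan @ 1080p":       (95, 92),
    "GTX 780 @ 1600p/1440p":   (90, 85),
    "GTX Titan @ 1600p/1440p": (98, 92),
}

for card, (early, later) in figures.items():
    # A negative delta means the card lost ground against the 970 over those 4 months
    print(f"{card}: {later - early:+d} percentage points vs. the reference 970")
```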
=====================================================================
Slightly OT (stop reading if AMD makes you go...)
I'm amazed at how much the 7970 GHz Ed closed the gap with its big die Kepler (GK110) rivals in just 4 short months. Seriously if you do the same comparison as above except with Titan/780 instead of 970, you'll see:
Sep 2014 970 review:
At 1080p: 7970 GHz Ed = 76% Titan, 83% 780
At 1600p: 7970 GHz Ed = 78% Titan, 85% 780
Jan 2015 960 review:
At 1080p: 7970 GHz Ed = 82% Titan, 88% 780
At 1440p: 7970 GHz Ed = 85% Titan, 92% 780
So again, in 4 months the 7970 GHz Ed closed the gap by 5/6 percentage points at 1080p, and a pretty good 7 points at 1600/1440p.
If we compare against its original rival, the GTX 680, you'll see that it went from on par to just a hair faster at 1080p, but at 1600/1440p the advantage increased from 12% to 18%.
Adding insult to injury, if you compare the 7970 GHz Ed against the reference 970 over those same 4 months, the gains are much smaller: 3% at 1080p and only 1.8% at 1600/1440p. Further evidence that nVidia simply abandons owners of older hardware once it has released a new lineup.
Oh and before someone screams drivers:
- AMD only had 2 new drivers out between Sep 18, 2014 and Jan 22, 2015: Catalyst 14.9 (Sep 29, 2014) and Catalyst 14.12 (Dec 9, 2014)
- nVidia had at least 6 new drivers out over the same period
-
I don't think you're mad (nuts) at all. Those are valid observations.
Although optimization is really nice to have, I'd be happy enough if they would hold their ground on performance and knock it off with breaking things. Just make drivers that work, stop trying to control what I can do with my own property, and stop trying to impress me with stupid feature gimmicks that don't make my beast run faster. That won't distract me from what's broken, even if it is shiny or has sparkles. It makes sense that they would focus on getting Maxwell right, since it is their latest and greatest. Why not just set the expectation up front and announce, "This driver is designed for Maxwell only, and might even make your Kepler run like crap... so don't use it." LOL. -
I guess the moral of the story is that you can never satisfy all gamers, new and old.
-
That's true. But if I were one of the guys with a new 970 or 980 GPU that malfunctioned with the newest drivers, I wouldn't be too thrilled about that either. 350.12 is all over the board. Works great for some, but not for others running the same GPU. Really strange... especially since drivers used to be something NVIDIA was consistently good at. Maybe they're just tired, LOL.
-
I'm curious whether this is nVidia starting to become Intel, because they can afford to just not care anymore due to the lack of competition, or if there is something genuinely wrong with Maxwell. Not long ago I wrote a book ranting about Maxwell's fake efficiency, which mainly comes from a very aggressive throttling algorithm and can cause instability problems even at stock. Perhaps nVidia is reaping what they sowed with this fake efficiency...
I also think honesty has become a very rare commodity these days, especially when it comes to any for-profit organization. Hell, just in the context of tech companies alone, I'm pretty sure that if I dig deep enough, I can dig up horror stories on just about every company in the field. -
I think part of the problem with inconsistent results is they are preoccupied with asinine "features" and trying too hard to be slick with a level of system integration that is totally inappropriate and unrealistic. What I mean by that is there are too many variables, dependencies and contingencies that should not even be a consideration for NVIDIA. They need to focus on making the stinking GPU... period. They should say, "OEMs, this is a 125W MXM module... it will draw 125W under normal use, and this GPU requires that your motherboard have X in order for it to work right... Now, shut up and deal with it accordingly." Instead we get, "Well, this GPU can use 125W if there is a full moon, unless it's cloudy with a chance of rain, in which case it is going to draw 135W... but we're going to make it only use 100W if the battery charge is below 70%, because that equals an extra 3.675% of battery run time." Or, "This GPU can sense how much output the PSU is rated for, and if the imbeciles that built it cut corners to save a few bucks, we can still make it work fairly well by blocking overclocking through the drivers, except for when the relative humidity in Chicago divided by the ambient temperature in Warsaw on Monday is greater than 20, in which case it will not overclock, but if it is less than 20 it will still overclock, but it will throttle... except for when it doesn't, in which case there will be a BSOD unless the CPU is drawing less than 40% of its TDP."
I am being ridiculous to make a point. It used to be more like the "this is how it is, now shut up and deal with it" approach, and I think that worked out a lot better. Fancy does not mean better, and sometimes it actually means worse. They need to build the GPU, make it awesome, and let the chips fall where they may with the OEMs. A 970, 980, Titan, or any given MXM card should run the same on every machine with the same core components, assuming it isn't messed up or misconfigured by an incompetent OEM or end user. -
-
The GPU should be a "dumb" component, as in system-oblivious and agnostic. I would MUCH PREFER my system to turn off from lack of power if I exceed the capacity of the PSU than have to deal with erratic behavior. I can deal with functional limits by backing off... on my terms, not theirs. When I was trying to get 980M SLI to work right in my M18xR2, it was not possible to overclock to the point of tripping the breaker on the AC adapter or having it turn off like it does with 680M SLI or 780M SLI when I push the overclock too far. It should have been, but it throttled as soon as it hit a certain power consumption level. That level was eerily similar to the nominal rating of the AC adapter (~325W), and with the 350.12 drivers making 780M SLI exhibit behavior similar to what 980M SLI did, I'm ready to bust some skulls. Not because I care about a random messed-up driver--which is forgivable and forgettable--but because they may have taken a liberty that I feel should not be theirs to take. That makes me feel violated, and without any lubrication to lessen the trauma.
Looking back on things, going back to Fermi days, the nonsense started with unnecessarily complex performance states and the GPU Boost marketing gimmick. Remember 580M throttling, and matching P0 clocks with P1 clocks magically fixing things? There should only be two performance states for a GPU... full-on 3D and no-demand... on and off. Adding the different performance levels is just retarded. When you examine that rat's nest in the vBIOS with tools like NiBiTor, Kepler BIOS Tweaker or Maxwell BIOS Tweaker, it's enough to make a guy want to puke. It does not need to be that complex, and making it so adds no value, IMHO. All it does is enhance the opportunity for malfunction and instability to surface.
I cleaned up my previous post a bit. I had a few typos from falling asleep when I was typing my thoughts. -
But I agree with Mr. Fox on the throttling. There is really no need at all for that to happen. Laptop makers should be able to artificially limit the clock speeds for the ultrathins if the cooling or power requirements aren't up to snuff, but make it a special SKU and call it something different, like L970M or 968M, to designate its lower performance.
The problem is, Intel has been doing this for years. Artificial throttle limits have impeded laptop performance for years. I think it would be OK if they did this, as long as they offered an option for an "Extreme" CPU or GPU that gave users ultimate control over the clocks. -
Buuut we're not likely to get that, because the market has spoken, and the same thing I called is happening, especially on other forums. People are hunting for stuff that'll last them 4 years playing BF4 at good graphics settings, and "5 pounds" is too heavy. It's either a cheap-out with a Y50, or people are hunting Razer Blades and Aorus laptops, all of which have issues that don't need to be there. When people come asking for video render machines or anything that's worth its salt in the CPU department, because that's what they want to do and gaming is secondary, and I tell them "well, your choices are limited unless you want to deal with a throttle-happy CPU, as gaming is, despite what many think, 'medium-low' CPU load at best, and you're asking for 'medium' or 'medium-high' loads", I get JUMPED ON. Legit bashed, fought down, attacked, what have you, with a ridiculous number of people saying "I have <insert crap machine> and it plays games fine!" or "I don't notice any issues!" or even the people who will
prepare yourself Mr. Fox
DEFEND the fact that machines aren't going to keep their speeds/do their job properly, by saying the following:
- Manually lowering the turbo bins so it won't throttle is a good alternative
- Allowing it to throttle, since it "won't affect the performance all that much"
- Claiming that the user can't seem to scrounge up more than $900-$1000 for a render/game machine, therefore what they can buy is fine (never suggest saving up more, no, that's terrible; yes, I understand some people need it NOW, but those cases are rare)
- Insisting that it's "just a laptop" and it doesn't need to work so well, and that if anybody wanted to do something, they'd buy a desktop
And possibly an even larger issue is that people don't seem to get that things are broken/deteriorating. As long as a laptop is thin, people are happy (because Maxwell's unnatural coolness automagically means all tech, now and forevermore, will be just as cool or even cooler, and that we will soon have flagship mobile GPUs in sub-1"-thick notebooks without any heating or power issues at all), totally uncaring about any lack of storage options or soldered anything.
Or we've got the situation where people have the "I don't notice this, so it must not exist" attitude with desktop cards, because heaven forbid any user who does more with their system than sitting there playing CS:GO, BF4, and every "Assassin's Creed" title Ubisoft churns out (sheepishly, away from your circle of friends who openly bash Ubisoft) has experienced and proved more issues than you probably notice. It's like the 970 issue. Any time I see someone looking for a 970, I tell them to buy a 290X or save up and get a 980. People are still harping that the 970 is a great card. Anybody who experiences the issues with them (notably users of two or more monitors on Win 8.1 who like to multitask) is instantly bashed/shamed for ever even thinking about complaining about a LEGITIMATE defect in a card that hampers the way they use their PC. I said it once and I'll say it again: I cross 3GB of vRAM so often it's a joke; if I had 970s, I would have noticed something was wrong within about a week of launch.
What we have is a lot of people with money who are pretty spoiled and have an "if it doesn't affect me, it might as well not exist" attitude, and the anti-consumer crap we're speaking out about right now might as well not exist for all they care. It's why I keep saying I hope Intel/nVidia REALLY screw over the desktop people the way the laptop users are getting it. I hope all of them need to hack their system BIOSes and vBIOSes to get what was once considered "normal" functionality, and that they all begin to complain. While I spend my time laughing at them and giving people a taste of their own medicine, maybe things might change for the better.
But one thing is for certain: nothing will happen without competition. Intel could solder every single desktop chip to the board for Skylake, and if AMD's new CPU line has worse IPC than Bulldozer and doesn't come clocked at 8GHz out of the box, you can bet your next paycheck that people are gonna be buying Skylake for gaming and anything that isn't pure productivity. -
-
The pantywaists that got us into this terrible mess, with their pathetic sniveling and whining about 5-pound 15-inch and 7-pound 17-inch laptops being too big, thick and heavy, should be spanked severely and sent to bed early with no dinner. But the evil OEM witches that thought it would be OK to stop paying attention to mobile extreme-performance enthusiasts should be burned at the stake. Bring me my belt and a book of matches, LOL.
-
My GPUs already run at P0 doing video acceleration (like YouTube in a browser). I don't want my GPUs to have any option to throttle from being "too hot" because I want to be the one that defines what that means. I'd rather fix my own thermal problems and just have it run full blast until it shuts down my laptop once it reaches the thermal trip point. I have my CPU thermal management settings disabled or set to their maximum available values in the BIOS on my machines that support it. The thermal trip point is hard-coded to protect the CPU and cannot be modified, but throttling clocks to cool down is largely averted. It works perfectly like that, and I would do the same with the GPUs if I could. I also prefer driving a car with a stick shift instead of a slush-box tranny.
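If anyone wants to check what performance state their GPU is actually sitting in while, say, a YouTube video plays, something along these lines should do it (a sketch assuming nvidia-smi is on the PATH and that your GPU/driver exposes these query fields; some mobile/Optimus setups don't report everything):

```python
import subprocess

def gpu_pstate_report() -> str:
    """Query the current performance state, clocks and temperature via nvidia-smi."""
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,pstate,clocks.gr,clocks.mem,temperature.gpu",
         "--format=csv"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Prints a header plus one CSV line per GPU,
    # e.g. "GeForce GTX 780M, P0, 993 MHz, 2500 MHz, 67"
    print(gpu_pstate_report())
```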
Having a performance state in the middle, like the old days, would be OK, but I would still want to have absolute control over it rather than letting the GPU/vBIOS decide the best course of action on my behalf. -
That's cool, Mr. Fox. Just remember that not everyone uses their laptop as a desktop like you do, where battery life/heat/fan noise aren't considerations.
-
Yes, I understand that. And, I agree there is a need for that flexibility. The main thing is I want everyone to have the option of exercising absolute control over their hardware if they want to. There's no reason both cannot coexist. The GPU does not need to make all of the decisions. Letting it have unilateral control over its own behavior without our input into the situation is part of a huge problem we are all faced with.
-
Me, too... it blows... real bad.
-
But with Intel going full retard BGA on mobile CPUs and nVidia trying to block overclocking, I doubt things are going to change for the better anytime soon. -
-
Or, you know, it could be that GCN is so radically different from what they'd done in the 5 years prior that it's taken them time to figure things out and, as a result, they're still squeezing optimizations out of it.
-
Really, that's almost as crazy as what Pelosi said about not being able to know what is included in the Obamacare package until after the legislation was passed. How could AMD engineer something so radically different that they had to figure it out later? I guess that could happen by accident, which still means it took them too many years to figure out how to fix their products. That doesn't give me any warm fuzzies, even if they got lucky and figured out how to correct their own mistakes after many years of effort.
-
Fix? GCN was never broken. Software development is a gradual process. GCN was AMD's biggest architectural change since TeraScale in 2007, while Nvidia went down a more gradual path: from Tesla, to adding double precision in Fermi, to nerfing DP in Kepler, to killing DP in Maxwell. I'm just thankful that AMD is still improving GCN 1.0, while Nvidia seems to have abandoned Kepler long ago in terms of meaningful optimization. Which is why they were evenly matched at release but now GCN is clearly faster, as the numbers posted by n=1 showed.
As far as "mobile products still as pathetic as the day they were released" goes, didn't the 7970M make laptop enthusiasts all wet, including yourself? -
GCN in a desktop has always been fairly decent, but that's not very relevant in a discussion about notebooks.
Their mobile GCN products have been plagued with issues from day one. Gradual software development isn't worth a lot if products frequently fail within their first 12 to 18 months of use. None of the 7970M rebrands have been anything to be excited about in terms of performance. Here we are, several years later, and the best we can muster is a "meh" for mobile GCN. -
I think it's pretty clear by now that GCN just isn't made for mobile.
-
What's the difference?
I was trying to convey that Maxwell, and Kepler to a lesser extent, are mobile-focused--built from the bottom up. Nvidia has made this very clear. GCN is built from the top down. Opposing design philosophies. -
@octiceps - I agree with you 100%. My only point is that improvements to GCN are not very relevant in the mobile realm, where GCN continues to leave a lot to be desired even though they have had ample time and opportunity to do something amazing with it.
-
That's because, with GCN, AMD never set out to make a great high-end mobile dGPU. There's a long list of reasons and history behind it, including AMD's reasons for buying ATi in the first place, why Bulldozer was designed the way it was, and AMD's business strategy, which I won't bore you with.
-
Fair enough, bro. Chances are, I wouldn't care what their reasons are anyway, so good call. + Rep for your patience.
-
Ol' Foxy is a results-focused man. The "why" it doesn't work doesn't help much XD.