They should still keep the base clock, like... forcing max power should hold 1038 minimum for your cards. On my Kepler, if I OC to 1006 and force "max power", it sits at 850 until it NEEDS to clock up. It annoyed the ever-living crap out of me in BF4 prior to the performance update, because it'd calmly sit at 50% usage on my GPUs without clocking up, dropping below the 125fps cap I set.
-
Fiji is supposed to use slightly less power than Hawaii and have better thermals too (it's a bigger chip with more surface area, after all), so I expect it to overclock better than the 290X. Also, I think the 2x 8-pin power is for the 8GB 390X WCE: it takes more power to run the pump plus the radiator fan than a regular air cooler, and they'd expect enthusiasts to overclock the WCE, so they need to leave more headroom.
-
And Titan X is a full GM200. GM200 is exactly GM204 x 1.5 (3072 vs 2048 CUDA cores, 96 vs 64 ROPs, 384-bit vs 256-bit bus), do the math. AMD won the war this round and there is no denying it; the best nVidia can do is release a 985Ti with higher clocks. -
Link4, you're sounding VERY pro-AMD right now, and not making a whole lot of sense, especially before AMD's cards have even come out. Hold off on the "facts" about winning, losing and efficiency until we see things launch. Everybody here has been speculating all day; nobody has said anything definite, and there have been pros and cons to both companies.
-
Kind of telling how, when I modded the voltage table to run 1.25V as soon as any load was put on, all stability issues instantly vanished. Of course, efficiency also went out the window and the TDP numbers don't look as nice anymore. Not that I give a damn, but it kinda makes me suspect this is yet another trick nVidia pulled with Maxwell to make it look more impressive from a perf/watt standpoint, at the expense of 100% stability. -
As for pros and cons, well all I see is more and more cons from nVidia recently. Sure there is a chance that it will overclock better but that alone isn't a good enough reason for me to get their flagship. -
I hope AMD wins, mind you. AMD has been trying this whole time, but nVidia is too complacent. I need them whipped into shape. Intel too. -
So to further elaborate on what I said above, these pictures showing the boost and voltage tables of my Gigabyte 970's stock vbios should help to make things clear.
Let's start with the boost table:
This one is fairly straightforward and doesn't really need much explanation. The only thing to note here is that the clock states #35 through #74 - highlighted in yellow - belong to the P0 state.
Now let's look at the voltage tables:
Do you see the problem? No? Well, each clock state has a fairly wide voltage range, spanning about 106mV for CLK 35 all the way up to a ridiculous 219mV for CLK 60.
Ignoring the SLI voltage discrepancy bug for now, when running each of my 970s solo, one 970 boosts to 1380 (CLK 63), and the other 1405 (CLK 65).
Now look at the corresponding voltage entries for CLK 63 and CLK 65.
You can see each state has a defined upper limit of 1.281V, while the lower limit is 1.075V for CLK 63 (1380 boost), and 1.081V for CLK 65 (1405 boost).
As you can imagine, trying to push almost 1400MHz on the GPU core with a measly 1.081V is going to end in tears. Now what I don't know is exactly what algorithm the vbios uses to pick the voltage for each clock state. Actually, on second thought, the algorithm is most likely programmed into the driver, and the vbios simply delineates what voltages are "allowed" for each boost clock. In any case, I'm going to wager a guess that it adjusts the voltage dynamically based on load. This constant, rapid micro-adjustment is the source of Maxwell's efficiency.
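To illustrate that guess, here's a rough Python model of how a driver *might* pick a voltage inside each clock state's allowed window based on load. The table entries are the CLK 63/65 values quoted above; the linear-by-load selection is pure speculation on my part to show the mechanism, not real driver internals.

```python
# Speculative model: each clock state allows a [min, max] voltage window,
# and the driver picks a point inside it based on the current load.
VOLTAGE_TABLE = {
    63: (1.075, 1.281),  # CLK 63 -> 1380MHz boost
    65: (1.081, 1.281),  # CLK 65 -> 1405MHz boost
}

def pick_voltage(clk_state, load):
    """Interpolate linearly between the state's limits by load (0.0-1.0)."""
    vmin, vmax = VOLTAGE_TABLE[clk_state]
    return vmin + (vmax - vmin) * load

# A sudden jump out of an idle cutscene: the clock snaps to CLK 65, but if
# the measured load hasn't caught up yet, the card is fed the bottom of the
# window -- a measly 1.081V at 1405MHz.
print(round(pick_voltage(65, 0.0), 3))  # lower limit of the window
print(round(pick_voltage(65, 1.0), 3))  # upper limit at full load
```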
Unfortunately, this is also its Achilles heel. Because the voltage range for each clock state is set so wide, sometimes the voltage simply gets stuck at the lower limit, and doesn't ramp up fast enough to keep up with the GPU core, which results in crashing.
From my own experience, this is especially prone to happen right after a non-demanding cutscene, where the core is basically chilling out, and then immediately thrown back into action after the cutscene ends. What typically happens - as I've observed from Afterburner's OSD - is that the boost clock shoots right to where it should be due to the suddenly increased load, but the voltage is either stuck at the lower limit of that particular clock state, or worse, stuck in the voltage range of a lower clock state. This is what I meant by voltage/boost table crossover.
Suffice it to say, the only fix for this dynamic throttling garbage is to clamp both the lower and upper limits to the same value for each clock state #35 through #74. I set mine to 1.25V, so no matter where the GPU sits in the boost table, it will always be fed a constant 1.25V.
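To make the fix concrete, here's a minimal Python sketch of what the vbios edit amounts to. The table values here are illustrative, not dumped from a real vbios:

```python
def clamp_voltage_table(table, first=35, last=74, volts=1.25):
    """Return a copy with min == max == volts for every state in first..last."""
    fixed = dict(table)
    for state in range(first, last + 1):
        if state in fixed:
            fixed[state] = (volts, volts)
    return fixed

# Illustrative entries: (lower limit, upper limit) in volts per clock state.
table = {35: (1.000, 1.106), 63: (1.075, 1.281), 65: (1.081, 1.281)}

fixed = clamp_voltage_table(table)
print(fixed[63])  # -> (1.25, 1.25): same voltage no matter the boost state
```

With both limits pinned, there's no window left for the driver to get stuck at the bottom of.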
Hopefully I explained that well enough for people to understand. Maybe I should start a separate thread on this... Done. -
That's a shame. Sorry to hear of your troubles. It doesn't affect the 4GB version of the 980M though. It has never crashed the driver on me at stock clocks, ever. It has also never forcibly downclocked itself. Always stayed rock steady at max boost clocks, no dips whatsoever. I suspect that's because I have a 60Hz display as opposed to your 120Hz.
Any more rumours concerning mobile GPUs? Desktops are all well and good, but they're not compatible with my life at the moment, so I find it hard to get really excited about them unless I know I can get my hands on these technologies in a mobile form factor. The 390X looks a bit too power hungry for mobile. I hope the mobile flagship has HBM too. -
600mm² lol
-
No I see those problems when running games that don't support SLI. Could it be a desktop only thing? Maybe, I don't know.
For the love of everything holy DO NOT get me started on the SLI voltage bug. -
-
Oh I see, SLI only? That makes sense why I've never encountered it.
Hey, there's always DX12 that could fix it. Maybe. Knock on wood. -
How would DX12 fix a firmware bug? -
-
Alright let me clear up a few things here.
1. The issue I'm talking about happens on single cards. The SLI voltage bug is an entirely separate issue altogether.
2. The issue happens AT STOCK CLOCKS. Overclocking simply makes it worse but it definitely happens AT STOCK.
3. There's circumstantial evidence to suggest ASIC quality plays a role in this bug. The worse the ASIC, the higher the likelihood of hitting this bug, because the default VID needed to sustain a given clock state is higher.
4. I speak only for desktop cards.
When I said I needed to downclock by 30MHz in Wolfenstein, I meant downclock FROM STOCK. Trust me, even with just a +10MHz OC the game becomes unplayable and crashes anywhere from 10 seconds to 1 minute after a cutscene finishes.
There. -
Oh really? Well, 69.9% ASIC appears to be high enough that I never see crashes at stock clocks.
Only when I've overclocked have I ever got crashes. -
ASIC is just one of many potential factors.
Plus I still don't know if the driver handles desktop and mobile GPUs differently. -
Mobile has the same behavior. It's really irritating. I really do think the 880M was a beta test of Maxwell's "efficiency" algorithms. It drops below reference clocks as well in games that don't always need the full power, which results in horrendous stuttering in games that aren't optimized well; the F.E.A.R. series is a prime example off the top of my head. I was playing the first one the other night maxed out, and it ran the core in the 800s most of the time, but when fire effects or something appeared on screen (especially SloMo mode, which you use all the time) it stuttered as it struggled to get the clock rate up into the 1000s to match the load, then fell again. I'm hopeful the issue has been addressed, or will be soon in future drivers, but for now it's annoying at best; worst case, it gets you killed because it spikes input lag through the roof.
If AMD comes out with something that beats the nVidia cards with a decent TDP and a great price, I'm saying goodbye to nVidia and going team red. -
Yet to see anything like that with my 980M. They must have fixed it since the 880M disaster.
-
The game most definitely plays a role. Ironically the less demanding the game the worse this issue gets. I suspect the algorithm just doesn't know what to do when the card doesn't need to go full boost, and everything just falls apart.
-
-
Crank up the supersampling then
-
Never noticed an issue with my older games like Half Life 2 or less demanding titles like Insanely Twisted Shadow Planet.
My 980M is rock solid in all situations on stock clocks *knock on wood * -
lol damn next time start with "I have Prema's vbios mod"
Like I said, if you fix the messed up voltage table in the default vbios, all the issues go away. -
I have Prema's vbios mod as well... Still an issue.
I suppose it could be an SLI issue though. -
-
Any indications on whether the high end mobile discrete GPU's from AMD will have HBM?
I'd love it if the R9 M390X had HBM.
With the kind of bandwidth HBM is able to provide, maybe a 4GB top end mobile GPU would be possible? -
-
Recent fresh rumors from China:
The R9 M390X and a bunch of other mobile 300-series cards will be presented in June at Computex. The M390X will have 8GB VRAM. No mention yet of whether it uses HBM.
-
-
Do you believe AMD's new flagship can actually defeat GTX 980M?
-
If they have a mobile chip that can deliver at least half the performance of the 390X, then they can beat the 980M, just like Fiji beats GM200, and at lower clocks too.
-
As much as I want AMD to win, it's a bit premature to say Fiji beats GM200 at lower clocks. And no, those Chiphell leaks do NOT count as credible sources.
-
This one is a much more credible leak, and very likely an AMD slide too.
This slide was from AMD's in-house presentation on March 16th (very unlikely to be fake), and from this alone we can expect a 55-60% performance improvement over the 290X in most cases. That's just not the case with the Titan X; you're very lucky if you get even close to a 50% increase over the 290X. Also, the Chiphell leaks may not be credible, but they were very accurate when it came to the Titan X (some benchmarks came out much earlier than its launch), and looking at the slide above they're also close in performance for the 390X (they had the 4GB version; not sure which one AMD used in the slides). The leaks may be real or fake, but the performance level is not fake. -
Right, because we should draw conclusions based on leaked "in house" presentations that only show relative results using a skewed y-axis and with no actual performance metrics.
What are those numbers referring to, min FPS, average FPS, max FPS, power consumption, heat output? We don't know. Hell it could be "magical rainbow unicorns rendered per second" for all we know.
(actually, if you look real close at the bottom right corner, you can just make out what it says: "Based on performance estimates". That's right, those are ESTIMATES, probably derived from "marketing math" LOL)
At the very least none of the slides say anything about clockspeeds, so at least refrain from making the "at lower clocks" conclusion.
As for the Chiphell benchmarks, they're all fake. If their prediction was close it was nothing more than a lucky educated guess.
Until we have actual benchmarks stating the 390X is so and so faster than Titan X, I maintain we don't jump ahead of ourselves. Stating that the 390X might be faster than the Titan X IF the leaks are all accurate is fine; outright stating it as fact is not. -
Never mind, I found the answer:
-
Then stop wording them as if they were facts.
"Fiji beats GM200, and at lower clocks too" is an unambiguous statement of fact and means Fiji is already out.
I'm also not fond of your habit of pulling arbitrary numbers and percentages out of thin air. -
-
Ironically the real Link is fully clad in green (as is this smiley --> )
-
blasphemous
R9 370 Details leaked - Finally a new architecture from AMD?
Discussion in 'Gaming (Software and Graphics Cards)' started by Cloudfire, Mar 11, 2015.