In the end, it doesn't really matter. 970 sales went through the roof for a good reason. The fact that it has two memory partitions doesn't negate its good price/performance ratio...
-
It's still dishonest advertising at best. I'm not as outraged as I should be because I never intended for my 970s to be a long term, "futureproof" solution. I always viewed them as stop-gap cards and they still are to me. But I can see how someone who picked up the 970 hoping to get a few years out of it would be pretty peeved.
-
Regardless of the price, I was never particularly impressed by how much the 970 was cut down compared to the 980, relative to previous trends for x70 and x80 GPUs on the same ASIC (the 770 and 780 don't count, because that was GK104 vs. GK110). This is just further validation of that.
-
And now we know why the 970 is so cheap: no reference design, a coil-whine lottery, and the memory partition. Talk about a big mess.
-
It reminds me of this article: Video Card Failure Rates by Generation - Puget Custom Computers
Now, that's only from one AIB (ASUS) and mostly DirectCU cards, but perhaps the reason AMD cards fail more often is that they're cheaper: the AIB partners cheap out on PCB and component design to meet margins. That's why it's generally recommended to get AMD reference boards instead of non-reference ones. -
Also, did people not catch on to the incorrect specs, which are only being corrected now, 4 months after release?
-
And the 980M/970M are even further cut down from that. That's why *I* was pissed at it. But seeing as how Maxwell is a power drinker beyond stock, a full 980 in a 120W envelope would have been incapable of OCing with current PSU tech, and it'd have been a laughingstock among enthusiasts (no matter how strong it was) if the cards were locked at stock.
Because GPU-Z and other hardware-checking programs read the specs as what nVidia published: 64 ROPs, 4GB of installed vRAM, etc. Unless someone was actively monitoring vRAM usage, they'd not have known, and since most desktop users know about as much about vRAM as they do about mobile CPUs, hardly anyone looked closely. Anyone worth their salt probably went for 980s or stuck with Titans/780 Tis/etc. anyway. I can guarantee you that if I had those cards in a system, I would have noticed from day 1 with my overlays.
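For the curious, you don't even need a fancy overlay to watch this; a minimal logger sketch using NVIDIA's NVML bindings (assuming the pynvml package and a single GPU at index 0) would look something like:

    # Minimal vRAM-usage logger sketch (assumes the pynvml package is installed)
    import time
    from pynvml import (nvmlInit, nvmlShutdown,
                        nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

    nvmlInit()
    handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    try:
        while True:
            mem = nvmlDeviceGetMemoryInfo(handle)
            # On a 970, watch for usage creeping past ~3500MB into the slow segment
            print("vRAM in use: %.0f MB" % (mem.used / (1024 ** 2)))
            time.sleep(1)
    finally:
        nvmlShutdown()
-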
Meaker@Sager Company Representative
Well, the good news is that the ROPs on the mobile cards are fully intact (where applicable), so this issue does not impact us.
-
How do you know?
-
Meaker@Sager Company Representative
AnandTech | GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation
GTX 980 1 Segment (4x8 MC)
GTX 970 2 Segments (4x7 MC)
GTX 980M 1 Segment (4x8 MC)
GTX 970M 1 Segment (3x6 MC)
GTX 965M 1 Segment (2x4 MC) -
Wrong page, but I see it.
AnandTech | GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation
-
Meaker@Sager Company Representative
I linked the start of the article so people could read the whole thing, then posted the important information in the table.
-
Oh, they could if they have time. Not everyone does, though, or cares enough to get into the nitty-gritty.
-
Meaker@Sager Company Representative
Yes, the TMUs are connected to the shader units, so they get disabled as the shaders are. This is how it has always been stated; GPU-Z just hasn't been able to handle it properly.
-
That's TMUs, not ROPs. Not related to the discussion here, but 96 TMUs is correct for the 980M; 128 is for the desktop 980.
-
No no, you misunderstand. I'm saying I'm surprised people aren't (or aren't as) outraged at nVidia passing around incorrect spec sheets to everyone for the past 4 months, and only revealing the 970's true specs now that things have blown up in their face.
Myself, I'm actually more p!ssed about this than the vRAM wall, because unless I've understood the situation wrong, this is simply outright deception. There's just no reasonable explanation for why nVidia never bothered to correct the specs while the 970 was selling like hotcakes and sold out everywhere, yet suddenly, 4 months later when the feces have hit the fan, they come out and correct them. -
Tell that to the duped 880M owners. It's been, what, 10 months and counting? 2014 was not kind to Nvidia.
-
Well, according to AnandTech's report, the engineering team gave the correct specs to the marketing team, the marketing team didn't actually update them, and nobody complained, so nobody looked. Or something. That's pretty much the most concise way I can explain it.
As for the deception thing, it was supposedly a misunderstanding. Seeing as how their bloody website apparently lists the GTX 470's specs when you click on the GTX 480, and they have all their boost clocks set wrong, I would be inclined to believe that. But either way, people were not told they were buying a card with 3.5GB of fast vRAM and 512MB of slow vRAM. Also, due to the way the card is wired, a game can get pushed into the slow vRAM at around 3GB of its own usage, because Windows and windowed/borderless-windowed applications eat vRAM in the background. Hell, I'm sitting here using 631MB of vRAM with basically nothing open: Chrome, Steam, TeamSpeak, music, and TweetDeck. If I had a 970, that means I couldn't use insane textures in a game like Titanfall, because it'd switch over to the slow memory. Imagine, my mobile GPU outperforming a big bad 970 because nVidia wired it like Frankenstein's monster.
nVidia should offer something back to people who bought it expecting 4GB of fast memory, and should make sure people know about its issues in the future. -
They should have just sold the 970 as having 3.5GB of vRAM and left it at that. It would have been another distinguishing factor between the 970 and 980. But honestly, why not 8GB of vRAM, since their mobile counterparts have that much?
-
moviemarketing Milk Drinker
My guess is they were planning to wait 6-12 months before dropping the 8GB versions; why not get the same people to spend some more money to upgrade again? -
Probably because 3.5GB is a weird number nobody cares for, the card does technically have 4GB of vRAM attached, and a 256-bit memory bus isn't supposed to carry that amount; capacities are expected to come in the usual steps (128MB, 256MB, 512MB, 1GB, 2GB, 4GB, 8GB, etc.). The same thing happened with the GTX 660 Ti: a 192-bit memory bus is supposed to have 1.5GB or 3GB of vRAM attached (which is why the 870M had 3GB and 6GB variants), but they added an extra memory chip, hanging off a single memory controller, that doesn't run at the same speed as the first 1.5GB. Most people didn't know or care about it, though. So it's been done in the past, but I've never seen the amount of memory be "neutered" rather than "bolstered"... the 970 remains, as I've called it, a Frankenstein-wired card.
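The arithmetic behind "supposed to" is simple; a rough sketch, assuming a balanced design with the same-capacity GDDR5 chip on every 32-bit controller:

    # A GPU memory bus is built from 32-bit controllers; a balanced design puts
    # an identical chip on each, so capacity scales in fixed steps per bus width.
    def natural_capacities_gb(bus_width_bits, chip_sizes_mb=(256, 512, 1024)):
        controllers = bus_width_bits // 32
        return [controllers * size / 1024 for size in chip_sizes_mb]

    print(natural_capacities_gb(256))  # 256-bit -> [2.0, 4.0, 8.0] GB
    print(natural_capacities_gb(192))  # 192-bit -> [1.5, 3.0, 6.0] GB

A 3.5GB/0.5GB split means one controller's memory is being treated differently from the rest, which is exactly the Frankenstein wiring above.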
-
Misunderstanding and deception are one fine line away from each other; it all boils down to intentions. After the 880M fiasco, I'm no longer willing to give nVidia the benefit of the doubt.
In any case, I am seriously p!ssed because I am very much a principles guy. I don't game at 4K and have never run into this vRAM wall, but I'm just as upset as those who are affected, because the product has been very clearly (and intentionally) misrepresented by nVidia. That is absolutely unacceptable, and it really grinds my gears. -
Even if you ran into it, your vRAM bandwidth is still doubled via SLI, so you would likely have noticed it far less. Then add that you're probably not playing the games that are that vRAM-heavy, and that on a single monitor your background vRAM usage is snapped off when you fullscreen, and you'd likely need to pass 3.5GB of in-game vRAM demand before you found an issue. So... Evolve would probably do it for ya =D
But I do understand your frustration and inability to give them the benefit of the doubt. Things should have been quadruple-checked before launch. Either way, they should have to pay for their mistake, whether it was genuine or not. -
Isn't vRAM simply mirrored in SLI, so you end up with the same bandwidth? If anything, I'd think those running multiple cards would be the most likely to notice this issue, because with a single 970 you'll run out of GPU power long before you hit the vRAM wall.
-
In AFR, VRAM is mirrored but bandwidth is doubled.
-
You don't have to run out of it. Quite simple: set Shadow of Mordor to ultra with the ultra textures at 1080p on a single 970. You'll run out of vRAM *WAY* before you run out of GPU power, as that game is fairly easy to run.
Texture size (not render resolution) is the main hit to vRAM in games. Remember, I can maintain near 60fps in Titanfall at pretty much max settings on a single 780M while using near 4000MB of vRAM:
-
Well, at least the slow segment will be 56GB/s instead of 28GB/s lol
See, I've always wondered: does SoM actually NEED/REQUIRE that much vRAM, or does it pull a Watch Dogs, i.e. allocate it but not actually use it? I ask because I've seen Watch Dogs hog 3.8GB of vRAM at 1440p downsampled to 1080p (1.78x DSR), yet I never experienced this vRAM stuttering (outside of the stuttering that came bundled with the game anyway). Even with double bandwidth on the slow segment, it's still about 1/4 of the fast segment, so I imagine it should cause some degree of hitching/stuttering/magical rainbows.
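For reference, the underlying arithmetic, a sketch assuming the 970's stock 7Gbps effective GDDR5 (the 56GB/s figure corresponds to the slow segment spanning two 32-bit controllers):

    # GDDR5 bandwidth (GB/s) = effective data rate (Gbps) * bus width (bits) / 8
    def bandwidth_gbs(data_rate_gbps, bus_width_bits):
        return data_rate_gbps * bus_width_bits / 8

    print(bandwidth_gbs(7, 224))  # fast 3.5GB segment, 7 controllers: 196 GB/s
    print(bandwidth_gbs(7, 32))   # slow segment on one controller:     28 GB/s
    print(bandwidth_gbs(7, 64))   # slow segment across two:            56 GB/s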
-
I don't think Watch Dogs' stuttering is necessarily due to vRAM when you have 3GB+, but rather due to its poorly coded texture streaming. If you recall, that patch near the end of last year fixed pretty much all stuttering on single-GPU systems, but completely broke SLI scaling.
-
Honestly, I don't know. I know for sure that people with 3GB cards who tried maxing the game got real stuttering issues, and I believe people have determined that on full ultra at 1080p it uses ~5.4GB of vRAM. I don't own it myself, mainly because as of December I am a NEET and broke, but either way it seems like they just... uprezzed everything.
Oh, and we all know Ubitoad is responsible for Watch Dogs' stuttering. Maldo's mod, which re-packs the "ultra" textures into the "high" setting, eliminated LARGE amounts of stuttering for most people; I'm fairly sure the game was coded to be terrible for people. -
Yeah, I remember us having that discussion about whether it was malice or sheer incompetence.
-
Wow, thanks for that link. Extremely interesting site; tons of articles on my to-read list now
-
Add my guides to the list and grab a plate of food. Prepare for teh knowledge incoming.
-
been through ur guides already
-
XD. I update them a lot though. Check them back every few months XD
-
Meaker@Sager Company Representative
Part of the issue is that while the GPU is accessing that 512MB of RAM, it can't access the main part. -
PHEW!
NVIDIA might like to claim there's no performance impact, but I am relieved that the 980M (and 970M, 965M) aren't structured this way.
It's quite interesting to see that the 980M has more ROPs active and more L2 cache than a high-end desktop 970, despite having fewer SMMs enabled. -
Meaker@Sager Company Representative
It's not surprising really, though. Look at that spread of core configurations; they can use practically every chip they make: faulty cache and/or shader units go into the 970, faulty shaders but still power-efficient chips into the 980M and 970M, and decently power-efficient chips with something really wrong around the core into the 965M.
-
Aaaaaaaaaaand there we have it: user testing proves the 970 is, without a doubt, a 3.5GB card.
gg nVidia -
This is shaping up to be extremely funny. I wonder what nVidia has to say about this.
-
Too soon, man. I think n=1 is still pissed.
-
Well, I'm pissed, yes, but I do find comic value in nVidia's PR doublespeak. Also have some popcorn ready to see what happens next.
Most likely nothing: people will forget about this in a month or two, and nVidia will continue to outsell AMD 3 to 1 even though their cards cost more and their drivers are just as **** now
-
I've been gone for a while. Too much hassle going many pages back. What have we learned these past few days? Anything new?
n=1 pissed? Why? Doesn't the 970 perform like it should in games? -
And this is why people shouldn't take PR numbers at face value. The average framerates above 3.5GB of vRAM usage don't necessarily change that much (as Nvidia claimed), but the minimum framerates tank. It would be nice to have FCAT numbers from some review sites as well, just to be sure.
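A toy illustration of why averages hide this (hypothetical numbers: one 150ms hitch per second of otherwise smooth 60fps gameplay):

    # 59 smooth frames at 16.7ms plus one 150ms hitch in a one-second window
    frametimes_ms = [16.7] * 59 + [150.0]
    avg_fps = 1000 * len(frametimes_ms) / sum(frametimes_ms)
    min_fps = 1000 / max(frametimes_ms)
    print(round(avg_fps, 1), round(min_fps, 1))  # ~52.8 average, 6.7 minimum

The average barely budges from 60, but every one of those 150ms frames is a visible stutter.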
-
This whole thing is a mess.
I've seen posts and tests showing close to 4GB usage with the 970 before.
The link that was posted earlier shows 4GB usage with Watch Dogs, and they didn't get any performance hit:
Investigating the 970 VRAM Issue : pcmasterrace -
Yes, I've experienced the same. But as someone pointed out, the issue only happens when the game actually NEEDS >3.5GB of vRAM, not when it's simply allocating it. Allocated vRAM =/= actively in use, which is why there are no issues when Watch Dogs gobbles up vRAM like mad.
-
The horse is probably dead, but I'm going to beat it again anyway, just in case.
So we now have definitive proof that the 970 suffers from stuttering once the vRAM requirement goes beyond 3.5GB. Have a look at the frametime graphs here:
Look at the frametimes for Watch Dogs @ 4K (bottom right). See how all those spikes occur in close succession? Yeah, that's going to manifest as stuttering. I mean, look at that graph; it's pretty much a solid block of spikes at 150ms or worse all the way through, Jesus Christ.
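If anyone wants to eyeball their own captures the same way, here's a rough sketch for counting spikes in a frametime log (the file name, one-value-per-line format, and 50ms stutter threshold are all just assumptions):

    # Count stutter spikes in a log of per-frame render times, one value (ms) per line
    with open("frametimes.csv") as f:
        frametimes = [float(line) for line in f if line.strip()]

    spikes = [t for t in frametimes if t > 50.0]  # >50ms reads as a visible hitch
    print("frames: %d, spikes: %d (worst: %.0f ms)"
          % (len(frametimes), len(spikes), max(frametimes)))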
Google Translate of the last two paragraphs: (emphasis mine)
-
I still don't see how that's definitive proof. For one, they used the poorly coded Watch Dogs as the test game.
They run Ultra at 1080p to get to 3.5GB,
then they run High at 4K to get to 4.0GB.
Who's to say it isn't due to thrashing textures in and out of RAM? The 980 shows spikes to 150ms or so as well, just not as frequently. Maybe the 970 simply can't handle the 4K load as well. Are the video cards the same brand, and is it consistent from card to card? -
Because it doesn't happen on the 980?