Yeah, I don't think we should get excited about bandwidth increases in themselves; we just need enough of an increase to support Pascal's greater core computational ability. The memory bandwidth is just a facilitator for the main party - as long as there's enough, that's all good.
-
Robbo99999 Notebook Prophet
-
Bandwidth *is* a bottleneck for a lot of cool graphics stuff that's still in the R&D pipeline (the whole deferred pipeline is heavily bandwidth-bound, and virtually all modern AAA game engines are deferred).
These days you basically have plenty of shader computation available but are always scraping for bandwidth. A lot of techniques for getting prettier graphics are known but go unused because of the poor performance that comes from not having enough bandwidth. Extra bandwidth will allow better antialiasing, better global illumination, better volumetrics, better particle effects, better materials.
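To put rough numbers on why a deferred pipeline is so bandwidth-hungry, here's a minimal back-of-the-envelope sketch; the resolution, G-buffer layout, overdraw and pass counts are my own illustrative assumptions, not figures from any particular engine:

```python
# Rough estimate of G-buffer traffic in a deferred renderer.
# All layout/overdraw numbers are assumptions for illustration only.

WIDTH, HEIGHT = 2560, 1440          # render resolution (assumed)
GBUFFER_BYTES_PER_PIXEL = 4 * 4     # e.g. four RGBA8 render targets (assumed layout)
OVERDRAW = 1.5                      # average overdraw during G-buffer fill (assumed)
GBUFFER_READS = 2                   # lighting + post passes re-reading it (assumed)
FPS = 60

pixels = WIDTH * HEIGHT
write_bytes = pixels * GBUFFER_BYTES_PER_PIXEL * OVERDRAW       # G-buffer fill
read_bytes = pixels * GBUFFER_BYTES_PER_PIXEL * GBUFFER_READS   # shading passes
per_frame_gb = (write_bytes + read_bytes) / 1e9
print(f"G-buffer traffic: {per_frame_gb:.2f} GB/frame, "
      f"~{per_frame_gb * FPS:.0f} GB/s at {FPS} fps")
```

And that's only the G-buffer: shadow maps, texture fetches, particles, volumetrics and post-processing all stack on top of it, which is why these techniques get scaled back on bandwidth-starved hardware.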
The thing is, as current GPUs are pretty weak bandwidth-wise, nobody in their right mind would dare to release a game or engine that requires a lot of bandwidth. You always try to match rendering techniques to the hardware that's common among end users.
But things will eventually change once HBM2 becomes more common (starting with Pascal and Arctic Islands). It will take time: it's not enough that some hardware is released, it needs to gain enough market penetration that it becomes worthwhile to deploy bandwidth-heavy rendering techniques.
But once that happens, there will be a massive performance and visual quality difference between GPUs that have crazy bandwidth (Pascal, Arctic Islands and newer) and ones that don't (Maxwell and older). Basically, even cheap new GPUs will be killing older flagships on new workloads (while the difference will be much less pronounced on old ones).
------------
TLDR: extra bandwidth is awesome and will make a big difference, but software will have to catch up. You can't use games and benchmarks that exist today to assess the impact of super-high-bandwidth future GPUs, as nobody is targeting these capabilities yet. -
http://wccftech.com/nvidia-pascal-volta-gpus-supported-geforce-drivers-adds-support-vulkan-api/
The latest NVIDIA drivers now support Pascal - device IDs are shown. -
Or, are we going to see the first simultaneous desktop and mobile launch? -
Considering Nvidia is just now releasing the desktop 980 for notebooks, I wouldn't expect Pascal in mobile devices for at least six months. Why release a 150W - 180W desktop 980 in laptops when Pascal will likely offer the same performance at 100W?
-
Computex 2016 (May 31st). That's all you need to know. Desktop is launched first, and mobile a week or two later. Happens every time.
The Chinese always get their hands on cards a month or two before launch. We will see benchmarks before then.
NVIDIA did launch the 965M at CES, so maybe we'll see something around then, too. -
Also, the desktop 980 performs like 30% better than the 980M? That's not much of an improvement. I'd expect the 1080M to perform at least 50% better than the 980M it will be replacing, if not more - and we are expecting it to be much more. Even at 50%, that would put it roughly 15% ahead of the desktop 980.
I imagine they'll launch 1070M / 1080M around Computex, and lower-end cards a bit earlier. -
Well, that's my whole point. They will look dumb if they release a 180W behemoth and then Pascal launches a month later at half the TDP with better performance. If Pascal for mobile were imminent, they wouldn't have bothered with this desktop 980 nonsense.
King of Interns likes this. -
Robbo99999 Notebook Prophet
J.Dre likes this. -
Definitely a niche product. It was also an experiment. If I were NVIDIA, I'd want to test the waters. Maybe it will become a profitable venture, maybe not.
It doesn't cost much for them to throw a 980 in a couple machines. They probably invested very little in it, to be honest. The companies that sell these machines assume all liability. NVIDIA is paid before products launch (upon shipment via invoicing) - guaranteed payment, regardless, just for using their product.
Pascal is the new breadwinner. They have invested a lot into Pascal, so I'm sure it will perform much better than any of us expect. But for now we don't know. It's only November, and driver support has arrived. They may plan to launch the first desktop cards as early as Q1 2016.*
*When we saw hardware ID leaks and driver support for Maxwell, they were announced within two months. -
I personally can't wait to see how it works out with the 980s. If it works, I'd hope to see a continuation of it with the more power-efficient Pascal series, or at the very least have them push up the power envelope of the mobile GPUs.
From what I have been reading, they have got MSI, Asus, Clevo and the like investing tech and money in bigger/better cooling and more capable power supplies. It's my earnest hope that some of that development transfers back into the high-end laptop market and doesn't stagnate and die with the success or failure of the 980-for-notebooks project.
Sent from my SGH-M919V using Tapatalk -
And yeah, I know companies have to profit. But if you think about it, this behavior is limiting the advancement of technology and putting profiteering above innovation and economic growth. They are essentially slowing down the development of markets in everything and anything related to computing (e.g. software, games, etc.).
But hey, who doesn't love to make a killing. I'd love to see it, regardless. The transition to mobile gaming by putting desktop performance in a mobile platform is something I support, especially since we pay much more per unit of performance than our desktop counterparts.
-
Asking for efficiency is asking nVidia to nerf performance like they did with Maxwell. All of us with Maxwell GPUs can confirm that when we remove the power limit, they draw just as much power as their Kepler predecessors - and even worse, in some cases they actually draw more...
-
Yep, I BIOS-modded the power limit to 405W on my 980 Ti, and in intense areas in BF4 (single player, yeah, shoot me), I've seen the GPU hit its power limit and beg for mercy.
In most games I'm hovering around 85% power limit, so about 344W. This is with a watercooled 980 Ti btw, so I'm already shaving off around 20W just because the GPU runs so much cooler; before, when I was on air, the GPU was constantly hovering around 90% power limit. And people thought the 290X was power hungry LOL. (Obviously the 980 Ti has a lot more performance and its perf/watt still kills the 290X, but just because it's efficient doesn't mean it doesn't suck down power like crazy.)
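For anyone wondering where those wattages come from, the power-limit percentage is just read against the modded 405W ceiling; a tiny sketch using the figures from this post:

```python
# Power draw implied by the "power limit %" readout against a BIOS-modded ceiling.
# The 405 W limit and the 85% / 90% readings are the numbers quoted above.

MODDED_LIMIT_W = 405

def draw_watts(power_limit_pct: float) -> float:
    """Convert a power-limit percentage reading to watts."""
    return MODDED_LIMIT_W * power_limit_pct / 100

print(f"Watercooled, ~85% limit: {draw_watts(85):.1f} W")  # ~344 W
print(f"On air, ~90% limit:      {draw_watts(90):.1f} W")  # ~364 W, i.e. ~20 W more
```
-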
And the perf/watt dynamic flips upside down when context switching is used on a 290X, so nVidia needs to keep paying devs to break AMD GPU performance...
But at least you proved my point... what was the wattage before? 270W and throttling? (Drawn from the wall.) -
-
-
King of Interns Simply a laptop enthusiast
Why not square up fairly against AMD and actually compete shoulder to shoulder, as they should?
Too many tricks: the power management trick, the high-tessellation trick, etc... I really don't want to buy their cards, BUT AMD are nowhere to be seen! -
-
-
-
King of Interns Simply a laptop enthusiast
Lol! I guess people want AMD dead and buried. Then Nvidia can do completely as they please.
Well even more so... -
-
The default BIOS gives 275W, and up to 300W with the 109% power limit. I can say with confidence you NEED 109%, i.e. 300W, just to maintain the factory OC (1328/7100) in some games - that's how ridiculous it is. I was genuinely shocked by how much power it could suck down, even knowing that the Titan X throttles at stock:
I think it's pretty clear nVidia pushed waaaaaay past the optimal point on the power/perf curve to get the performance we see out of GM200. I know we can make GK110 draw 400W+ too, but that's with extreme OC like running 1.35V+ and over 1400MHz core, which frankly you could only do on Classified/Kingpin cards anyway unless you wanted to literally set your card on fire. Here I'm simply using an MSi 980 Ti Gaming (not Lightning) and I'm not even running a crazy OC -- 1530/7800 @ 1.25V is pretty tame by any standards.
IMO GM204 was more or less still in the optimal zone on the power/perf curve (if a bit close to the edge), but methinks to get GM200's performance there was simply no choice but to go completely outside that optimal zone, hence the power consumption.
The throttling algorithm in Maxwell is extremely complex; apparently there's a hidden hard throttle at 65C baked into the BIOS. I say "hidden" because some voltage sliders were missing in the GM200 BIOS when looked at through MBT (Maxwell BIOS Tweaker). I should've bookmarked a particular post, but basically the 2nd or 3rd slider is for "throttling voltage", and that's the one that controls what voltage the card will throttle down to once past 65C. That slider is extremely important because apparently it serves as an override switch, and without it, any volt mods you do in the BIOS end up being useless.
And just to make things worse, apparently there's a Gigabyte 980 Ti-specific throttling bug that occurs once you overvolt the card past a certain limit. I think it's pretty clear there's a very complex set of throttling rules for Maxwell, and imperfections in the implementation manifest as bugs like those mentioned above.
Don't even get me started on the SLI voltage bug if you have cards with different ASIC quality (a 7-8% difference is enough to cause it). And the only way to fix the bug is to mod the BIOS and stop this voltage throttling nonsense. Of course. -
King of Interns Simply a laptop enthusiast
I am not rooting for AMD; I only care that they survive, to entertain the slim hope that they might be able to keep Nvidia in check... -
I think AMD's problem is they're always making stuff "for the future" because "the future is X" rather than making products for the now. The Fury GPUs with HBM are a good example, although there they might've needed the power savings from HBM and its memory controllers in order to make that ridiculous 4096-shader chip, so at least there's some rationale.
-
Trouble is, by the time DX12 games become mainstream, the Fury X will have been succeeded by one, if not two, new generations of flagship GPU.
Sent from my E5823 using Tapatalk -
But AMD will also have at least a year or two of experience with HBM technology to be able to use it better when it comes to implementation.
Though Nvidia does have the financial resources AMD lacks, so it's possible they might be able to compensate... or not. -
-
Arctic Islands is supposed to sport an entirely new architecture, in addition to being twice as energy efficient as the current Fury line and built on a new manufacturing process.
So... no rebrands this time around (although one cannot really call the Fury line rebrands when you take into account that they managed to get very close to Maxwell in energy efficiency and performance).
-
-
Sent from my SPH-L720 using Tapatalk -
Hexus released another article (here) about Pascal, but it's just a summary of what we know.
AMD's Greenland GPU (14nm) tapes out, targeting 2016 to compete with Pascal. - Source -
PrimeTimeAction Notebook Evangelist
-
Desktops will be seeing up to 16GB of VRAM and 1TB/sec memory bandwidth
http://vrworld.com/2015/11/16/nvidia-unveils-pascal-gpu-16gb-of-memory-1tbs-bandwidth/
If they go the same route with bandwidth on mobile, it's definitely the death of current MXM tech.
Why would it be the end? Doesn't mobile PCIe have the same or similar specs?
Sent from my SGH-M919V using Tapatalk -
They're probably going to milk mobile and use GDDR5X for a year.
MXM may stick around with GDDR5X. We don't know, yet. -
King of Interns Simply a laptop enthusiast
Mobile parts are always slower than desktop. MXM might stretch until Volta... or perhaps, like when the switch from MXM 2.1 to MXM 3.0 happened, Pascal and Greenland will come in two variants.
Anyway, what are the limitations of MXM for a single-card configuration? At least PCIe 2.0 x16 still has plenty of bandwidth, let alone PCIe 3.0. I read a review where a single GTX 980 only saw a 2-3% increase in performance from the better latency of PCIe 3.0 over 2.0 (rough link numbers sketched below).
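For context, here's a quick sketch of the per-direction PCIe link bandwidth behind that claim, using only the published signalling rates and encoding overheads (nothing measured):

```python
# Approximate usable PCIe bandwidth per direction, from signalling rate and encoding.

def pcie_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Return approximate one-direction bandwidth in GB/s."""
    rates = {
        "2.0": (5.0, 8 / 10),     # 5 GT/s per lane, 8b/10b encoding
        "3.0": (8.0, 128 / 130),  # 8 GT/s per lane, 128b/130b encoding
    }
    gt_per_s, efficiency = rates[gen]
    return gt_per_s * efficiency * lanes / 8  # bits -> bytes

for gen in ("2.0", "3.0"):
    print(f"PCIe {gen} x16 ≈ {pcie_bandwidth_gbs(gen, 16):.1f} GB/s per direction")
# ~8.0 GB/s vs ~15.8 GB/s -- both tiny next to the hundreds of GB/s of on-card
# VRAM bandwidth, which is why a single GPU barely notices the difference.
```
-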
I can't see them just dropping MXM. As well, if I recall correctly (not 100%), I'm fairly sure Nvidia said all Pascal will be HBM2.
Sent from my SGH-M919V using Tapatalk -
-
GDDR5X is more than enough for us gamers.
4GB HBM1: 512GB/s
8GB GDDR5X @ 1750MHz @ 256-bit bus (Mobile Pascal): ~450GB/s
8GB GDDR5 @ 1750MHz @ 256-bit bus (Mobile GTX 980): ~220GB/s
Twice the bandwidth we have now. Absolutely nothing to complain about and frankly an amazing increase.
Nvidia might save HBM2 for GP100, meaning the cards oriented toward computing and enterprise workloads. Those users can never have enough bandwidth. HBM2 will be scarce anyway, so it's much smarter to save it for the cards that actually need it.
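As a sanity check on those numbers, peak memory bandwidth is just the effective per-pin data rate times the bus width; a minimal sketch using the clocks and bus widths quoted above (the doubled data rate for GDDR5X and the HBM figures are the commonly quoted specs):

```python
# Peak memory bandwidth = effective data rate (Gbps per pin) * bus width (bits) / 8.

def bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

# GDDR5 at a 1750 MHz base clock moves 4 bits per pin per clock -> 7 Gbps/pin.
gddr5  = bandwidth_gbs(7.0, 256)    # ~224 GB/s (mobile GTX 980)
# GDDR5X doubles the prefetch, so roughly 14 Gbps/pin at the same clock.
gddr5x = bandwidth_gbs(14.0, 256)   # ~448 GB/s (hypothetical mobile Pascal)
# HBM1: 1 Gbps/pin across a 4096-bit bus (four stacks).
hbm1   = bandwidth_gbs(1.0, 4096)   # 512 GB/s
# HBM2: ~2 Gbps/pin on the same 4096-bit bus -> the ~1 TB/s desktop figure.
hbm2   = bandwidth_gbs(2.0, 4096)   # 1024 GB/s

for name, bw in [("GDDR5", gddr5), ("GDDR5X", gddr5x), ("HBM1", hbm1), ("HBM2", hbm2)]:
    print(f"{name:7s} {bw:6.0f} GB/s")
```
-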
King of Interns Simply a laptop enthusiast
Factor in the OCing usually possible with VRAM and it ends up at HBM1 bandwidth levels! -
We don't know how much overclocking headroom GDDR5X will have, though - but yes, double the bandwidth is plenty.
-
Yup, OC headroom with HBM1 on the Fury line is already pretty tight...
Sent from my Nexus 5 using Tapatalk -
I'm sure it'll be a nice improvement over Maxwell. It just won't be what we were all hoping for. They seem to be purposefully dialing back everything on the mobile front to maintain the "performance edge" desktops have always had, and are now introducing desktop parts into laptops, profiting twice from them.
Money, money, money. Team green is green indeed. Shareholders are priority #1. Stock prices are the highest they've been in years. -
Wonder if they'll just release a "regular" mobile flagship / 980M successor positioned right in between the 980M and 980 (30-35% is a large enough gap to fill with a GPU) and only later go all out with the real flagship as a mobile 980 successor...
After all, that would exactly mirror their desktop lineup, with the "mainstream" 980 flagship and the Ti/Titan "high-end" flagships.
Sent from my Nexus 5 using Tapatalk -
If they do, I'm done. New hobby for sure. Been waiting more than a year for the next laptop.
Don't think they will. It would be hard to be less than 50% over Maxwell. -
Um... when was the last time we had a larger-than-50% jump in one GPU generation?
The closest that comes to mind was from the 6970M/485M/580M to the 7970M/680M, and that was around 50% IIRC...
-
Yeah but look what they achieved with Maxwell without even a die shrink. Pascal is both a die shrink and a new architecture.