Voltage variance + heat, maybe? Less heat = less power drawn (even if only a little) and the voltage variance is unstable for readings like that anyway.
Aliens.
Aliens and mad cow disease.
-
-
killkenny1 Too weird to live, too rare to die.
The answer to this mystery:
-
Seen a rant about AMD cards being able to heat up an entire Siberian village.
I'm beginning to think it might have been right.
How does it consume that much power?... And it is not up to 1080 levels... sad. -
Mhmm, and how come the 1080 (a 180W GPU) has a cooler twice as large as the R9 Nano's (a 175W GPU) and still runs hotter? Miracle, yes?
-
That isn't that difficult to explain. Heat generation has a lot to do with power consumption, but there are other factors that influence it. And if memory serves me well, the R9 series was liquid cooled, while the Nano variant is noisy. So very noisy. Not to mention that it not getting hotter doesn't mean anything in reality, given that AMD could just have placed the temperature sensor not directly on the chip, but near it, and then it would report lower temps.
And I don't want to defend nVidia either. I think both have done a lot of things that aren't that good, but nVidia nailed it performance-wise with the 1080. AMD provided 80% of that for $200. Now we have both an amazing variant and a cheap one. I'm happy with this development.
-
Actually, only the R9 295x2 and R9 Fury X had official liquid coolers. The rest relied on fans.
-
Yea something is off there...R9 nano actually uses 75 watts
Sent from my Nexus 6P using Tapatalk -
Gov. Rick Perry Notebook Consultant
Movie/Web/Idle consumption may be 75W, but no way it's 75W on load. -
75W! Lololol! Nano is 175W, 75W would have been the ultimate laptop chip...
-
Guys, please check out the E3 2016 thread @ Mr Najsman started with a list of all the E3 2016 Streaming / Events here:
http://forum.notebookreview.com/threads/e3-2016.792687/ -
Some more info about GTX 1060 (seems to be coming with 192-bit memory bus and 6 GB of GDDR5):
http://videocardz.com/61002/nvidia-geforce-gtx-1060-to-feature-6gb-192-bit-memory
Performance is supposed to be between GTX 970 and 980.
I guess that's the GPU that will replace GTX 970M (in slim notebooks like MSI GS63 / Razer Blade / Gigabyte P34/P35), whichever name they will use - GTX 1070M or GTX 1060 for notebooks. -
Do you even check what you are writing? The noise levels are the same! Unlike the radiating area, which is 2x in favor of the 1080. Pascal is nothing like what the brochure wants you to believe. Even the performance is the result of some cheap tricks, and I'm waiting for the day when it will be revealed (if ever).
-
Can you elaborate on what these cheap tricks are? I'm not being argumentative, genuinely interested.
Sent from a 128th Legion Stormtrooper 6P -
Ionising_Radiation Δv = ve*ln(m0/m1)
Well-said. Pascal is to Maxwell what Broadwell/Ivy Bridge was to Haswell/Sandy Bridge - literally a die shrink of the same architecture with minor improvements. The noticeable "improvement" of the Pascal GPUs is due to the extremely high stock clocks (1.7 GHz on the 1080 vs something like 1.2 GHz on the 980?) and higher TDPs than their Maxwell equivalents. Clock both the 980 and the 1080 equally, and see how well the two perform - there'd probably be barely any improvement except in thermals.
@hfm - here's the answer to your question. TL;DR: The Pascal GPUs have significantly higher stock clocks and higher TDP requirements than their equivalent Maxwell counterparts to bring about such large performance improvements. -
So the 1080 has a higher TDP than the 980? That's counter to everything I've read. Being harder to cool due to the reduced surface area of a smaller die - that I could see, because physics.
Sent from a 128th Legion Stormtrooper 6P -
Ionising_Radiation Δv = ve*ln(m0/m1)
Yes, it does. Even the nVidia GeForce website says so:
GTX 1080: 180 W TDP
GTX 980: 165 W TDP
15 W isn't necessarily such a large difference, but then again, ULV/ARM SoCs can fit into that gap and eke out some decent performance. Of course, the 1080 beats out the Titan X which has a TDP of 250 W, but comparing apples to apples, we need to keep clocks the same for all cards to see how well each GPU truly performs with respect to each other. -
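To put that gap into numbers, here's a quick back-of-the-envelope Python sketch (the 180 W and 165 W figures are the rated TDPs quoted above; the rest is plain arithmetic):

    # Rough check of the rated TDP gap between the GTX 1080 and GTX 980
    tdp_1080 = 180  # W, rated TDP as listed on nVidia's spec pages
    tdp_980 = 165   # W
    delta = tdp_1080 - tdp_980
    print(delta)                            # 15 W absolute difference
    print(round(100 * delta / tdp_980, 1))  # ~9.1% relative increase

That ~9% figure is the same one quoted in the reply below.
-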
Ahhh. OK. I could have sworn I saw the opposite. The performance improvement is definitely higher than a 9% TDP increase.
EDIT: it was the 980ti I was thinking of. It's definitely beating that as well with a large TDP decrease. I still don't see the "cheap tricks".
Sent from a 128th Legion Stormtrooper 6P -
Ionising_Radiation Δv = ve*ln(m0/m1)
Of course, but the 42.5% higher clocks (1733 MHz vs 1216 MHz) probably account for half the performance increase. If both GPUs' clocks were kept the same, I'd be willing to bet that the 1080 would be approximately as good as or even worse than the Titan X. That would be disappointing, and that's the real story: Pascal is a disappointment. As I've already said before: people who purchased the FEs early at extra cost just jumped on the hype train and spent their cash on a series of GPUs that aren't even as good as they were supposed to be. We were supposed to get HBM2 by Pascal, but it's nowhere to be seen.
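Just to put rough numbers on that clock argument, here's a back-of-the-envelope Python sketch; the ~70% figure is an assumed average uplift (roughly what reviews and this thread have been quoting), not a measurement:

    # Hypothetical clock-normalized comparison, GTX 1080 vs GTX 980
    clock_1080 = 1733.0   # MHz boost clock, as quoted above
    clock_980 = 1216.0    # MHz boost clock
    perf_uplift = 1.70    # assumed ~70% average performance gain

    clock_ratio = clock_1080 / clock_980        # ~1.425, i.e. +42.5% clocks
    per_clock_gain = perf_uplift / clock_ratio  # ~1.19, i.e. ~19% at equal clocks
    print(round(clock_ratio, 3), round(per_clock_gain, 3))

Under those assumptions, most of the uplift is clock speed; the per-clock improvement is modest.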
To me, Pascal was everything Maxwell was supposed to be, minus a bit. There's still no sign of consumer NVLink, and to me, the hype train is fast turning into a sharp corner where it's going to get de-railed.
Hell, the GTX 970 made a bigger bang (in terms of price/performance and performance/power ratio) than the 1070. All in all, a disappointment. I won't be buying the 10XX series of GPUs. On the other hand, to a person who's never used AMD GPUs, the RX 480 looks like a hell of a deal (970 to 980 performance for 950/960 prices? Sign me up!) -
That perf jump over the 980 with a 9% TDP increase can't be explained away by cheap tricks. (Now I'm debating).
Sent from a 128th Legion Stormtrooper 6P -
Ionising_Radiation Δv = ve*ln(m0/m1)
@hfm - with all due respect, did you not read my first post, or the first line of my previous post? -
Good idea, let's get someone to overclock a 980 to 2GHz core and 11GT/s memory and see how they compare. With air cooling of course.
-
Robbo99999 Notebook Prophet
Haha! (The point that's been overlooked!). (Like you) I don't see any 'cheap tricks' with Pascal; the performance improvements are real, and it doesn't really matter whether they managed to do that with higher clocks than Maxwell or not. I wish they'd given us bigger chips with more cores and maybe slightly lower clocks, which would mean greater performance & more overclocking headroom, but it's still a good improvement over Maxwell. -
400 pages later and we still don't know anything about mobile Pascal. Not for certain.
I'm not making the Volta thread, so...
-
People know things already. They just can't say anything.
Go on.. make a Volta thread.. I know you want to
-
PrimeTimeAction Notebook Evangelist
-
Of course. For example, the very nail that coffined nGREEDIA for good for me - a brief story about GameSux. When the news broke back in the day (pCARS and The Witcher III), I was a couple of weeks away from getting a 780m. Well, I made a quick guess at where things were heading and, lo and behold, a year later I was more or less right. I'm not saying I knew everything right then and there (I wouldn't have guessed that they'd go so low as to cripple their own hardware), but I guessed half of it. Have you seen which titles they used for the PowerPoint? I bought Rise of the Tomb Raider just because I felt obligated, since I paid almost pocket money for Tomb Raider 2013 and it was a great game, so I thought the developers deserved it. Well, I do understand that my GPU is pretty old, but still, ROTTR runs at really low resolution (lowest) and really low settings (again, lowest), while TR2013 ran @1920x1200 High. Needless to say ROTTR looks like ****, while TR2013 looks close to, if not the same as, what I can see from gameplay videos of ROTTR at high settings. What happened? GameSux, that's what. Why am I so sure? Because the engine is the same - Foundation. I'm not the one saying it, the developers themselves confirmed it.
The other trick, the one I think will go unproven for a while and which I already mentioned earlier in this thread - colors. Almost a decade ago I was a Symbian user; I had multiple Symbian phones, and one of them was the N93, which had a TI OMAP2420, just like the N95. We compared SPMark (a reworked 3DMark for Symbian) benchmarks, and the N93 always scored better, no matter the firmware - of which the N95 received quite a few, while the N93 got only one. In all cases the score was ~30% higher. Again, the hardware was exactly the same. The difference - the display. The N95 came with a new and shiny display. OK, but are you 100% sure about it? Well yes, because there was a trick to run SPMark in the background, and the result... you could probably guess it - the same within margin of error, and higher than what the N93 did, let alone the N95. So is it relevant to PCs? I think yes. Is a ~30% performance hit relevant? I really don't know, but the fact is that Maxwell has color compression and AMD tried it as well; not sure if it will be featured in Polaris, but they do claim HDR support. So it would be either tweaked, or removed altogether.
The third - power management. I think we all know about this one. Why is the 1080 NOT 180W? Same reason the 980M is NOT 100W - they quote a fictional average that has little to do with the truth.
There's probably a fourth and a fifth, knowing nGREEDIA, who knows.
They're also marching to ditch MXM - made by them, it was an open standard, they made it a foundation or something, and now they're going to ditch it. Maybe that's the breaking point: they realized that money can't be made from open standards. I still wonder to this day, and will continue to do so, how they came up with MXM and how they turned into whatever they are now. Anyway, as I said, I was willing to give them a chance, but after GameSux they just went through the floor and reached for the lava, hopefully burning in a few years. -
hearing about this just makes me wanna barf
I hope they actually **** up even harder with their marketing tricks so mainstream ppl actually notice it.
-
-
Your Nvidia hate is so juvenile. All of these words and you literally stated zero facts or information that can be corroborated.
You're the Donald Trump of NBR. So many words, yet so vapid and devoid of reality-based thinking. -
Donald Trump of NBR.
Love it.
-
killkenny1 Too weird to live, too rare to die.
Make laptops great again! -
Maxwell power management numbers can be inaccurate. Maxwell TDP is very much a ballpark figure at best, depending on the vBIOS/load.
-
Higher clock doesn't matter, the bottom line is that it's getting insane performance increases for roughly the same TDP. That's not cheap tricks, that's a straight up better piece of metal. So we can't increase clock speed with roughly the same TDP for massive gains and be legitimate? How would you architect it and launch it at a price someone would actually pay at scale?
Bottom line, watt for watt it blows Maxwell out of the water. That's a better GPU. It's like no matter what they do you won't be happy with it. Why would you just wallow around this hobby in disgust and misery? Pascal is the best we've yet seen by far; games are going to fly if you're willing to fork out for one. And we have AMD coming in with the 480 for a fantastic price. It's a good time to be someone who likes to unwind with a nice game.
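For what it's worth, here's the watt-for-watt point as a rough Python sketch; the ~70% uplift and the rated TDPs are assumptions taken from earlier in this thread, so treat it as illustrative only:

    # Hypothetical performance-per-watt comparison at rated TDP
    perf_980, tdp_980 = 1.00, 165.0    # GTX 980 as the baseline
    perf_1080, tdp_1080 = 1.70, 180.0  # assumed ~70% faster at a rated 180 W

    ppw_980 = perf_980 / tdp_980
    ppw_1080 = perf_1080 / tdp_1080
    print(round(ppw_1080 / ppw_980, 2))  # ~1.56, i.e. roughly 56% better perf/W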
-
6 teraflops of compute, 8-core CPU, internal AC adapter, full 4K and VR - the XB1 S. There goes my poor 980M sitting at 3.2 TFLOPS, but teraflops don't show the entire picture - gotta see how games run on that, & Sony will soon unleash their beast.
Now a true 1070M would rekt that, even if it's a gimped DT 1070, and if a 1080-based MXM card gets released it'd burn that abomination to ashes. -
Ionising_Radiation Δv = ve*ln(m0/m1)
Not necessarily. The 1080, in real use cases, is at best less than twice as powerful as the GTX 980 (70% improvement) - not exactly Moore's Law, is it? I still wager that if not for the insanely high clock rate, we wouldn't get such a performance boost.
HBM is also missing.
Also...
More like they can do way better, but they want to milk consumers by artificially limiting performance. Plus, I'll be happy if the GTX 1060 actually out-performs a GTX 980 instead of sitting in between the 970 and 980. As you said, I might as well get an RX 480 for that.
EDIT:
You also said:
Nvidia milks customers here too. They re-named a reference card, added a $100 surcharge and people still bought the GTX 1080 FE. Nvidia's mid-range has long been neglected - we just need to look at the performance gap between the X60 and the X70 series that need not be there.
On the mobile side, the X50M and the X60M GPUs perform within 10% of each other, and the gap gets ever narrower further down the GPU line. Not everyone wants to (or even can) pay top dollar for a GPU, and sometimes you just can't help but feel that something better could be done.
I hardly need even mention that nVidia has a monopoly in the discrete GPU industry, so it's going to behave like any other private monopoly - profit-oriented. -
Proof they are artificially limiting performance?
Of course they are. EVERY company (AMD, Apple, Burger King, Tesla, etc.. etc..) is profit-oriented. They aren't going to sell something at cost, and they aren't going to leave money on the table for no reason. -
Ahem, having a Gx104 part as the x80 card? Charging 550-700 USD for what used to be the 200-300 USD range is a bit... aggressive...
-
This is not 260M/280M/285M stuff; the gap is large and the price will be low. 1300 CAD / 1000 USD for a 980M-level laptop just by waiting a couple of months? Count me in.
-
Well, you need to understand something else. The TDP listed is "rated" TDP, not "actual" TDP. The 1080 can hit power target limits at stock with... what was it? 110%? 120%? Power limit target via overclocking software. This means that either 198W or 216W as its limit (I really don't remember if it's 110% or 120% the max limit for the stock cards, sorry) is not enough to keep the card from throwing throttle flags in some games, at stock. On the other hand, the 165W 980 to my knowledge never had a TDP limit causing throttling in games on launch, far less increasing it from stock. EXCEPT when forcing voltage to a constant with a custom vBIOS for the 980, at which point it easily drank 240W+, consuming more power than Titan Black cards, but since that isn't stock I can't count that (despite it being more stable). So, while GP104 has a higher TDP target than GM204, it is much more often limited by power target. So the difference is greater than just 15W in the real world, so to speak.
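To make the power-target math in the paragraph above explicit, here's a tiny Python sketch (the 110%/120% sliders are the uncertain values mentioned, not confirmed limits):

    # Hypothetical power-limit ceilings for a 180 W rated card
    rated_tdp = 180.0           # W, GTX 1080 rated TDP
    for limit in (1.10, 1.20):  # possible maximum power-target settings
        print(round(rated_tdp * limit), "W")  # 198 W and 216 W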
On the comparison between the 980Ti and the 1080, that can't be compared properly. That's GM200 vs GP104. GP100 (or GP102, since they probably don't want to sell consumers double-precision capable cards because $$) will be the better comparison for 980Ti/Titan X. Ironically, forcing voltage to a constant for GM200 I've seen reports of less power consumption than with stock vBIOSes; rather the opposite from GM204. Weird, that.
Well, some of it is. There's a reason that in benchmarks there's often a 61% boost, while in some games (namely Gameworks-using ones) the difference is closer to 70%, or even the 80% I've seen reported in The Witcher 3 once. It's the same reason why Maxwell is much better in said gaming scenarios than Kepler is: the tessellation engine. As more people started using Gameworks in games, I've noticed it relies on two technologies: tessellation and PhysX. Almost every single bit of technology that nVidia has shown for Gameworks since Maxwell was announced has been simple tessellation. Fallout 4 God Rays? Tessellated light beams. Witcher 3 Hairworks? Tessellation at extreme (64x) levels. Maxwell's tessellation engine was much stronger than Kepler's, and by and large blows everything AMD has in all their cards to date out of the water. Pascal seems to be much better than that even, so even though the raw number-crunching ability of the cards may only differ by 60%, because it tessellates more easily, it runs even faster than it normally would. This is why Gameworks is considered a "sabotage" of all cards that aren't their current-gen line, and I honestly agree to some degree. There are other ways (like TressFX for hair; OpenCL-based... which is open source) to do things. But nVidia wants to monetize everything, and if they can't, they'd rather just remove the feature. Their current treatment of SLI, where they couldn't sell NVLink to consumers and refused to use XDMA for their card designs despite having enough time to design Pascal to use it, is more proof of that. They discard 3-way and 4-way SLI instead of trying to fix the root cause (not enough bandwidth between GPUs) of problems like frame pacing or games needing previous-frame frame-buffer data.
As Pascal populates the market more, you're going to see games using gameworks come out where the 1080 will suddenly be 100% faster than the 980 instead of the 61% it should be, because they'll just raise all the tessellation in Gameworks. They did it before, and while this is still speculation, WILL do it again if they can get away with it. This is why GTX 960s performed like 780s in Project CARS, despite being significantly weaker cards... pure ability to handle the innate tessellation (and possibly PhysX) better, since the whole game is designed around PhysX and tessellation.
Unfortunately, however, this is less "sabotaging old cards" (as in, not reducing performance), but more of a "forced obsolescence". It's still underhanded, but as I said earlier in this thread (somewhere), I've NEVER noticed significantly worse (if at all) performance in any game I've ever played after a driver update, and for the most part, excepting Dark Souls 3 and Doom 2016 (which they acknowledged and fixed in a driver shortly after acknowledging) the same seemed to be true of Fermi cards as well. It's just that by forcing obsolescence, they're going to make people feel like updating as soon as possible is a very good idea, rather than people who'd rather wait and keep cards for a few gens before making a bigger, more sensible update than tossing a few hundred every year for 30% more FPS.
This is because the developers designed the game when PC tech was much stronger than the time when TR 2013 came out. It's unoptimization creep, and it happens all the time. Essentially, people pick a point where they figure everyone has <X power> as a low end point for their cards, and design for stronger cards. It does not matter how weak consoles are. It does not matter that going by Xbox 1, everybody with a GTX 570 or better should get equal or better performance to Xbox 1 and PS4 titles. Those cards are too old, and thus they don't care about the level of power they have. It's why games that are on Xbox 1 at "1080p 30fps" require much stronger hardware to get the same performance on a PC. Like, GTX 660 min spec, for 720p 30fps medium-low graphics, but Xbox 1 which is about 67% as strong as a GTX 660 will run 1080p 30fps "high". It's stupid, but this isn't directly nVidia nor AMD's faults. This is lazy developers and lack of desire to optimize. PC is usually always an afterthought for these people. A month or so before being ready to wrap up a game's DEVELOPMENT itself, they usually start (yes, START) working on the PC port. There's no time, no budget, no manpower. Nobody cares. And they get the sales anyway; it doesn't matter how much bad press they have if they get their sales, and there'll always be those people who'll defend 10 hour single-player $60 AAA games that run like crap with mediocre graphics hidden by good lighting effects and high texture resolutions so things look sharper (though not better). It is what it is.
Another example is Sleeping Dogs (2012; developed around GTX 580 days) versus Sleeping Dogs: Definitive Edition (2014, developed around Maxwell's launch). SD:DE had a couple of improved effects, but mostly looked the same otherwise. Performance was cut easily in half, or more, over SD. Why? It was developed when stronger hardware was available. They could have simply taken the PC version from before, updated it with new tech and better controls, ported it to consoles, and been done with it. But they refused to do so, and as a result optimization tanked hard, because they apparently re-worked the PC optimization.
The Gears of War collection that's on UWP and X1 is another example. X1 runs 1080p 60fps "high" of the game, which is essentially a ported and touched-up version of the original PC version from 2006. Xbox 1, being somewhere around maybe a 560Ti to 470 in performance, understandably is able to get 1080/60 out of it. My 280M used to get 30-50fps maxed at 1920 x 1200 back in the day. What does PC run like? GTX 970s not able to hold 60fps. For a reskin of a 2006 title. Optimization? What's that?
For this, and this alone, don't bother blaming nVidia or AMD.
It's an extremely crappy and underhanded business model to attempt, but people buy it, so they'll keep doing it. I can't count it as gimping any cards, though, despite how much I hate it. -
Ionising_Radiation Δv = ve*ln(m0/m1)
Great points throughout. The Gx100 chips were meant for the 70 and 80 series; now Nvidia has relegated those to the Ti and Titan GPUs and jacked up the prices twofold. The Gx104 was supposed to be the mid-range GPU, but now it's high-end, and many of the mid-range and lower-end GPUs are all re-brands.
Sent from my HTC One_M8 using Tapatalk -
I think I understand the problem now. I use the hardware as a means to get to the thing I really want to do, which is play the games. I don't really care at a lower level why the 1080 is so much better at it than the 980 or why it uses less power. It just plain runs them better. I don't care that I'm using an 870M as long as my gaming experience is still good, which it is. The games are more important. Having 5 fewer lbs on my lap when I'm chilling on my recliner with a game is more important. I care about the games more than the hardware itself. I don't care what my 3dm11 score is as long as Doom runs at an acceptable framerate (which is completely just a personal decision). Obviously, decisions nVidia and AMD are making influence how my games look and play, but since I'm still happy with my 870M and most people complaining about Pascal would probably never be, I just care about different things, and that's OK. I'll bow out of this thread as my perspective is different.
Sent from a 128th Legion Stormtrooper 6P -
If you're happy with what you've got, more power to you. Considering I only get 1 GPU half the time in games, you and I are in the same boat with respect to performance =D
-
Still remember how Crysis 2 tessellates the water under the level? Yeah... AMD always sucked at tessellation.
Yeah, I am gonna test dual GTX 1080 SLI, not gonna actually use it. I doubt it is impressive. -
That's part of the plan, and to do that I don't need vaporware.
I do understand the laziness, it's just that all GameSux titles show pretty similar behaviour across GPUs (not just AMD ones), as you already stated. Also my favorite example and my biggest gripe - pCARS. Let's not forget that, unlike the other titles, this one was community driven - the game ran perfectly fine no matter the hardware, as long as it met the requirements, that is. Then shortly before the actual release, they got a flat tire.
GTX-1080 FE usually has a maximum of 120%.
FWIW I measured current drawn on the 12V rail with 100% and 120% limits being hit and it worked out to 150W and 180W. Unfortunately nothing to compare with so don't know if that's the norm or not but I originally presumed the 180W was for 100% limit.
Of course these limits can be bypassed one way or another, so maybe it's a moot point anyways. -
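A quick sanity check on those measurements as a Python sketch; this assumes the 12V-rail readings above roughly reflect total board power, which may not hold:

    # If ~150 W is drawn at 100% PT and ~180 W at 120% PT, the slider scales as expected:
    draw_100, draw_120 = 150.0, 180.0
    print(round(draw_120 / draw_100, 2))  # 1.2
    # But if the rated 180 W were really the 100% point, 120% should have allowed:
    print(round(1.20 * 180.0))            # 216 W, not the ~180 W actually measured
    # which is why the 180 W figure looks more like the 120% ceiling in this test.
-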
Not related to Pascal.
Not bad for a 35W GPU.
Around the same score as the GTX 860M on Fire Strike. -
Yeah 120% sounds about right.
If the card is 180W rated TDP and 120% PT maxes it out at 180W then that's extremely odd... probably a crappy broken vBIOS as per normal.
You're right about that, though. If people want to bypass the limits there are ways. Though it gives enough of a reference for what mobile can handle, which while still able to bypass power limits, values lower power draw greatly. -
What is the double precision that is inactive in the 1080 and consumer-level cards? I mean, what would I use it for, if I needed it?