Actually it appears to be even better by about 20% - and the 860M is a 75 W GPU. I'm liking it.
-
Ionising_Radiation Δv = ve*ln(m0/m1)
-
Is there any indication of when these Pascal GPUs will be released?
I don't know if I should wait or buy a new laptop now. I'll be starting school again next month and my current laptop is still (wait for it, lol) a Core 2, 670 go?
Can't even play any of my x265 vids without major issues.
Should I try to wait for these Pascal cards, or for Kaby Lake (maybe that's too far off)?
I play a lot of HEVC, so I was thinking of maybe dropping down to a 965M (maybe I'm wrong, but I think that's the one with a dedicated hardware decoder). But the 970M should handle it, right?
Does Pascal have any HEVC hardware decoding?
I would love a laptop with a real GTX 980 in it, but that's a bit out of my budget, so I may be forced to go with a 970M-based (Clevo) machine with a 6700HQ.
Also, if anyone knows: I still own an i7-920 OC'd to 4.0 GHz with a GTX 770, and a few x265 vids stutter at 10-bit, so I'm hoping for a bit better performance than my desktop, or at least the same, since I can turn off 10-bit and then pretty much everything plays and games are OK too.
btw nice post D2 Ultima, I had always wondered about that sort of thing but you explained it perfectly -
Xbox One going for 6 TFLOPS and buzzwords like "uncompressed pixels": how much does that really matter for 4K?
FP64 performance doesn't matter for gamers. It's more useful for simulation and rendering, where double precision matters a lot; FP32 is what games use, while FP16 is mostly for deep learning.
The change in FP64 strategy started after GK110, which had a big die, strong FP64 performance, and was used across the whole spectrum (GeForce, Quadro, Tesla).
In the search for absolute performance per transistor, Nvidia revised the way how their Streaming Multiprocessor works. When we compare GM200 versus GP100 in clock-per-clock, Pascal (slightly) lags behind Maxwell. This change to a more granulated architecture was done in order to deliver higher clocks and more performance. Splitting the single Maxwell SM into two, doubling the amount of shared memory, warps and registers enabled the FP32 and FP64 cores to operate with yet unseen efficiency. For GP104, Nvidia disabled/removed the FP64 units – reducing the double-precision compute performance to a meaningless number, just like its predecessors.
src
For GM200 NVIDIA’s path of choice has been to divorce graphics from high performance FP64 compute. Big Kepler was a graphics powerhouse in its own right, but it also spent quite a bit of die area on FP64 CUDA cores and some other compute-centric functionality. This allowed NVIDIA to use a single GPU across the entire spectrum – GeForce, Quadro, and Tesla – but it also meant that GK110 was a bit jack-of-all-trades.
src
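As a sanity check on the FLOPS figures in this discussion: peak single-precision throughput is just shader cores × clock × 2 (an FMA counts as two operations), and FP64 is that number divided by the architecture's FP64:FP32 ratio. A quick sketch using published reference specs:

```python
# Peak FP32 TFLOPS = CUDA cores * boost clock (GHz) * 2 ops per FMA / 1000.
# FP64 throughput is FP32 divided by the architecture's FP64:FP32 ratio
# (1:32 on GP104, 1:2 on GP100, 1:3 on GK110).

def tflops_fp32(cores, boost_ghz):
    """Theoretical peak single-precision TFLOPS."""
    return cores * boost_ghz * 2 / 1000

specs = {
    # name: (CUDA cores, boost clock in GHz, FP64 ratio divisor)
    "GTX 1080 (GP104)":   (2560, 1.733, 32),
    "Tesla P100 (GP100)": (3584, 1.480, 2),
    "GTX Titan (GK110)":  (2688, 0.876, 3),
}

for name, (cores, clock, fp64_div) in specs.items():
    fp32 = tflops_fp32(cores, clock)
    print(f"{name}: {fp32:.1f} TF FP32, {fp32 / fp64_div:.2f} TF FP64")
```

This is why GP104's double-precision number is "meaningless" as the quote puts it: at 1:32 you get well under 0.3 TFLOPS of FP64 out of nearly 9 TFLOPS of FP32.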
More on FP64
Last edited: Jun 15, 2016
-
Kade Storm The Devil's Advocate
Your post merits more credit than what I can give, but I just wanted to clarify one thing as I see it stated a few times.
Gears of War Ultimate Edition most certainly does *not* run at 60 FPS with 1080p output on the Xbox One. In fact, from what I've seen, the visuals look more like a low-to-middle tier setting (although not too far off medium-high, with high textures), and the frame rate, locked at 30 FPS, does stagger with the occasional dip. Digital Foundry did a few analyses on this front.
Having said that, despite the massive hardware disparity, the performance on the PC side with far better hardware is quite appalling. I had successfully maxed out the original port on the PC and sustained 60 FPS at 1080p with no drops whatsoever on everything from a GTX 8800M to a GTX 280M; it was a bit temperamental with performance hitches initially, but once those were addressed, the title could maintain very good performance across the board.
-
Single player runs at 1080/30. Multiplayer runs at 1080/60. I don't count the SP portion of the game when comparing; I apologize for not making that clear.
-
WOW, 150W and 180W, and the cooler is 1:1 with the Titan X (250W), yet both GPUs run at the same temperatures with close noise levels, so I'd guess the same or similar fan RPM. Did nGREEDIA invent a way to put out more heat than the power it burns? That would be a revolution in the heating industry.
Joking aside, if a vapor chamber can't extract the heat fast enough, nothing can. Of course I'm talking about a setup like this; there are always big-ass aftermarket coolers with 3 to 5 fans, or water, or LN2. The fact remains: with the same cooler as the Titan X, it gets to the same temps at close noise/RPM levels. Someone care to explain?
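One boring explanation: GPU Boost targets a fixed temperature (about 83 °C on the reference cards), so with the same cooler a lower-TDP card just runs its fan slower and/or boosts higher until it settles at the same target. A rough steady-state sketch; the thermal-resistance figure here is an assumed ballpark for illustration, not a measured spec:

```python
# Steady state: T_gpu ≈ T_ambient + R_th * P, where R_th (K/W) is the cooler's
# effective thermal resistance at a given fan speed. The 0.23 K/W value is an
# assumption for illustration, not a published number.

def steady_state_temp(ambient_c, power_w, r_th):
    """Estimated GPU temperature for a given power draw and cooler."""
    return ambient_c + r_th * power_w

for name, watts in [("GTX 1070", 150), ("GTX 1080", 180), ("Titan X", 250)]:
    t = steady_state_temp(25, watts, 0.23)
    print(f"{name} at {watts} W: ~{t:.0f} °C at a fixed fan speed")
```

At a fixed fan speed the lower-TDP cards would run meaningfully cooler; the fact that they all report similar temps is the boost/fan-curve algorithm converging on its temperature target, not the cooler struggling.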
-
I wanted to get an EVGA SC 1070 for a friend. I got notified just an hour ago, was busy, and now BAM, all gone. WTH.
-
Yeah, when you see them in stock, you need to act fast
They are hard to find in stock, it's a fleeting moment.
You could use a tool like nowinstock to get alerted to in stock situations:
https://www.nowinstock.net/computers/videocards/nvidia/gtx1070/
Looking at nowinstock's 1070 history of "in stock" and "out of stock" events, the in-stock condition lasts for only one 5-minute polling interval. As soon as everyone gets their "in stock" alert, only the first few responders, likely within the first minute, get served.
Rather than lose time with email/text/etc alerts, you can leave the page up and enable the Alarm (you can test it, and it refreshes every minute); too bad it doesn't have an auto-order option.
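If you want something more hands-off than leaving the page open, a small polling script is easy to write. This is a hypothetical sketch: the URL is the real tracker page, but the "in stock" string match is a guess at the page's wording, not a documented API, so check nowinstock's terms before actually polling it:

```python
# Hypothetical stock-watch sketch. The string match below is an assumption
# about the tracker page's wording, not a documented interface.
import time
import urllib.request

URL = "https://www.nowinstock.net/computers/videocards/nvidia/gtx1070/"

def looks_in_stock(page_text):
    """Naive check on fetched HTML: 'in stock' present and not negated."""
    text = page_text.lower()
    return "in stock" in text and "out of stock" not in text

def poll(interval_s=60):
    """Fetch the tracker page once a minute and shout when stock appears."""
    while True:
        with urllib.request.urlopen(URL) as resp:
            page = resp.read().decode("utf-8", errors="replace")
        if looks_in_stock(page):
            print("Possible stock -- go check the page!")
        time.sleep(interval_s)

# poll()  # uncomment to run; be polite with the interval
```

Given the five-minute window described above, even a one-minute poll only helps if you act on the alert immediately.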
NVIDIA GTX 1070 Tracker
In Stock Alarm: Tracker auto-updates every minute. Customize alarms w/ Browser Alerts.
-
All this not knowing anything is annoying. I finally got my first laptop that's ahead of the consoles, with a 970M, and now the PS4 Neo is using Polaris 10 to catch up, and that exceeds the 970M. Then I "guess" the Xbox One Scorpio will be using a Vega chip.
I can't be having a console on par with my high-end Pascal laptop.
I only guess it's Vega because the PS4 Neo has 4 TFLOPS and the Xbox One Scorpio has 6 TFLOPS. The 970M and 980M had 2 and 3 TFLOPS, and they're powerful.
The gap between console peasants and the PC master race is closing, just like Nvidia desktop vs mobile GPUs. Console owners will now have comeback replies and probably stronger consoles than a lot of gamers' PCs. And they get better optimisation in games overall.
Sent from my iPhone using Tapatalk
-
I am actually very interested in what you've put forth here- and since I own some decent hardware, I figured we could give this a go!
Test PC (note: the 3770 and 6700 are not K versions; stock clocks):
Intel i7-3770
16GB of DDR3 1600MHz RAM
128GB Toshiba SSD
EVGA 430w Bronze PSU
GTX 970 EVGA SSC
GTX 1070 FE Nvidia
Win 10 x64
GTX 970 EVGA clock speeds (no OC applied aside from the EVGA factory OC), driver 364.72:
GPU clock 1190, memory 1753, boost 1342. I saw spikes on GPU-Z v8.0 as high as 1404.4 MHz on the sensors page, but 1342 was read on the card page and EVGA's website. I did no overclocking.
Skydiver: 25,055
Graphics: 39,254
Physics: 9,531
Combined: 19,725 - FPS Combined 81.22
Fire Strike: 10,069
Graphics: 12,103
Physics: 9,812
Combined: 4535 - FPS Combined 21.10
Nvidia GTX 1070 - Driver 368.39
GPU Clock 1506, Memory 2002, Boost 1683 (Took it out of the box a few moments ago this way!)
Applying a negative offset of 200 MHz to the core (1306 MHz), we should see closer results. Unfortunately, the memory offset of -265 wouldn't actually apply; neither GPU-Z nor the nVidia Control Panel could see the downclock. I applied it and left it anyway, just in case, but I doubt it's functioning as intended.
Skydiver: 27901
Graphics: 51,447
Physics: 9,306
Combined: 19,440 - FPS Combined 80.00
Fire Strike: 12,220
Graphics: 15,768
Physics: 9,624
Combined: 5,353 - FPS Combined 24.90
Spikes on the core were as high as 1645 MHz (which is almost full boost speed), but memory *was* locked in GPU-Z at 1765.8.
Do what you will with this information!
Bonus: if anyone is interested, once my i7-6700 comes in tomorrow, we'll be able to see just how much of a jump Skylake provides over Ivy Bridge, as it's clear to me IB is holding things back.
-
Ionising_Radiation Δv = ve*ln(m0/m1)
That's a 2,000-point difference, one that could be achieved by sufficiently overclocking the GTX 970, which is renowned for overclocking quite well. So clock rate does make a big difference. -
HaloGod2012 Notebook Virtuoso
No, they aren't catching up. The 1080 is 9.x teraflops and the new Xbox isn't even out till holiday 2017. By then we will have the 1080 Ti and the new Titan, which will be way over 10 teraflops. They won't ever catch up, or even come close; the new Xbox is 1.5 years away! To add, we aren't even sure if the 6 TF figure (which is meaningless for gaming performance anyway) is for the SoC combined or just the GPU, which would put the PC even further ahead today than it already is.
-
I wish they'd release an MXM card based on the GTX 1080 to replace the 980M; otherwise we'll be getting a 1070-based (6.5 TF) card with a cut-down GP104 that maxes out around 6 TF in notebooks, and then the XB1 Scorpio would surely catch up to the mobile spectrum. That 6 TF figure is for an SoC, though, and AMD's poor tessellation means it doesn't tell the whole picture, even if we speculate. If Nvidia releases a refreshed GP204 by 2017, consoles will get REKT; it could be a couple of months later, but that roadmap depends entirely on Vega. And if what's in Scorpio really is Vega at 6 TF, then AMD is doomed, period.
I just want an MXM card that can surpass that console gimpware.
-
TF is not an accurate indicator of actual gaming performance. Also, I'm curious whether the 6 TF figure was for the SoC or just the GPU.
-
Also remember that AMD hardware has more TF than comparable Nvidia hardware. Still, it's a big jump considering the X1 is only 1.8 TF or so. I doubt they'll be playing 4K at more than 30 FPS, if that.
-
Scorpio won't be anything big: Zen + Polaris. There's no way they'd be able to make a Zen + Vega APU and manage to cool the thing in an SFF chassis. That's not even taking into account how much Vega will cost. Nobody is going to pay a grand for a console, especially with how much this generation flopped, all three already being replaced.
-
maxwell async fail
pascal is bandaid for maxwell
amd, well, they're optimized. but overall don't have the horsepower for single gpu performance -
Here, let me help you with that. I picked the top i7-4770 and GTX980 FS score to compare with i7-4700MQ and GTX1080 set for the same clocks, well as close as could be. Note that clocks were rock solid and did not change for the GTX1080. Also note GDDR5X is technically QDR not DDR so half the clock frequency is reported vs GDDR5.
http://www.3dmark.com/compare/fs/8822259/fs/6914310#
Power consumption for GTX1080 for this was 90W or less. If one is familiar with OC'ing then it would be more interesting to compare power levels at a 2GHz clock. The 980 would need LN2 for this so good luck finding someone.
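On the GDDR5X reporting note above: GDDR5 delivers two data words per write-clock cycle while GDDR5X delivers four, so for the same effective transfer rate the reported clock is halved. The effective bandwidth works out the same way for both; the clocks and bus widths below are the published reference specs:

```python
# Bandwidth (GB/s) = effective rate (MT/s) * bus width (bits) / 8 / 1000.
# Monitoring tools report the memory command clock: the effective rate is
# 4x that for GDDR5 and 8x for GDDR5X, hence the "half the clock" quirk.

def bandwidth_gb_s(reported_mhz, multiplier, bus_bits):
    """Effective memory bandwidth from the tool-reported memory clock."""
    return reported_mhz * multiplier * bus_bits / 8 / 1000

print(bandwidth_gb_s(1753, 4, 256))  # GTX 980, GDDR5: ~224 GB/s
print(bandwidth_gb_s(1251, 8, 256))  # GTX 1080, GDDR5X: ~320 GB/s
```

Both results match the official 224 GB/s and 320 GB/s bandwidth specs, so the halved reported clock costs nothing.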
Just imagine if Intel's next CPU came with a 50% increase in clocks. Would people still complain because clock-for-clock it's about the same?
-
Kade Storm The Devil's Advocate
It certainly has had more FLOP output in many hardware instances, although this has also been a back and forth between Nvidia and AMD. Having that said, back in the days of early DX11 mobile hardware, when the Mobility 5870 Radeon was the best we had on offer, despite well over double the TFLOP output, it was nowhere near double the performance of the Nvidia competition at the time.
Pretty much this . . . The 880M lags around a solid 30% behind the 980M, yet in terms of raw TFLOP output, they stand very close (2.9 vs 3.1). Even more telling is the difference between the GTX 780 and the GTX 970 -- in terms of actual in-game and benchmark output, the odds are firmly in favour of the GTX 970 with a 3.4 TFLOP output in comparison to the 3.9 TFLOP output on the GTX 780.
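Putting the numbers from this post side by side makes the point: performance per advertised TFLOP varies wildly across architectures. The relative-performance factors below are rough figures from this discussion (880M about 30% behind the 980M) plus ballpark estimates for the desktop cards, not benchmark results:

```python
# Advertised FP32 TFLOPS vs rough relative gaming performance (980M = 1.0).
# The perf numbers are ballpark estimates for illustration, not measurements.

cards = {
    "GTX 880M (Kepler)":  (2.9, 0.70),
    "GTX 980M (Maxwell)": (3.1, 1.00),
    "GTX 780 (Kepler)":   (3.9, 0.95),  # assumed ballpark
    "GTX 970 (Maxwell)":  (3.4, 1.10),  # assumed ballpark
}

for name, (tf, perf) in cards.items():
    print(f"{name}: {tf} TF, {perf / tf:.2f} relative perf per TF")
```

Under these assumptions Maxwell extracts roughly a third more gaming performance per theoretical FLOP than Kepler, which is exactly why a console's TFLOP figure on AMD's architecture says little about how it stacks up against Nvidia hardware.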
Back in 2011, we had a discussion on this topic in the Alienware forum, and the sentiment was rather similar: TFLOP output doesn't tell us a whole lot about the end results. Consoles have been marketed on TFLOP output as a gimmick to boost their standing, but in real terms, this also meant very little. Hell, the PS3 served as one distinct example, with its magical 1.8 TFLOP Reality Synthesizer GPU (which it most certainly wasn't; not even by a minor fraction) remaining one of the most laughable examples of this tactic during that generation. It would appear that console marketing is simply using a slightly more sober version of the same tactic this generation.
-
Yes, but having more TFLOPS really helps a lot with workloads other than gaming; it's not supposed to be that relevant for gaming alone.
For gaming, the number of cores, core clock speed, and the architecture of the texture mapping units matter more than peak floating-point throughput; gaming loads generally don't need double-precision math, but they do lean heavily on texture mapping and the graphics pipeline, whether OpenGL or otherwise.
When I asked about double-precision performance, I was mostly curious about exactly which cards have it enabled. Only Titans, or do Ti versions have it enabled too? -
Yes, it's true that it won't match the top end. But the point is you can't use a mid-range GPU to match or beat a console now; it takes the higher end of a single card to be beyond it. Those who get a Vega GPU or its Nvidia equal will only feel like they've just equalled a console to stay in the game. While there are enthusiasts here who only look at the top end, some of us will be happy with 1060/1070-class performance. I was just generalising how close the gap would be at that point; my post was not referring to matching desktops in the Ti, Titan, and SLI bracket.
Lol felt like I was gunned down then if not a lil misunderstood
Sent from my iPhone using Tapatalk
-
Not forgetting the consoles are going to be a lot more powerful than their PC equivalents due to low-level access to the hardware.
-
All this talk about power is getting me excited
-
Yes, ditto. I'm just excited to know what the 1070M will be. I hope it's stronger than the Xbox Scorpio to come. Although I'm a guilty yearly upgrader, so I should be good when Volta's here; that'll extend the gap between consoles and PC again.
But fair play to them. I thought they were out of date when I realised they couldn't even run most games at 1080p. I'd have thought that would have been one of the most thought-out upgrades of the next-gen consoles before release.
Sent from my iPhone using Tapatalk -
It's more the other way around now; it's the lack of care with PC ports.
-
Kade Storm The Devil's Advocate
Do define what you mean when you say 'a lot more'. Because I find people use that term quite liberally and it means different things to different people. One thing I will say -- and this isn't necessarily directed at you -- coding to metal isn't magic, it helps to a degree but the hardware limit is still a hardware limit. There was little if anything by way of multiplatform software during the last generation of consoles that performed more than 10-20% better than equivalent PC hardware. -
That's to be expected from a die shrink, no? IPC improvements are (mostly) tied to the architecture, whereas pure clocks are (again, mostly) tied to the process. The power consumption and heat output worry me, though. If your measurements are right, I don't see how current laptop cooling solutions would handle a 14nm 100W GPU.
-
Very interesting test! So at the same clocks, they perform basically identically; the 1080 just has more cores/TMUs. And it consumes about half the power of the 980, too. There's your 2x perf/watt.
-
So this shows that at the same clocks, the performance is the same; it just gets there with about half the power consumption. So per-clock performance isn't any better, it's just more power efficient.
But where in the hell do you get a machine with an i7-4700MQ that can support a desktop GTX 1080?
Overclocking the GTX 1080 FE atm. Will have benches soon. Slightly disappointing.
-
What's your ultra score? Regular fs isn't going to show the improvement that ultra will.
-
I don't have Ultra.
-
Extreme score then?
-
I thought Extreme was in the free version. I bought Advanced back in 2014, though.
-
There's something about the physics test in 3DMark 11 that keeps me coming back. It's still my favourite benchmark because it puts both CPU and GPU under stress together in test 1, which I think is a useful gauge of system stability and prowess.
Fire Strike is nice since it basically isolates the GPU, and it's quicker. But it doesn't really tell the whole story; I can run overclocks there that won't pass 3DMark 11. -
No extreme unless paid version
-
LoL, PASCWELL... Wonder where all that money they said they could go to Mars with actually went... Quick, to the observatory, to see if Huang is on Mars...
-
Then again, even if it were only clock improvements, like Pascal simply clocking higher, as long as we feel that improvement, we're all good.
I'm still waiting to see what happens with Pascal mobile. As long as it beats the current GTX 980 by at least 20%, we're all good. -
Review 1080s with sneaky OC:
http://www.techpowerup.com/223440/m...-samples-with-higher-clocks-than-retail-cards -
While I think that is shady business, I got the "review" BIOS and flashed it to my MSI Gaming X perfectly. Free higher stock clocks, no need to select "OC Mode" in some BS software.
-
-
Pulled the latest from their site; there's a newer one? I'd love to have the correct GPU displayed instead, if it's even in their database.
-
Dang! Is that a test/pre-production unit? Also, is this MXM or soldered?
Pascal: What do we know? Discussion, Latest News & Updates: 1000M Series GPU's
Discussion in 'Gaming (Software and Graphics Cards)' started by J.Dre, Oct 11, 2014.