This is all I'm seeing too. I may well be wrong, and I apologize for my ignorance in that regard, but I still see some significant headroom where that i7 can be taxed further. However, that's a completely different argument in itself, so I'll leave that one alone.
EDIT:
Though may I ask if the current-gen i7s can keep up with a single 980M? If not, then that's some BS. :S
-
To really see the difference, CPU performance needs to be changed physically, like an OC/UC, or by running the same test with a Clevo P570 at the same GPU clocks. -
For example, GTA V is an exceedingly heavy game on the CPU. If, however, you turn on MSAA and max everything out, you'll end up forcing a GPU bottleneck for most of the game. Since I don't use MSAA in GTA V (to keep higher framerates) and I keep grass on "high" instead of "higher" or "ultra", I have a whole lot of excess GPU power. My CPU's bottleneck then steps in in some parts of the game, where the game simply will not use more CPU power (the max is 80% via Task Manager, or 66% via my overlay/ThrottleStop's readouts... GTA V doesn't really benefit from Hyperthreading). I have discovered by talking with some friends, however, that a ~4.4GHz Intel Haswell quad-core is enough to almost never go below 60fps in GTA V if you remove the GPU bottleneck. But what about most of the laptops coming with the 3.4GHz 4720HQ? Can that CPU bottleneck a 980M? Yes, yes it can. It'll be in a rather small number of games, but it can indeed bottleneck a 980M.
If you'd asked me exactly five months ago whether a 4720HQ could bottleneck a 980M for 1080p 60fps gaming, I'd have said "no" and that'd have been the truth. But remember how last year was the "WE WANT VRAM! NOM NOM NOM" year of video games (Titanfall, Watch Dogs, Shadow of Mordor, CoD: AW, etc)? Well, this year is the "WE WANT CPU POWER! NOM NOM NOM" year of video games (Dying Light, GTA V, more to come). -
Of course, this is why I'm strongly considering the P750ZM refresh: it has the desktop CPU, which can easily achieve over 4GHz, and possibly SLI'd Maxwell refresh #2 GPUs. -
Buuut I'm very interested to see where they're going. If they make another notebook using a 120Hz panel (far less a 3D model), with, heaven forbid, a new 120Hz 17" panel? I'd probably ogle it and die XD. -
Ha. A 500W laptop power brick, lol. But it may be necessary. Although the more I think about it, I may go the desktop route myself and keep my P650SE for an indeterminate amount of time. I don't need to be as mobile as I used to be. A desktop in my main home office would be sufficient, and something like my P650SE gives me the flexibility to game on the go. Still up in the air, though. As 120Hz and 4K gaming gain popularity, the desktop seems to me the best route for the time being, until mobile can catch up in that race. Otherwise, sticking with 1080p/60 seems the best option for laptop gaming in the near future.
-
Given enough buffering it should just throttle, not shut down. Shutting down due to overloading the PSU is a design flaw, even if the PSU itself is the guilty part.
As for carrying power bricks, Clevo's habit of using relatively cheap large bricks really doesn't help here. Maybe reuse slim Dell bricks again? -
120Hz is gaining no popularity. By and large, the majority still only cares about 60Hz. People were using 120Hz and 144Hz back in the day with CRT monitors and such. It fell off for a while when LCDs became the new thing, but it has always existed in one form or another, and I'm sure its popularity has remained fairly constant. I don't understand WHY it isn't becoming more popular, but I suppose it's just easy to say "yeah I can play this with an i5 at 60fps" etc.
Single-threaded bottlenecks become VERY apparent above 60fps (looking at you, Mass Effect 3 and Payday 2), and some games simply... hate... being above 60fps. Mass Effect is one series that has collision detection issues (namely Shepard and teammates walking near objects such as counters or tables = them suddenly standing on top of said object), or take the often-mentioned BF4/BF:H, where a constant 120fps usually requires a pretty fast CPU. Then you have games like anything-made-by-Bohemia-Interactive where you'll never see 120fps, far less 60fps sometimes. Then some games have no problem hitting 60fps but seem incapable of maintaining above 60fps, like the last two Call of Duty games. If I locked them to 60? I'd never go below 60. Leave them unlocked at their 91fps default cap? Enjoy your fluctuating framerate no matter what PC you've got! Ironically, AW's single player basically sat over 100fps 24/7 for me, which meant the MP needed some serious optimization work =D.
Or Skyrim, which has to be locked to 60fps lest the physics engine break and the day/night cycle go out of sync, with NPCs thinking daytime is nighttime. That makes quests impossible to start and/or complete because the NPCs are not there at the prerequisite time, yet speak to them at other times and they'll say "come back at <right time>" etc.
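(Side note for the curious: the standard fix for Skyrim-style framerate-tied physics is a fixed timestep with an accumulator, so the simulation always ticks at 60Hz no matter how fast you render. A minimal Python sketch of the pattern; the function names are mine, purely illustrative, not from any actual engine:)
```python
import time

PHYSICS_DT = 1.0 / 60.0  # simulation always steps at 60Hz, regardless of fps

def update_physics(dt):
    # advance the simulation by one fixed, framerate-independent step
    pass

def render():
    # draw as fast as the GPU allows (interpolation omitted for brevity)
    pass

def game_loop():
    accumulator = 0.0
    previous = time.perf_counter()
    while True:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # run exactly as many fixed steps as the elapsed time demands,
        # so physics stays in sync even at 120+ rendered fps
        while accumulator >= PHYSICS_DT:
            update_physics(PHYSICS_DT)
            accumulator -= PHYSICS_DT
        render()
```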
So yeah. While 120Hz is a fantastic thing without question, it definitely isn't for the basic user. It also needs a good bit more CPU power. I'd consider it as much a thing for advanced users as I consider getting the most out of an SLI machine. Basically, it's a "headache" most gaming machine owners would rather not bother with. It has benefits, but sometimes you gotta put in some elbow grease. It's not like tossing in a 980 and an i5 and 8GB of RAM and playing on a single monitor with vSync on 24/7 using GeForce Experience 1-click optimization, like pretty much every YouTube celeb with a half-decent PC has done XD. -
About CPU load reading:
A game could be single-thread bottlenecked. If only one thread is busy, we could see a 13% (4-core, HT on) or 25% (4-core, HT off) AVERAGE load, but the game is already bottlenecked. So HT on or off, the total CPU load value would not give much useful information.
To give a silly extreme example, running an old game on a four-socket workstation might show you a near-0% average CPU load. But the main thread is still capped (if it's heavy enough).
A modern multi-threaded game would not be that extreme, but there might be some threads heavier than the others.
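If anyone wants to check their own games, sampling per-core instead of looking at the average makes a pegged thread obvious. A rough Python sketch using psutil (assuming it's installed; not from any particular monitoring tool):
```python
import psutil  # assumed available: pip install psutil

# sample per-logical-core load over one second
per_core = psutil.cpu_percent(interval=1.0, percpu=True)
average = sum(per_core) / len(per_core)

print(f"average load: {average:.0f}%")
for core, load in enumerate(per_core):
    print(f"  core {core}: {load:.0f}%")

# a low average with one core pinned near 100% is exactly the
# hidden single-thread bottleneck described above
if max(per_core) > 95 and average < 50:
    print("one core is pegged -- likely single-thread bottlenecked")
```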
Thanks to the consoles' slow single-thread, high-core-count CPUs, this should be alleviated as devs get more used to them. -
-
In some cases, yes.
But a logical thread can jump from one physical core to another frequently, depending on CPU affinity settings (which we have no control over on a per-thread level). The game code can even create and destroy heavy threads on the fly.
100% load on a thread doesn't mean a CPU bottleneck either. Some (badly programmed) old games loop forever with zero idle time. You could OC an i7 to 10GHz and they would still cause 100% load. But most of those loop iterations are simply wasted; there's no actual bottleneck.
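To make the wasted-loop point concrete, here's a toy Python sketch (purely illustrative): both functions below "wait" for the next frame, but the first pegs a core at 100% while doing nothing useful, and a faster CPU just spins it faster.
```python
import time

def spin_wait(seconds):
    # "badly programmed" frame pacing: burns a full core doing nothing.
    # shows 100% load in Task Manager, but none of it is useful work
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        pass

def sleep_wait(seconds):
    # cooperative pacing: the core goes idle until the next frame is due
    time.sleep(seconds)

spin_wait(1 / 60)   # one "frame" of busy-waiting
sleep_wait(1 / 60)  # one "frame" of actual idling
```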
Without an understanding of the game's source code, there's no reliable way to analyze this from a single CPU load data dump. We need to change CPU performance physically, or somehow limit the load level evenly on every core to simulate slower hardware, and observe how the game responds to the change. -
Spawned threads inherit the parent's processor pinning status.
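A rough way to see this in Python with psutil (assuming it's installed; the worker function is just illustrative). Note this demonstrates the process-level mask that new threads run under; setting per-thread masks on Windows would need the Win32 SetThreadAffinityMask API instead:
```python
import threading
import psutil  # assumed available: pip install psutil

def worker():
    # busy work; this thread runs under the process affinity mask
    total = 0
    for i in range(10_000_000):
        total += i

proc = psutil.Process()
print("before:", proc.cpu_affinity())  # e.g. [0, 1, 2, 3, 4, 5, 6, 7]
proc.cpu_affinity([0])                 # pin the whole process to core 0

t = threading.Thread(target=worker)
t.start()                              # the new thread inherits the pinning
t.join()
print("after:", proc.cpu_affinity())   # -> [0]
```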
-
Was not aware of that. Thanks.
This should make it much easier to analyze. -
-
DX12 is not an end-all solution which will bring improvements to every problem that currently exists. DX12 will do ONE thing for sure: reduce CPU load and allow for increased draw calls. It MAY do another thing: allow SLI/CFX to add vRAM together instead of mirroring the contents. It MAY do a third thing: allow AMD + nVidia (or AMD/nVidia + Intel) cards in one system to work together rendering games for extra performance.
The "MAY" in each situation relies on many variables, such as drivers that allow multiple GPU vendor cards to be in a system and active, and if the limitation of the SLI bridges or the PCI/e speeds cause problems with adding the vRAM together, and if one GPU won't hold back the other (such as the iGPU in an intel CPU holding back a powerful dGPU due to its tiny and slow memory, etc). There's no real guarantees even if the framework is there, and even reducing CPU load doesn't mean the public is going to jump on things like 120Hz if they would not have before. AND devs don't even need to code in DX12. They can very very well code in DX11 or even DX9 (as many games still do) and completely ignore DX12 as a whole if they don't feel it's worth learning how to code for the API. -
Or they feel that many customers would still be on DX11 hardware.
-
If anything, high-refresh panels will probably start to come back with VR getting pushed into the market. -
If you think about it, if NVIDIA can say with confidence that even Fermi is getting support, will the majority be using anything more than 4 years old by the time Windows 10 comes out? -
Radeon HD 5000 and 6000 series are DX11 but won't be getting DX12 support.
-
120Hz only ever came to mobile for 3D. Since that was an abysmal failure, it makes sense that they're gone. I think Clevo originally had plans for a 3D SM-A and scrapped them, which is why I got the screen in my machine. 60fps has pretty much always been the target for games... at least until the consoles made us okay with 30...
-
As for this, the whole reason for 120Hz laptops before was the 3D Vision craze. Since VR has its own panels and simply requires GPUs to be strong enough, high-refresh laptop panels do nothing for it.
All that being said, DX12 would indeed *HELP* 120fps... but it's still a niche many devs refuse to bother coding for, and it doesn't make a single difference for already-existing titles. It only helps future titles from devs who want to bother with DX12, because as we all know, PC gaming gets the BEST* optimization**.
* = no
** = devs do not know the meaning of this word -
That's my point, though. Their intended use, 3D, never took off. Why push 120Hz panels that no stock laptop can use to their fullest potential? It's an added cost for the manufacturer and consumer. I find it worth it, but I'd be lying if I said I get 120fps out of it. Most modern 2014/2015 releases average 80fps, so I could have just gotten a stock panel with better color (this panel really messes up yellows), overclocked it, and saved 150 (I think?) bucks in the process.
-
That being said, you have a 72% NTSC gamut screen, so if yours messes up yellows that's probably just a single bad panel. I'll PM you an ICC profile to try out for it. It makes it FAR less bright and tones down the blues a lot, but I've grown accustomed to it and I like it now more than the default.
Also, $150 was the cost of the regular screen upgrades from 60% to 72%, so you wouldn't have saved any money. That bit was a moot point. -
Do you have the same panel I do? Prema has a profile posted on TI, I just haven't tried it. Yellows only look right if viewed perfectly head-on; otherwise they fade into white if they're on a white background. -
The 90% model was an AUO panel and it was glossy. If the stock panel for the -S model was 72% and then it was +$150 for the 120Hz, then that was a bit of a rip-off. These machines' standard panels were 60% gamut, I know that for sure. If you had a 72% panel as default, they should not have charged extra for 120Hz. I have been looking at regular and 3D laptops and the 120Hz panels since the P170HM3 came out, because I wanted 3D so badly for years. I had all the prices down in my head. -
Any chance the GTX 990m will come with a 15" laptop?
17-inchers are way too impractical to carry to class. -
Yes, there still is a chance. We'll know when it's out.
-
I'm expecting the 1080M to fit snugly inside all existing models that currently house 980Ms. I just think that the models currently with mere 180W PSUs, such as my P651SG, will need to be sold with 240W PSUs this time around.
-
The next lineup should come with the Skylake CPUs, right?
-
I can't see how they'd manage to get a 125W(ish) 1080M to live within a 180W limit. And the P65xSx series is dead to me if it doesn't come with the next x80M GPU.
-
Just look at what Dell did with the new crippled AW models and what MSI did previously.
-
230W brick wave of future #Clevo #PleaseNoBeDumbYouAreOnARoll
-
MSI GT80
-
I have a feeling they will increase the vRAM on the soldered chips. At least I think they should for the next high-end refresh.
-
I'm in the market for a new gaming laptop and was wondering when the new generation of Nvidia GPUs is releasing. Is the top GPU gonna be called the 1080M? What's a 990M?
I don't want to make the same mistake twice, because the 580M came out about a month or two after I bought my 480M-equipped laptop. :smh: -
-
Dang... that's a long wait. -
-
I already went through that with my NP8850 lol -
Besides, it'll take at least a month or two after the new cards come out for Prema and Johnksss and svl7 and the vBIOS modders to rip into them and determine whether they're worth it or not. The 880M, so fresh in memory, is one great example of a card that, while on paper a fantastic out-of-the-box upgrade, turned out to be a terrible disaster, with only the lucky or the technically adept making it work well for day-to-day usage. -
Not going to see Pascal launch anytime soon (as in the next few months). A "possible" refresh of the Maxwell chips to go along with the Skylake launch... if that is the case, I imagine we should be seeing leaks and mentions right around now. About the only thing you can really count on is Skylake announcing in August and launching in September.
-
I needs it, I does. -
I guess I'm just gonna wait till the new generation of Nvidia GPUs comes out then, because I'm not in any kind of rush. I'm definitely not making the same mistake of upgrading too early again.