So how much time do I have before it really starts to bottleneck any GPU?
Talking about games mostly, not synthetic benchmarks (those test CPU performance only).
I was thinking about upgrading to something like the AW 18, but when I see the 3DMark results I don't see a reason for it.
It would be like spending another £1000 to get a refresh for nothing.
And with a worse screen.
I'm not here to whine about anything, quite the opposite actually. I'm proud, and happy, that I didn't have to buy a new gaming laptop every year to keep the FPS high... (well, maybe just changing the GPUs).
So how long do you think the first-gen XM can keep up?
More than 4 years have passed and all I see is this:
920XM vs 4800MQ
~10% in physics score in favor of the newer gen.
Result
For how long will I remain bottleneck-free and happy?
-
tilleroftheearth Wisdom listens quietly...
You've been bottlenecked for years now and you don't even know it.
When crazy, insanely overclocked desktop platforms are the bottleneck for GPUs, don't you think that even the i7-4800MQ will be a bottleneck, depending on the GPU and game combination? Let alone your technologically ancient platform?
You also ask for no synthetic benchmarks and then link to them.
You were bottlenecked when SNB was released, btw. And that is ancient now too. -
fatboyslimerr Alienware M15x Fanatic
What overclock do you run your 920XM at? I game at a 23x multiplier (3 GHz), but my 940XM is happy at 25x or even 26x across all 4 cores, with my cooling mod (see link in sig) keeping everything nice and cool.
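For context, here is a quick sketch of how those multipliers translate into core clocks, assuming the roughly 133 MHz base clock (BCLK) of Clarksfield chips like the 920XM/940XM (my assumption, not stated in the post):

```python
# Core clock = BCLK x multiplier; ~133 MHz BCLK assumed for Clarksfield CPUs.
bclk_mhz = 133.33
for multiplier in (23, 25, 26):
    print(f"{multiplier}x -> {multiplier * bclk_mhz / 1000:.2f} GHz")
# 23x -> 3.07 GHz, 25x -> 3.33 GHz, 26x -> 3.47 GHz
```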
I think you will always get games that are very CPU-dependent, like FC3 and Crysis 3, where we may start to feel the age of our very old CPUs, but the vast majority of games will run perfectly on this old tech.
My modestly overclocked 3DMark 11 score also makes me wonder what the point of upgrading would be. I'm thinking more of a Steam Machine than another laptop.
Interesting discussion...
Mantle may help the cause even more. -
tilleroftheearth Wisdom listens quietly...
But the processors you've 'heard' about not making a difference belong to the same era; no wonder they're similar.
Look at any modern test platform set up for gaming; it is not running tech that is even a year old. CPUs do make a difference to how much performance you can get from any and every other component in a system.
To think otherwise is just fooling yourself. -
They run the fastest overkill CPUs possible when benchmarking graphics cards to be on the safe side, so that the CPU can have absolutely no influence on the results. Not only that, but they are running multiples of the fastest desktop GPUs, which are themselves much more powerful than what you would find in a laptop. If one were to build a desktop computer to use those same top-of-the-line graphics cards, you could use a much slower processor and still perform almost identically to the overkill processor used by the reviewers.
An overclocked 920XM or 940XM is enough processor for today's gaming needs and will not bottleneck any laptop GPUs. Saying anything else is somewhere between hyperbole and hogwash. -
The best way to detect a CPU bottleneck is in fact a synthetic benchmark, since they run separate graphics and physics tests. You spot a CPU bottleneck when the GPU score comes in much lower than it should, which tells you that the CPU cannot handle the workload the GPU demands (and I might add, the 920XM bottlenecks SLI GTX 780Ms if I'm not mistaken, but for a single 780M it is fine).
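Here is a minimal sketch of that check (the numbers are made up purely for illustration, not real 3DMark results):

```python
def cpu_bottleneck_suspected(measured_graphics, reference_graphics, tolerance=0.10):
    """Flag a likely CPU bottleneck when the graphics score falls well short
    of what the same GPU manages behind a much faster CPU."""
    deficit = 1.0 - measured_graphics / reference_graphics
    return deficit > tolerance, deficit

# Hypothetical scores: same GPU, slower CPU vs. a fast desktop reference system.
suspected, deficit = cpu_bottleneck_suspected(measured_graphics=8200,
                                              reference_graphics=9500)
print(f"graphics score deficit: {deficit:.1%}, CPU bottleneck suspected: {suspected}")
```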
-
tilleroftheearth Wisdom listens quietly...
You're welcome to keep believing that. And please do keep changing the goal posts (OP is not talking about O/C'ing).
The reality is somewhere in between (for gaming); but it is a safe assumption that the reason 920xm/940xm's aren't being made anymore is because they have been superseded in all areas.
The old stuff was good (at the time). Today, saying stuff like what you're repeating sounds like so much '640kb is all the ram we'll ever need' all over again.
This is Today. 2014.
You're welcome to join. -
fatboyslimerr Alienware M15x Fanatic
But why bother when I can play all new games on high to very high settings at 1080p? Even AW 18 is still only 1080p!
Think this discussion is more about longevity of old tech for gaming rather than why new stuff is so great. Because quite frankly new stuff isn't that great unless you have a lot of $$$$$. I'd much prefer my M15x to a brand new AW 14!
Sent from my One S running 4.4.2 -
-
tilleroftheearth Wisdom listens quietly...
The new stuff is great. Period.
Why games don't take advantage of it is the issue (imo). That is what the thread should be about.
The game developers are being lazy and don't give gamers a new engine that leverages and pushes new hardware to its limits, and the issue instead is that new platforms aren't needed? Huh?
You may be glad to be able to play new games at only 1080p on old hardware; that doesn't mean the tech (new or old) is good or bad. It means you are being mostly shortchanged by the game developers, or can't upgrade to current level platforms (mostly shortchanged, I think).
Have they run out of ideas? (Don't think so).
Have they found a cash cow they won't let go until it kicks them to the other side of the field? (Yeah).
That cash cow being: slight alterations of what they were delivering 5+ years ago, with pretty new lipstick and nothing new under the hood (investment or otherwise).
You need to see this scenario for what it is, not for what you want it to be.
-
@OP
Your CPU is absolutely fine for modern games and stands up to the latest mobile Haswell quads quite well. 3.73 GHz is rockin' for the 920XM and at that speed you're essentially tied with the 4700MQ.
The clock-for-clock difference between Nehalem and Haswell is only about 20-30% at best, and you're almost never going to see an IPC improvement of that magnitude show up in any significant way in games, since they are primarily GPU-bound anyway, unless you're purposely running low settings and resolutions to make them more CPU-bound.
Take the clock-for-clock comparison here between i7-975X @ 3.46 GHz and 2600K @ 3.5 GHz: AnandTech | Bench - CPU
Clock-for-clock, Sandy Bridge is 10-15% faster in the synthetics and 5-10% faster in the game tests.
And then looking at 2600K @ 3.5 GHz vs. 4770K @ 3.7 GHz: AnandTech | Bench - CPU
Taking into account the 0.2 GHz clock speed difference, we can extrapolate from the results the clock-for-clock difference between Sandy Bridge and Haswell to be 10-15%.
And that's how I got my 20-30% number for Nehalem vs. Haswell clock-for-clock.
Makes sense when you crunch the numbers in OP's 3DMark score. In Fire Strike Physics, if the 4800MQ @ 3.5 GHz is 15% faster (9793 vs. 8504) than the 920XM @ 3.73 GHz while clocked 6% lower, then the IPC difference comes out to ~23%, which is right in that 20-30% range.
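For anyone who wants to reproduce that arithmetic, here is a quick back-of-the-envelope sketch (my own illustration; the 12.5% per generation is just the midpoint of the quoted 10-15% ranges):

```python
# Compound the two per-generation clock-for-clock estimates (midpoint of 10-15% each).
nehalem_to_sandy = 1.125
sandy_to_haswell = 1.125
print(f"Nehalem -> Haswell, clock for clock: ~{nehalem_to_sandy * sandy_to_haswell - 1:.0%}")  # ~27%

# Cross-check with the Fire Strike Physics scores quoted above:
# 4800MQ @ 3.5 GHz scored 9793, 920XM @ 3.73 GHz scored 8504.
score_ratio = 9793 / 8504      # ~1.15 -> the 4800MQ is ~15% faster
clock_ratio = 3.73 / 3.5       # the 920XM runs ~6.6% higher clocks
ipc_difference = score_ratio * clock_ratio - 1
print(f"Implied IPC difference: ~{ipc_difference:.0%}")  # ~23%, within the 20-30% range
```
-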
Charles P. Jefferies Lead Moderator Super Moderator
We've had to remove/edit a number of posts in this thread; you're better than this. Insults (especially personal) have no place here in the public forum.
-
tilleroftheearth Wisdom listens quietly...
octiceps,
I don't know how or why you keep going back to an O/C'd 920xm processor - nor why you keep comparing 'clock for clock'.
Has the OP mentioned this and I've missed it again? (I've read this thread for the tenth time now.)
Even if I believe your numbers as presented, 20% to 30% faster for the Haswell platform is still VERY significant and goes a long way towards removing the CPU from being the bottleneck. Which it is. Always.
A certain game may not show it, or a given resolution may mask it, but the fact remains that for the ultimate gaming performance (which I view as just another workflow), a desktop is always the platform of choice. Because not only are the GPUs more powerful, but so are the CPUs which drive them.
Mobile systems still do not hold a candle to desktop based systems in raw performance.
And the most important point? I'll repeat it because it is that important.
If 3 or 4 generations of games play effectively the same over 3 or 4 generations of hardware, it is the game engines, and therefore the game developers, that should be questioned. As in: what have they been doing for the last half dozen years?
Not conclude that hardware is not progressing anymore.
If the Adobe CSx Suite had made the same non-progress over the same time frame, they would be out of business now. -
Because his results are from a 920XM overclocked to 3.73 GHz. Did you not read the OP at all?
-
tilleroftheearth Wisdom listens quietly...
But as I said, it doesn't matter. OC'd or not, the difference is still significant enough and my points stand. -
fatboyslimerr Alienware M15x Fanatic
Flingin, what temps do you get on your i7-920XM at 3.73 GHz? Have you overvolted it? It must run very hot?
Sent from my One S running 4.4.2 -
I play a lot of games that are easily CPU limited. It may not be your case. Toward the end of games like Crusader Kings II, Rome Total War 2 and Civilization V my R9 290 sits largely idle waiting for the i5-4670K. I don't game on mobile, but I'd assume the problem would be worse.
-
And it's ironic that you conveniently ignored looking at OP's 3DMark scores, because not only do you lack context for the rest of his posts, making yourself uninformed, but synthetic tests like 3DMark actually help your argument, since they show a bigger improvement compared to real-world games.
Why are you even comparing games to production-level DCC software? Games are nowhere near as parallelized in terms of CPU usage and never will be, plus high-level graphics APIs will always be a limiting factor as well. And some CPU overhead is ideal to account for the unpredictability prominent in games, especially in large-scale online multiplayer shooters, which are typically the most CPU-bound games today.
Plus games need to be able to scale to a much greater variety of hardware and software configurations. The developers of professional DCC software know they serve a niche target audience which is almost guaranteed to have high-end workstation-grade hardware, but game developers can't make their games the next Crysis, otherwise the game gets a bad rap, it cannibalizes sales and increases piracy, and may even hurt the fortunes of the entire company.
And saying that CPU is always the bottleneck for performance is just plain wrong, unless you've got a nonsensical setup with a massively overpowered GPU paired with a massively underpowered CPU. But for sensible hardware configurations, GPU is nearly always the bottleneck, and scores of benchmarks all over the Internet will confirm that. And that is by design, as devs and gamers both know that it is much easier to scale performance up or down in a GPU-bound game than in a CPU-bound one. Excessively CPU-bound games have traditionally been the hallmark of poorly-optimized console ports. Just see Assassin's Creed III and perhaps the most infamous of them all, GTA IV, which still can't maintain 60 FPS maxed out on a GTX 680, a top-of-the-line GPU which came out a full four years after the game's original release.
And you are absolutely wrong about games not making progress over the last several years. It's not just about prettier graphics, which we've had in spades. What about new ideas, gameplay mechanics, business models, means of interaction, etc. Oh wait, you don't actually play games so you wouldn't know any of that.
And about games playing the same over several generations of CPU hardware: that's because, as far as gaming performance is concerned, real meaningful progress has stalled on the Intel side since Sandy Bridge and on the AMD side since Phenom II. That, and the fact that games are primarily GPU-bound.
And what have AAA devs primarily been doing over the last six years? The obvious answer is designing their engines around last-gen console hardware. Since we just entered a new console generation, the target development platform for most AAA games switches to much more powerful console hardware and PC hardware requirements will go up accordingly.
And when did the conversation shift over to laptops vs. desktop? Who's getting off-topic now?
So much of what you say here is just so wrong that it's hard to take you seriously as anything beyond an oft-uninformed forum troll who likes to jump into threads without any intention of actually contributing meaningfully to a discussion, just to bolster your already massive post count. -
-
-
-
tilleroftheearth Wisdom listens quietly...
octiceps,
I think you're out of your element and out of your depth in understanding how a computer works.
CPU + RAM = work done by a computer (notice that a GPU is not required). Notice especially that a GPU cannot run an OS. Notice also that everything other than the CPU+RAM combo is there to be driven by the CPU+RAM combo. Not the other way around.
I also don't need to be a physicist to understand the challenges real scientists are dealing with when trying to understand the inter-relationships between objects. Just as I don't need to play games to understand how a gaming system works.
And that is circular reasoning at its finest. -
You guys are all funny, going back and forth. Now it's back and forth on how many posts you have.
Hello, I have 34 posts and I work as an IT engineer for a large school district in NV, on both Macs and PCs. I will continue reading what people say and not post anything further. -
A holistic approach must be taken with computers. Everything works together to provide an output for the user. There are lots of stages in the pipeline and each can be a weak link. What you are doing with the machine is also hugely important in determining which part or parts of the system are the weakest link or bottleneck. You can just use your CPU-and-RAM approach and blindly say that the CPU is always the bottleneck and that more is better. Or you can use some common sense, look at what everyone is saying, check out a whole slew of benchmarks for yourself, and clearly see that an overclocked Clarksfield processor is enough CPU for even some rather powerful desktop graphics cards.
You are like a rabid animal about your CPU-and-RAM idea, but even outside of gaming it is quite wrong. What about doing something that heavily taxes the storage subsystem? The CPU isn't doing a whole lot of work while it waits. What about something like Photoshop or any of the other myriad programs these days that use GPU computing? My dad's passion is photography and he uses a Core 2 Quad based desktop to do his work. I know you would say it is prehistoric, but after putting in a few powerful Radeon GPUs it does the work in Photoshop faster than any laptop on the planet.
And stop talking about "newer" platforms as if they are somehow magically better. There are plenty of older processors that are very powerful and plenty of newer processors that are not.
-
tilleroftheearth Wisdom listens quietly...
Both of you grasping at anything I say and then twisting it so that you can throw it back at me is very sad to see. Obviously we can't have a grown-up conversation about this with you two.
Your common, recurring theme is simply to insult. Insult how many times I've contributed (post count), insult my user title, insult anything, it seems, that you can't do here, like have a meaningful conversation.
Instead of providing information to prove your point or disprove mine, or simply asking questions to clarify my points, which I would have gladly done.
I come here to learn and to share what I know. You don't want to do either of those with the posts seen in this thread; even when others also indicate I may have a point, you discount their input with your circular reasoning and excuses about how it is the programming language for the GPU that is at fault, instead of conceding that I might (shock) possibly be right.
I wish both of you all the best in the future. But I am done talking to you unless your attitudes change.
I will continue to contribute and participate where and when I want.
I also hope the OP sees the point of my posts here.
The answers we get depend on the questions we ask, not on what we (think we already) know.
Everyone else:
In my posts here, I have tried to provide the backdrop of 'computing' for anyone wanting to know why a newer platform should be better for gaming (at least for certain games, even if it seems that most are GPU-dependent). And apologies for my contribution to the noise introduced by the immature interactions with, mostly, the posters below.
Sure, I can see and already know the reasons for not getting PC gaming engines up to speed with regard to pushing the hardware we have available to the max. That doesn't mean that excuse holds water with me, though. And it seems obvious to me that that is the issue, not that processors haven't made progress in the last half dozen years (what a simple-minded conclusion, especially after considering all the facts).
This series may be interesting with regards to the topic here:
See:
Debunking the Myths of Graphics Card Performance - Tom
Contrary to what has been expressed on my behalf, I'd rather be wrong (which means I've learned something new) than stubbornly continue with outdated ideas.
Can't wait for the next installment of the link above.
Take care.
-
This
Also thinking AMD Mantle may help us in the near future. The GPUs in most high-end gaming laptops can be changed without a problem, leaving some CPUs a bit behind sometimes.
Some games will become choppy even on the most powerful desktop CPUs (4770K etc.) because of the sheer amount of what is going on on the screen... -
Tiller, you are a joker.
If all you can do is try to force the discussion to revolve around your absurd beliefs about computing in general and make quips degrading those who disagree with you, how exactly do you expect to be treated in this forum? But of course, since your argument has absolutely no substance other than the goop between your ears, you get on your high horse and act like all those who disagree with you are the ones who are stubbornly holding on to their beliefs and are beyond being reasoned with. Replying to your posts is like talking to a brick wall. -
At some point, you will find a game your system simply cannot handle without dropping below your own tolerances (resolution, settings, frames per second, etc.). At that point, you can decide if you want to play that particular game enough to buy a new system. For a high-end gaming notebook, I'd expect it to last around two processor and graphics generations before it can no longer play all games at maximum, and roughly four generations before it starts to struggle playing the latest games on medium settings or lower resolutions. In other words, you are likely to run into a game your system can't play in the next couple of years.
I'm glad you've gotten such good use out of your system! -
Charles P. Jefferies Lead Moderator Super Moderator
While there's some good debate in this thread, the arguments are too intertwined for me to effectively remove. Thread is now closed.
OP - it looks like your question was answered; feel free to start a new thread if not.
920XM VS Modern Cpus - Really ?
Discussion in 'Hardware Components and Aftermarket Upgrades' started by flingin, Feb 9, 2014.