I am looking at the Alienware 13. I'm interested in a small Alienware because I actually like the thickness of these laptops (it feels more durable, and it's much easier on the cooling).
However, the 13-inch Alienware only comes with a dual core instead of a quad core. I've read that many people strongly prefer quad cores and that dual core is pretty far behind the times.
The games I really want to play are Blade and Soul / Overwatch (not too demanding), but also games like Total War: Warhammer (this one is questionable).
What do you think?
-
Starlight5
I believe a dual core is not future-proof enough for demanding tasks. However, if you're OK with moving on after a couple of years, it should be capable enough. Another major concern is whether you want to run your games on Ultra settings, or just to have them launch at any settings.
-
Quads aren't essential, but they're a wiser investment. That CPU fits the "good enough" category, but it's certainly not future-proof.
-
If you aren't a heavy gamer, a dual core - even a ULV one - will be fine. But if you are, then the CPUs in the 13 may not be for you.
-
Depends on the game. For Blade & Soul you should be fine, considering the game is dated at this point. For Total War: Warhammer, you definitely want as many cores as you can afford, with hyperthreading. That game will be doing a lot and will stress your machine to its limit. All the Total War games of the last few years have been both CPU and GPU intensive.
-
You'll be much better off with a quad-core CPU. I own an Alienware 13 and am "OK" with its gaming ability, but the difference between quad core and dual core for gaming and other tasks is definitely noticeable. Maybe this system would be better suited for you. It's also cheaper.
-
There are actually still extremely few games that benefit directly from four cores over two. What you really gain is micro-optimisation during the automatic allocation of multiple threads. Put another way, two cores with hyperthreading isn't half as fast as four cores with hyperthreading, even on tasks very far from anything you'd see in practice. And even if you ran something that always fed each ALU with perfectly cache-friendly (reduced assembly) instructions, the composite task would need to be asynchronously parallelizable, which really isn't something you see in games. So obviously more cores are better, as long as they can be used (and a second L2-cacheable ALU usually can be, whether for OS overhead or to prepare threads more quickly and avoid context switches, which again allows better use of cache hits and micro-optimisation). But even if it typically increases performance, the benefit in practical, automatically scheduled workloads isn't going to be a doubling. You can see this very easily in benchmarks if you know where to look: a higher-clocked dual core will almost always outperform a lower-clocked quad core, while an i3 without hyperthreading and boost falls off the scale - not because it has fewer cores, but because it has both lower clocks and no micro-optimisation benefit from hyperthreading. (In the same way, moving to six or eight cores on current architecture very quickly hits diminishing returns.)
The reason is that 1) writing code that deliberately creates threads meant to run exclusively on a set of cores (to avoid context switches and benefit from micro-optimisation between the math units/ALUs) is not trivial, and 2) the actual benefit from doing so, without delving into extremely arcane platform-specific sorcery, is very small. But you do see more and more games programmed with discrete threads that benefit in responsiveness from avoiding context switches, which more cores do give you: other threads aren't interrupted, so you avoid unpredictable latency and the pauses you actually notice (input read irregularly, node generation suddenly slower than it should be, variable framerate, that sort of thing).
So yes: on a laptop that runs relatively few CPU-bound tasks anyway (and games usually aren't CPU-bound, although code can always be written to keep whatever CPU time exists busy), what you really want is a CPU fast enough to feed a low-end graphics card, and a dual core with hyperthreading tends to be good enough for that. (What you do not want is a CPU without hyperthreading and without boost, for example.)
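If you'd rather measure than argue, here is a minimal sketch (Python, made-up busywork function - nothing to do with any real game engine) that times the same purely CPU-bound toy workload with 1, 2 and 4 worker processes. On a dual-core machine the 4-worker run lands roughly where the 2-worker run does, which is the point about physical core count being a ceiling.
```python
# Rough sketch: how does a purely CPU-bound workload scale with worker count
# on *your* machine? Games won't scale anywhere near this cleanly, but this
# makes "more cores != proportionally more speed" measurable.
import time
from multiprocessing import Pool

def burn(n: int) -> int:
    # Deliberately CPU-bound busywork (no I/O, no shared state).
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers: int, jobs: int = 8, size: int = 2_000_000) -> float:
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(burn, [size] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    base = timed_run(1)
    for workers in (2, 4):
        t = timed_run(workers)
        print(f"{workers} workers: {base / t:.2f}x speedup over 1 worker")
```
Real games are far less parallel than this toy loop, so their scaling is correspondingly worse.
-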
A fast dual core with hyperthreading is usually OK for *most* games. But terms like "hard core" or "heavy gamer" are too generic. You might only want to occasionally play a single game that does best with four cores (BF4 online, for example) - does that make you "hard core"? Maybe not, but it doesn't change the fact that you really need a quad core for that game. It also all depends on what frame rates, detail, and resolution you're comfortable with. A quad core CPU will basically cover your bases should you decide to play a game that needs the horsepower. If you don't have the quad core, you may be frustrated later when a cool new game comes out that ends up being quad core optimized.
-
Make sure to get the i7 and the 1080p display options and you will be fine. Lower resolutions are more CPU-bound, while higher resolutions are more GPU-bound.
My friend had a G3258 @ 4.2GHz and it bottlenecked him in 99% of his games at 720p. Even at 1080p he was still bottlenecked in most of his games. It wasn't until he bought a 4690K that he was finally able to play everything smoothly.
So it does depend on the game. -
The G3258 is a strict dual core CPU with no hyperthreading. You will really notice the difference in gaming if you use that and then swap in even an equivalent-TDP quad core.
-
moviemarketing
If you are just playing older games and MOBAs on your AW13 with 960M, I would be surprised if the energy saver ULV dual core CPU would bottleneck too badly.
It would be a much more noticeable problem if you decide to connect an external GTX 980 desktop card through the Graphics Amplifier and run GTA V, for example.
There are other 13" - 15" laptops that are equally or more portable than the AW13 and include both a quad core CPU and better graphics than the 960M. -
-
Hyperthreading does not give you a 30% increase. The G3258 was a 3.2ghz (no turbo) cpu overclocked to 4.2ghz (1ghz OC, 31%) and still bottlenecked badly.
The new CPU installed was an i5 with no hyperthreading. Those two extra cores make a big difference.
So if anyone wants to game on the AW13, I highly recommend the i7 model. It has more cache, slightly faster clock speeds and has hyperthreading (so does the i5 but still).
I always recommend anyone buying a computer for any gaming to get the quad core IF the computer supports it, but in this case the i7 ULV dual core is the best option. With the 1080p display it should give a good gaming experience. Just don't get the 1366x768 display. -
Would I be able to play, say, the most graphically intensive games on medium with this Alienware? That Sager someone posted looks like a great option.
-
Dual core is still dual core. Hyperthreading takes better advantage of existing cores by giving them extra threads to run. Some programs or scenarios simply do better on a CPU with 4 threads (not necessarily a "quad core"). The ULV chips and desktop i3s fall into this category, and even though they have less raw number-crunching power, they can work better in those cases.
Yes, no hyperthreading, but 4 cores still means 4 threads. And of course, unlike hyperthreaded logical cores, each physical core adds close to 100% extra power rather than the ~30% that hyperthreading gives.
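If you're not sure whether a chip's "4 threads" means four physical cores or two cores with hyperthreading, you can check programmatically. A minimal sketch, assuming the third-party psutil package is installed (os.cpu_count() on its own only reports logical CPUs):
```python
# Minimal sketch: distinguishing physical cores from hardware threads.
# psutil is third-party (`pip install psutil`); os.cpu_count() is stdlib.
import os
import psutil

logical = os.cpu_count()                    # e.g. 4 on a 2C/4T ULV i7
physical = psutil.cpu_count(logical=False)  # e.g. 2 on that same chip

print(f"Physical cores: {physical}, hardware threads: {logical}")
if physical and logical and logical > physical:
    print("Hyperthreading/SMT appears to be enabled.")
```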
The i7 is 3GHz and the i5 is 2.7GHz. In most cases, that won't make all the difference, I think, but at least the i7 is an improvement here. I remember some of the Haswell products used i5s that were 100MHz slower than the i7s and I couldn't for the life of me figure out why anyone would buy the i7 products.
Well, one computer would never support both a quad core and a dual core if it is using ULV chips... the only time that happens is with those i5-dual-core / i7-quad-core machines, of which I've found ridiculously few for sale. -
There are many 2-core vs 4-core comparisons on YouTube and Google:
https://www.youtube.com/results?search_query=cpu+2+core+vs+4+core+gaming+performance
There is also a previous thread at NotebookReview, now closed for comments:
Quad core i7 vs. dual core i5 for gaming
http://forum.notebookreview.com/threads/quad-core-i7-vs-dual-core-i5-for-gaming.608369/ -
Seriously.. Digital Foundry is still going? *sigh* "Hello, I post the whitepapers from our developer contacts verbatim, or simply copy and paste them onto our web pages and call them articles! Enjoy, as much as I do the cashbacks, toodle-pip." Basically what happens over there is: a PR guy claims something, such as that their team has put in effort to maximize the benefit of such and such new tech. Then Richard goes to great pains to find a way to prove what they said. If that means using selective data, or outright lying, that's no problem - DF is where you go if you want your idiotic PR jargon supported by technical-sounding language. It's a brilliant synergy the industry simply couldn't exist without.
So to sum up: on a desktop setup, where you have a Titan X to serve and therefore get the maximum benefit from extra CPU time, you see somewhere near a 30% increase when moving from two to four cores. You also go from 2 to 8 MB of level 2 cache, along with a higher turbo frequency, which might of course have something to do with it - but DF won't tell you, nor test it. Because higher numbers and a higher price always equal higher performance, so why even bother testing it, right?
Anyway. So that's the /maximum/ gain from moving to four cores, in the best possible scenario. Food for thought?
And imagine what that looks like on a laptop, or even a mid-range gaming rig, where the game will be GPU-bound.
Meanwhile, a more interesting question is whether, on a gaming laptop, you could clock down a quad core (or turn off boost) and still get higher CPU performance in practice than a higher-clocked dual core. That would be possible if the games utilize the extra cores. It might also give you more manageable temperatures and lower power draw. It would also lessen the impact when your gaming laptop hits critical temperatures, allowing a quad core to outperform a dual core with hyperthreading even if the cooling is not really good enough - which is a situation you do see on a lot of gaming laptops.
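To put a rough number on that ~30% figure: a crude Amdahl's-law sketch (a toy model, not a benchmark - the parallel fractions below are made up for illustration) showing how much of a frame would need to be parallelizable for 2 -> 4 cores to pay off:
```python
# Crude Amdahl's-law model: if only a fraction p of the per-frame work can
# run in parallel, how much does going from 2 to 4 cores actually buy you?
def frame_time(p: float, cores: int) -> float:
    """Relative frame time vs. a single core, per Amdahl's law."""
    return (1 - p) + p / cores

for p in (0.5, 0.6, 0.65, 0.8):
    gain = frame_time(p, 2) / frame_time(p, 4) - 1
    print(f"parallel fraction {p:.0%}: 2 -> 4 cores gains {gain:.0%}")
```
By this toy model, a ~30% gain corresponds to only around 60-65% of the frame being parallelizable - and it says nothing about the cache and turbo differences, which is exactly the objection above.
-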
It all depends on the game, but in most cases modern games will do better with a dual core with hyperthreading or a full quad core. Sometimes it doesn't matter how fast the CPU is when it needs to run more than two threads simultaneously.
-
Quad or nothing... the more threads the better. Games are finally getting optimized to take advantage of what Linux users have enjoyed for... wow, a really long time. Always future-proof to the best your wallet can handle, because you never know when a game you want to play will come along and get wrecked because your CPU is holding it back.
-
Hyper-threading is the main thing you need to look for. You'll be "able" to game with a dual-core (ht) but you'll obviously do so much better with a quad-core (ht). Go for a quad-core. I don't recommend dual-core for gaming. The Alienware 13 can game but it's not a great gaming laptop.
-
Here we are in late 2015, two years into the new generation of consoles which have 8-core CPUs, and you're still asking a question from 2007?
-
killkenny1
/jk
But seriously, if you're buying a PC now and you want it to be future-proof, a quad core is a must. Dual cores are fine for MS Word, music, movies - i.e. light work - but for gaming, quad cores can/will be an advantage (depending on the game). -
What's a durable and small (13-14 inch) gaming laptop with a quad core? I know there are some... but the great ones end up being 2k.
-
Within your budget, you really don't have that many options.
http://www.xoticpc.com/force-msi-1492-31028-850m-msi-ge40-barebones-p-7082.html
http://www.hidevolution.com/clevo-hid-w230sd-i7-2g-gtx-960m-13-3.html
http://www.hidevolution.com/clevo-hid-w230ss-i7-13-3.html
http://www.newegg.com/Product/Product.aspx?Item=N82E16834725007
And these are discontinued/older-version laptops, if I recall. Nonetheless, they contain quad core CPUs, and I would think all but the Aorus are durable. -
I would forget 13" gaming, spend a little bit more, and get a 15" with a GTX 970M and an IPS display.
http://www.lpc-digital.com/sager-np8657-features.html
I would not get the 4K option, stick to 1080p. -
-
I don't want it. But I want it just to say I have a 13" with a 970M and i7. -
-
-
Yes, it's better than anything Alienware has. -
-
Political incorrectness aside - the point is that to get much use out of 4+ cores on current standard architecture, you need to add an increasingly bigger layer of level 2 cache, and then structure the code so that additional threads can be queued for execution at extremely short intervals - but not so many threads that main-thread response drops. For server-type tasks this isn't really a problem, because you can usually live with threads starving a little once in a while. But if you rely on a maximum response time for graphics, input, AI logic, node generation, etc., then piling more and more threads onto multiple cores (on an Intel/x86-type bus) isn't automatically going to make things quicker.
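As a rough illustration of that "queue work, but don't drown the main thread" structure, here's a minimal sketch (plain Python, a toy 60 Hz loop, hypothetical job names - nothing engine-specific): the worker pool is capped at the spare cores and the backlog is bounded, so background jobs never stall the frame.
```python
# Structural sketch only: cap the worker pool at the spare cores and bound the
# pending-work backlog so background jobs don't starve the main loop. In
# CPython you'd use processes for truly CPU-bound work; the pattern is the same.
import os
import time
from collections import deque
from concurrent.futures import ThreadPoolExecutor

SPARE_CORES = max(1, (os.cpu_count() or 2) - 1)  # leave a core for the main loop
MAX_PENDING = SPARE_CORES * 2                    # bound on queued background work

def background_job(job_id: int) -> int:
    # Stand-in for pathfinding / node generation / asset decoding.
    time.sleep(0.01)
    return job_id

def main_loop(frames: int = 120) -> None:
    pending = deque()
    with ThreadPoolExecutor(max_workers=SPARE_CORES) as pool:
        for frame in range(frames):
            frame_start = time.perf_counter()

            # Hand off new work only if the backlog is small; otherwise defer
            # it rather than spawning more threads and hurting frame latency.
            if len(pending) < MAX_PENDING:
                pending.append(pool.submit(background_job, frame))

            # Collect finished jobs without ever blocking the frame.
            while pending and pending[0].done():
                pending.popleft().result()

            # ... update game state, read input, render here ...
            elapsed = time.perf_counter() - frame_start
            time.sleep(max(0.0, 1 / 60 - elapsed))  # crude 60 Hz frame pacing

if __name__ == "__main__":
    main_loop()
```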
And yeah, it was known beforehand that thread response would drop on the AMD consoles if all eight cores were active. Microsoft made a big thing a while back about "freeing up a core", for example - BS: they were reducing the response time on non-essential system tasks to make sure main threads weren't halted.
Not that this makes more cores useless for any task - but we're not talking about assigning a single task to an extra core and thereby having more tasks executed at the same time. Instead, on an Intel/AMD x86 bus, you're actually tying multiple herrings together and putting them through a grinder. Then, as the pieces from all the tasks drop out (long before the entire fish is through), you mark the bits belonging to each herring, distribute the ground and labeled gobs, and process them across the available cores, before sending them back in extremely small fillets.
That's basically how "multicore" works on current architecture. It's not actually doing anything in parallel at the high level. And that's a long, long way from having each processor assigned to process specific memory areas in system RAM within a set execution time, for example - which would allow actual asynchronous multitasking.
And yes, there are a few commercially available systems that can/could do that - but they're expensive compared to the subdivision scheme in current x86-type implementations. Understand that limiting the super-fast, expensive RAM with actual access to the CPU to the 4-6 MB of level 2 cache is a cost-saving measure. Back in the 90s it made a lot of sense, because it would have cost millions to build a system with more than 512 KB of level 2 cache, and it would not have fit on the processor die anyway. But if you look for a design capable of locking a portion of system RAM and completing a prepared task within a clock cycle (i.e., you'd program algorithms in a relatively low-level language and essentially have programmable instruction sets), there are commercially and practically viable options that can do this right now.
In limited form, this is the kind of thing that has been used for a long time to let fairly complex algorithms execute at very low clock speeds. Your mp3 player of yore that could process 320kbps mp3 on a tiny chip for 20 hours on a AAA battery, without causing scraping noises that hurt your ears, and things like that - it works by reducing the algorithm, at the hardware level, to fit in a "long" instruction word and executing it over a relatively small buffer, even when the processor runs at as little as 4 MHz (and it still has clock cycles to spare most of the time). And since the response time you need is very predictable, and you always know what's going to be processed next, there's no issue with having relatively slow RAM. On the other hand, the actual benefit of putting this type of task onto a much faster clocked processor with shorter instructions is arguably not there, when it means more power consumption and you still don't actually need a faster response. You could say it is more customisable - that you can program in new codecs, and so on. But it's certainly ironic that the codecs programmed this way are typically based on proprietary code anyway, and the devices themselves are usually protected from tampering.
Just making the point that - if the task is not infinitely subdividable, in the sense that each task can be split up into the smallest instructions, and then that these smallest parts can be executed in any order - then multicore on intel/amd has certain limitations that can't be overcome. In the same way - many of the tasks you actually do run are extremely predictable, and have to be completed in sequence. And that's why adding more cores doesn't yield infinitely increased performance - on this architecture. -
Zen is going to surprise a lot of people... Not on mobile, but on desktop AMD can finally compete... They can do SMT with a lot of cores without the heat... But that's nothing new. AMD has dominated servers off and on since the 200MHz Pentium Pro... The question... Can AMD shove the K12 with Zen into mobile... OR shove a Zen hexa or octa into mobile and pair it with a graphics chip that can match the 980M? It won't happen. That's the problem. AMD has no choice but to spin off if Zen fails. I did look up US law... Intel wouldn't have a choice but to license x86 to any potential purchaser... If old-school AMD still exists, they'll convince Lisa Su to sell off the graphics division... IF Zen falls flat... 40% IPC over Bulldozer would put AMD in line with Intel almost on the dot... But the second someone takes the AMD chip, runs it through its paces and gets a verified benchmark on hwbot, they will sell in droves... If AMD does what they are sort of kind of not really but maybe promising with the new drivers... We *may* have a real duopoly... But keep in mind that's fire... it's like nVidia and Intel merging... you can't have it both ways... nVidia has no x86 license... This could turn out to be a really interesting year for PCs... Or an even more interesting year for consoles (my bet). Servers with 32 cores with SMT... Intel can't manage that on their current process... AMD can... AMD isn't going anywhere, for anyone being hopeful... 64 threads... you would have to be burying your head in the sand not to realize that AMD not only got on par, they exceeded. Of course, that's only IF the numbers pan out... but I'm not even a numbers person, and 40% IPC seems incredibly logical just from switching to SMT... People, including myself, don't realize that, as usual, AMD has played their horrible cards in a terrific way... Imagine if Zen runs with, let's say, 2K/sec RAM... XFire has already removed the other bottleneck... and if these things ship with 12MB and 16MB of cache... This should be a really fun year...
-
If nVidrosoft and BGAtel become a duopoly for CPUs and AMD is killed off and BGAtel starts making dGPUs, I'm going to find a new blasted hobby.
I'll learn swordsmanship and teach it or something. -
But on the other hand, according to, for example, Anandtech, the future lies in... I suppose... streamlining your human input into a form that allows "simultaneous instructions per cycle", or microcode optimisation, to eventually make parallelism obsolete.
It's simple, you see - you just need magical proprietary assemblers (or, you know, just hardcode reduced static code at the instruction layer), and a million "common usage tasks" will one day multitask just as efficiently as prime95 code. Believe!
Anyway, you're right that Intel doesn't own x86. But the format for licensing out x86-based CPUs is limited.
But it's not going to really do anything new for us from a technical perspective.
Is a quad core essential for decent PC gaming nowadays? Alienware 13 too weak?
Discussion in 'Gaming (Software and Graphics Cards)' started by Moritsuna, Oct 31, 2015.