The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Is a quad core essential for decent PC gaming nowadays? Alienware 13 too weak?

    Discussion in 'Gaming (Software and Graphics Cards)' started by Moritsuna, Oct 31, 2015.

  1. Moritsuna

    Moritsuna Notebook Enthusiast

    Reputations:
    0
    Messages:
    34
    Likes Received:
    0
    Trophy Points:
    15
    I am looking at the Alienware 13. I am interested in a small Alienware because I actually like the thickness of these laptops (it feels more durable, and it's much easier on the cooling).

    However, the 13-inch Alienware only comes with a dual-core CPU instead of a quad-core. I've read that many people strongly prefer quad-core and that dual-core is pretty behind the times.

    The games I really want to play are Blade & Soul and Overwatch (not too demanding), but also games like Total War: Warhammer (this one is questionable).

    What do you think?
     
  2. Starlight5

    Starlight5 Yes, I'm a cat. What else is there to say, really?

    Reputations:
    826
    Messages:
    3,230
    Likes Received:
    1,643
    Trophy Points:
    231
    I believe that dual-core is not future-proof enough for demanding tasks. However, if you're OK with moving on after a couple of years, it should be capable enough. Another major concern is whether you want your games on Ultra settings, or just want them to launch at any settings.
     
  3. sniffin

    sniffin Notebook Evangelist

    Reputations:
    68
    Messages:
    429
    Likes Received:
    256
    Trophy Points:
    76
    Quads aren't essential but they're a wiser investment. That CPU fits the category of good enough, but certainly not future proof.
     
  4. Game7a1

    Game7a1 ?

    Reputations:
    529
    Messages:
    3,159
    Likes Received:
    1,040
    Trophy Points:
    231
    If you aren't a heavy gamer, a dual core, even the ULV ones, will be fine. But if you are one, then the CPUs in the 13 may not be for you.
     
  5. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    Depends on the game. For Blade & Soul, you should be fine considering the game is dated at this point. For Total War: Warhammer, you definitely want as many cores as you can afford, with hyperthreading. This is a game that will be doing a lot and will stress your machine to its limit. All the Total War games of the last few years have been CPU and GPU intensive.
     
  6. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    You'll be much better off with a quad-core CPU. I own an Alienware 13 and am "OK" with its ability to game, but the difference between quad-core and dual-core for gaming and other tasks is definitely noticeable. Maybe this system would be better suited for you. It's also cheaper.
     
  7. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    There are still actually extremely few games that benefit directly from four cores over two. What you actually benefit from is micro-optimisation during automatic allocation of multiple threads. Put differently: two cores with hyperthreading isn't half as fast as four with hyperthreading, even if you run tasks that are very far from anything you would see in practical examples. And even then, if you ran something that would basically always feed each ALU with commands that utilize cache hits (of reduced assembly) perfectly, the composite task needs to be asynchronously parallelizable, which really isn't something you see in games. So obviously more cores are better, as long as they can be used (and a second L2-cacheable ALU usually is - whether it handles OS junk, or can sometimes be used to prepare threads quicker or avoid context switches - which again potentially allows better utilization of cache hits and micro-optimisation). But even if more cores typically increase performance, the benefit during automatic allocation in practical examples isn't going to be double. You can see this very easily in benchmarks if you know where to look: a higher-clocked dual core will almost always outperform a lower-clocked quad core, while an i3 without hyperthreading and boost will fall off the scale - not because it has fewer cores, but because it has both lower speed and no microcode-optimisation benefit from hyperthreading. (In the same way, on current architecture, moving to six or eight cores very quickly runs into diminishing returns.)

    The reason for this is that 1. writing code that deliberately creates threads meant to run exclusively on a set of cores (to avoid context switches and to benefit from micro-optimisation between the math units/ALUs) is not trivial. And 2. the actual benefit from doing so, without delving into extremely arcane platform-specific sorcery, is very small. But you do see more and more games now that are programmed with discrete threads and that do benefit, in responsiveness and so on, from avoiding context switches - which you do achieve by having more cores. In the sense that it avoids interrupting other threads and potentially causing unpredictable latency that then results in pauses you actually notice (input is read irregularly, node generation suddenly is not as fast as it should be, framerate is variable, that sort of thing).

    So yes: on a laptop that runs relatively few CPU-bound tasks anyway (which games usually aren't - although code can certainly always be written to keep whatever CPU time is there busy no matter what), what you really want is a CPU fast enough to serve a low-end graphics card. There, a dual core with hyperthreading tends to be good enough. (But what you do not want is a CPU without hyperthreading that doesn't have boost, for example.)
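
    The diminishing-returns argument above is essentially Amdahl's law. A minimal sketch in Python, where the parallel fraction p is an illustrative assumption rather than a number measured from any real game:

```python
# Amdahl's law: the speedup from n cores when only a fraction p of the
# work parallelizes; the serial remainder (1 - p) never gets faster.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical game where half the frame work parallelizes (p = 0.5):
# 4 cores give only 1.6x over one core, and 8 cores barely improve that.
for n in (1, 2, 4, 8):
    print(f"{n} cores: {speedup(0.5, n):.2f}x")
```

    With p = 0.5, going from two cores (1.33x) to four (1.60x) buys far less than doubled throughput, which matches the benchmark pattern described above.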
     
    moviemarketing likes this.
  8. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    A fast dual core with hyperthreading is usually OK for *most* games. But terms like "hard core" or "heavy gamer" are too generic. You may only want to occasionally play a single game that does best with four cores (BF4 online, for example) - does that make you "hard core"? Maybe not, but it doesn't change the fact that you really need a quad core for that game. It also all depends on what frame rates, detail, and resolution you're comfortable with. A quad-core CPU will basically cover your bases should you decide to play a game that needs the horsepower. If you don't have the quad core, you may be frustrated later for not having it when a cool new game comes out that ends up being quad-core optimized.
     
    killkenny1 and i_pk_pjers_i like this.
  9. ssj92

    ssj92 Neutron Star

    Reputations:
    2,446
    Messages:
    4,446
    Likes Received:
    5,690
    Trophy Points:
    581
    Make sure to get the i7 and the 1080p display options and you will be fine. Lower resolutions are more CPU-bound, while higher resolutions are more GPU-bound.


    My friend had a G3258 @ 4.2GHz and it bottlenecked him in 99% of his games at 720p. Even at 1080p he was still bottlenecked in most of his games. It wasn't until he bought a 4690K that he was finally able to play everything smoothly.

    So it does depend on the game.
     
    i_pk_pjers_i likes this.
  10. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    The G3258 is a strict dual-core CPU with no hyperthreading. You will really notice the difference in gaming if you use that and then throw in even an equivalent-TDP quad core.
     
    D2 Ultima and i_pk_pjers_i like this.
  11. moviemarketing

    moviemarketing Milk Drinker

    Reputations:
    1,036
    Messages:
    4,247
    Likes Received:
    881
    Trophy Points:
    181
    If you are just playing older games and MOBAs on your AW13 with the 960M, I would be surprised if the energy-saver ULV dual-core CPU bottlenecked too badly.

    The problem is much more noticeable if you decide to connect an external GTX 980 desktop card in the graphics amplifier and run GTA V, for example.

    There are other 13"-15" laptops that are equally portable or more portable than the AW13, and which include both a quad-core CPU and better graphics than the 960M.
     
  12. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    Still.. I'm guessing the biggest benefit was moving to a processor with hyperthreading.
     
  13. ssj92

    ssj92 Neutron Star

    Reputations:
    2,446
    Messages:
    4,446
    Likes Received:
    5,690
    Trophy Points:
    581
    Hyperthreading does not give you a 30% increase. The G3258 was a 3.2GHz (no turbo) CPU overclocked to 4.2GHz (a 1GHz, 31% overclock) and it still bottlenecked badly.

    The new CPU installed was an i5 with no hyperthreading. Those two extra cores make a big difference.

    So if anyone wants to game on the AW13, I highly recommend the i7 model. It has more cache, slightly faster clock speeds, and hyperthreading (so does the i5, but still).

    I always recommend that anyone buying a computer for any gaming get the quad core IF the computer supports it, but in this case the i7 ULV dual core is the best option. With the 1080p display it should give a good gaming experience. Just don't get the 1366x768 display.
     
  14. Moritsuna

    Moritsuna Notebook Enthusiast

    Reputations:
    0
    Messages:
    34
    Likes Received:
    0
    Trophy Points:
    15
    Would I be able to play, say, the most graphically intensive games on medium with this Alienware? That Sager someone posted looks like a great option.
     
  15. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    Yes it does.

    Dual core is still dual core. Hyperthreading takes better advantage of existing cores by giving them extra threads to run on. Some programs or scenarios simply do better on a CPU with 4 threads (not necessarily a "quad core"). The ULV chips and desktop i3s fall into this category, and even though they have less raw number-crunching power, they can work better.

    Yes, no hyperthreading, but 4 cores still means 4 threads. And of course, unlike hyperthreaded cores, each physical core adds nearly 100% extra power rather than the ~30% that hyperthreading gives.
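
    The ~30% figure makes for quick back-of-the-envelope "core equivalent" arithmetic. A toy model, where the 0.3 gain per hyperthread is the rule-of-thumb assumption from above, not a measurement:

```python
# Toy throughput model: each physical core counts as 1.0, and
# hyperthreading adds ~0.3 per core (the ~30% rule of thumb).
def core_equivalents(physical: int, ht: bool, ht_gain: float = 0.3) -> float:
    return physical * (1.0 + ht_gain) if ht else float(physical)

dual_ht = core_equivalents(2, True)    # ULV i7-style chip:   ~2.6
quad    = core_equivalents(4, False)   # desktop i5-style:     4.0
quad_ht = core_equivalents(4, True)    # mobile quad i7-style: ~5.2
```

    On this model a plain quad core still clearly beats a dual core with hyperthreading, which is the point being made.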

    The i7 is 3GHz and the i5 is 2.7GHz. In most cases that won't make all the difference, I think, but at least the i7 is an improvement here. I remember some of the Haswell products used i5s that were only 100MHz slower than the i7s, and I couldn't for the life of me figure out why anyone would buy the i7 products.

    Well, one computer would never support both a quad core and a dual core if it is using ULV chips... the only time that happens is with those i5-dual-core/i7-quad-core machines, of which I've found ridiculously few for sale.
     
  16. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Last edited: Nov 1, 2015
    i_pk_pjers_i likes this.
  17. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    Seriously.. Digital Foundry is still going? *sigh* "Hello, I post the whitepapers from our developer contacts verbatim, or simply copy and paste them onto our web pages and call them articles! Enjoy, as much as I do the kickbacks, toodle-pip." Basically what happens over there is: a PR guy claims something, such as that their team has put in effort to maximize the benefit of such-and-such new tech. And then Richard goes to great pains to find a way to prove what they said. If that means using selective data, or outright lying, that's no problem - DF is where you go if you want your idiotic PR jargon supported by technical-sounding language. It's a brilliant synergy that the industry simply couldn't exist without.

    So to sum up: on a desktop setup, where you would have a Titan X card to serve - to maximize the benefit from more CPU time - you get somewhere near a 30% increase when moving from two to four cores. You also go from 2 to 8 MB of level 2 cache, along with higher turbo frequency, which might of course have something to do with it - but DF won't tell you, nor test it. Because higher numbers and more expensive are always equal to higher performance, so why even bother testing, right? :D

    Anyway. So that's the /maximum/ gain from moving to four cores, in the best possible scenario. Food for thought?

    And imagine what that looks like on a laptop, or even a mid-range gaming rig, where the game will be GPU-bound.

    Meanwhile, a more interesting question is whether, on a gaming laptop, you could clock down a quad core (or turn off boost) and still get higher CPU performance in practice than from a higher-clocked dual core. That would be possible if the games utilize the extra cores. It might also give you more manageable temperatures and lower power draw. And it would lessen the impact when your gaming laptop hits critical temperatures, allowing a quad core to outperform a dual core with hyperthreading even if the cooling is not really good enough - which is a situation you do see on a lot of gaming laptops.
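
    The downclocking question can be framed with a toy model: the game gets roughly clock speed times however many threads it can actually keep busy. Every number below is invented purely for illustration:

```python
# Toy model: a game that scales to at most `usable` threads gets about
# clock_ghz * min(cores, usable) worth of CPU throughput.
def toy_perf(clock_ghz: float, cores: int, usable: int) -> float:
    return clock_ghz * min(cores, usable)

# A quad core downclocked to 2.4 GHz vs a dual core at 3.2 GHz, in a
# hypothetical game that scales to 3 threads: the slower quad wins.
quad = toy_perf(2.4, cores=4, usable=3)   # ~7.2
dual = toy_perf(3.2, cores=2, usable=3)   # ~6.4
```

    Flip `usable` to 2 (a game that only ever uses two threads) and the higher-clocked dual core wins instead - which is exactly why the answer depends on the game.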
     
  18. HTWingNut

    HTWingNut Potato

    Reputations:
    21,580
    Messages:
    35,370
    Likes Received:
    9,877
    Trophy Points:
    931
    It all depends on the game, but in most cases modern games will do better with a dual core with hyperthreading or a full-on quad core. Sometimes it doesn't matter how fast the CPU is when the game needs to run more than two threads simultaneously.
     
    i_pk_pjers_i and hmscott like this.
  19. i_pk_pjers_i

    i_pk_pjers_i Even the ppl who never frown eventually break down

    Reputations:
    205
    Messages:
    1,033
    Likes Received:
    598
    Trophy Points:
    131
    Oh my god, that was a much bigger difference between dual core and quad core than I expected.
     
    hmscott likes this.
  20. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    Quad or nothing... the more threads the better. Games are finally getting optimized to take advantage of what Linux users have enjoyed for... wow, a really long time. Always future-proof to the best your wallet can handle, because you never know when a game will come along that you want to play and that gets wrecked because your CPU is holding it back.
     
    TomJGX and i_pk_pjers_i like this.
  21. J.Dre

    J.Dre Notebook Nobel Laureate

    Reputations:
    3,700
    Messages:
    8,323
    Likes Received:
    3,820
    Trophy Points:
    431
    Hyper-threading is the main thing you need to look for. You'll be "able" to game with a dual-core (HT), but you'll obviously do much better with a quad-core (HT). Go for a quad-core; I don't recommend dual-core for gaming. The Alienware 13 can game, but it's not a great gaming laptop.
     
    Last edited: Nov 1, 2015
  22. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
    Here we are in late 2015, two years into the new generation of consoles which have 8-core CPUs, and you're still asking a question from 2007?
     
    hmscott likes this.
  23. killkenny1

    killkenny1 Too weird to live, too rare to die.

    Reputations:
    8,268
    Messages:
    5,258
    Likes Received:
    11,615
    Trophy Points:
    681
    They have AMD octa cores, meaning that they are useless :D
    /jk

    But seriously, if you're buying a PC now and you want it to be future-proof, a quad core is a must. Dual cores are fine for MS Word, music, movies - aka light work - but for gaming, quad cores can/will be an advantage (depending on the game).
     
  24. Moritsuna

    Moritsuna Notebook Enthusiast

    Reputations:
    0
    Messages:
    34
    Likes Received:
    0
    Trophy Points:
    15
    What's a durable and small (13-14 inch) gaming laptop with a quad core? I know there are some... but the great ones end up being $2k.
     
  25. Game7a1

    Game7a1 ?

    Reputations:
    529
    Messages:
    3,159
    Likes Received:
    1,040
    Trophy Points:
    231
  26. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    hmscott likes this.
  27. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    The upcoming P640RE should satisfy your requirements, though they are not for sale just yet.
     
    hmscott likes this.
  28. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    Wow, that is a beastly 13.3".

    I don't want it. But I want it just to say I have a 13" with a 970M and i7.
     
  29. Game7a1

    Game7a1 ?

    Reputations:
    529
    Messages:
    3,159
    Likes Received:
    1,040
    Trophy Points:
    231
    That's a 14" laptop, not a 13.3" laptop.
     
  30. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    Oh, who cares! It's better than anything from Alienware right now - in terms of being awesome.
     
  31. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    It's a 14", actually.

    Yes, it's better than anything Alienware has.
     
    TomJGX likes this.
  32. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Bruh you beat it. What is this madness?
     
  33. octiceps

    octiceps Nimrod

    Reputations:
    3,147
    Messages:
    9,944
    Likes Received:
    4,194
    Trophy Points:
    431
     
  34. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    It's... not a completely stupid question. But picture an old-fashioned switchboard with a limited number of lines and a lot of impatient callers wanting to get through. In the analogy, the outgoing lines are equivalent to the bus, and the callers are the different threads. On an x86 Intel board (just like on AMD so far), one lady at the switchboard working really, really fast might do the job. Two ladies working really well together might do it twice as fast. But adding more and more ladies won't be much use, since everything has to go through the limited number of lines. You would typically end up with two or three ladies working, and the rest trying to get their hands into the wire bundle once in a while. They might look really busy doing that, but it's not going to get the job done any faster.

    Political incorrectness aside - the point is that to get much use out of 4+ cores on the current standard architecture, you need an increasingly large layer of level 2 cache. And then you have to structure the code so that additional threads can be queued for execution in extremely short intervals - but not with so many threads that main-thread response drops. For server-type tasks this isn't really a problem, because you can usually live with threads starving a little once in a while. But if you rely on a maximum response time for graphics, input, AI logic, node generation, etc., then adding more and more threads across multiple cores (on an Intel/x86-type bus) isn't automatically going to make things quicker.
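
    The switchboard analogy reduces to a throughput cap: cores add compute, but everything funnels through one shared bus. A sketch with invented numbers:

```python
# Cores each want `per_core` GB/s of data, but the shared bus (the
# limited outgoing phone lines) delivers at most `bus` GB/s in total.
def fed_throughput(cores: int, per_core: float, bus: float) -> float:
    return min(cores * per_core, bus)

# With a 10 GB/s bus and cores that each want 4 GB/s, the third core
# is only partly fed and the fourth adds nothing at all:
for n in (1, 2, 3, 4):
    print(f"{n} cores: {fed_throughput(n, per_core=4.0, bus=10.0)} GB/s")
```

    Past the point where cores * per_core exceeds the bus, the extra "ladies" just wait on the wire bundle.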

    And yeah, it was known beforehand that thread response would drop on the AMD consoles if all eight cores were active. Microsoft made a huge thing a while back about "freeing up a core", for example - BS: they were reducing the response time on non-essential system tasks to make sure main threads weren't halted.

    Not that this makes more cores useless for any task - but we're not talking about assigning a single task to an extra core and thereby having more tasks executed at the same time. Instead, on an Intel/AMD/x86 bus, you're effectively tying multiple herrings together and putting them through a grinder. Then, as the pieces from all the tasks drop down (long before the entire fish is through), you mark the bits belonging to each herring, distribute the ground and labeled gobs, and process them across the available cores - before sending them back as extremely small fillets.

    That's basically how "multicore" works on current architecture. It's not actually doing anything in parallel at the high level. And that's a long, long way from having each processor assigned to process specific memory areas in system RAM within a set execution time, for example - which would allow actual asynchronous multitasking.

    And yes, there are a few commercially available systems that can/could do that - but they're expensive compared to the subdivision scheme in current x86-type implementations. I mean, understand that limiting the super-fast, expensive RAM with actual access to the CPU to the 4-6 MB of level 2 cache - that's a cost-saving measure. Back in the 90s it made a lot of sense, because it would cost millions to make a system with more than 512 KB of level 2 cache, and anything bigger would not fit on a chip on the processor die anyway. But if you looked for a design capable of locking a portion of system RAM and completing a prepared task within a clock cycle (i.e., you'd program algorithms in a relatively low-level language and essentially have programmable instruction sets) - there are commercially and practically viable options that can do this right now.

    In limited form, this is the kind of thing that has long been used to let fairly complex algorithms execute at very low clock speeds. Your mp3 player of yore that could process 320kbps mp3 on a tiny chip for 20 hours on a AAA battery, without causing scraping noises that hurt your ears - that works by reducing the algorithm at the hardware level to fit in a "long" instruction word and executing it over a relatively small buffer, even when the processor runs at as little as 4 MHz (and it still has clock cycles to spare most of the time). And since the response time you need is very predictable, and you always know what's going to be processed next, there's no issue with having relatively slow RAM. On the other hand, the actual benefit of putting this type of task on a much faster-clocked processor with shorter instructions is arguably not there, when it means more power consumption and you still don't actually need faster response. You could say it is more customisable - you can program in new codecs, and so on. But it's certainly ironic that the codecs programmed this way are typically based on proprietary code anyway, and the devices themselves are usually protected from tampering.

    I'm just making the point that if the task is not infinitely subdividable - in the sense that it can be split into the smallest instructions, and those smallest parts can be executed in any order - then multicore on Intel/AMD has certain limitations that can't be overcome. In the same way, many of the tasks you actually run are extremely predictable and have to be completed in sequence. That's why adding more cores doesn't yield infinitely increasing performance - on this architecture.
     
  35. Ethrem

    Ethrem Notebook Prophet

    Reputations:
    1,404
    Messages:
    6,706
    Likes Received:
    4,735
    Trophy Points:
    431
    Zen is going to surprise a lot of people... Not on mobile, but on desktop AMD can finally compete... They can do SMT with a lot of cores without the heat... But that's nothing new. AMD has dominated servers off and on since the 200MHz Pentium Pro... The question... Can AMD shove the K12 with Zen into mobile... OR shove a Zen hexa or octa into mobile and pair it with a graphics chip that can match the 980M? It won't happen. That's the problem. AMD has no choice but to spin off if Zen fails. I did look up US law... Intel wouldn't have a choice but to license x86 to any potential purchaser... If old-school AMD still exists, they'll convince Lisa Su to sell off the graphics... IF Zen falls flat... 40% IPC over Bulldozer would put AMD in line with Intel almost on the dot... But the second someone takes the AMD chip, runs it through its paces, and gets a verified benchmark on hwbot, they will sell in droves... If AMD does what they are sort of kind of not really but maybe promising with the new drivers... We *may* have a real duopoly... But keep in mind that's playing with fire... it's like nVidia and Intel merging... you can't have it both ways... nVidia has no x86 license... This could turn out to be a really interesting year for PCs... Or an even more interesting year for consoles (my bet). Servers with 32 cores with SMT... Intel can't manage that on their current process... AMD can... AMD isn't going anywhere, for anyone being hopeful... 64 threads... you would have to be burying your head in the sand not to realize that AMD not only got on par, they exceeded it. Of course, that's only IF the numbers pan out... but I'm not even a numbers person, and 40% IPC seems incredibly logical just from switching to SMT... People, including myself, don't realize that as usual, AMD has played their horrible cards in a terrific way... Imagine if Zen runs with, let's say, 2K/sec RAM... Xfire has already removed the other bottleneck... and if these things ship with 12MB and 16MB of cache... 
    This should be a really fun year...
     
  36. D2 Ultima

    D2 Ultima Livestreaming Master

    Reputations:
    4,335
    Messages:
    11,803
    Likes Received:
    9,751
    Trophy Points:
    931
    If nVidrosoft and BGAtel become a duopoly for CPUs and AMD is killed off and BGAtel starts making dGPUs, I'm going to find a new blasted hobby.

    I'll learn swordsmanship and teach it or something.
     
  37. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    Or having SMT at the instruction level, you mean, to make the processors switch between areas in L1 cache. It's a complete obfuscation, you know. True SMT architectures would have processors with independent memory and IO functionality. Intel and AMD architecture is about having as few inactive components in a transistor as technically possible, to save "metal", as someone I know put it. There's really no other overarching concern than that - and all the "technological advances" we're frequently briefed on exist inside that paradigm. And it's interesting, I'm not saying it isn't. But the architecture has certain limitations that can't be overcome.

    But on the other hand, according to for example Anandtech, the future lies in .. I suppose.. streamlining your human input to a form that allows "simultaneous instructions per cycle", or microcode-optimisation, to end up making parallelism obsolete.

    It's simple, you see - you just need magical proprietary assemblers (or, you know, just hardcode reduced static code to the instruction layer), and a million "common usage tasks" will one day multitask just as efficiently as when executing prime95 code. Believe!

    Anyway, you're right that Intel doesn't own x86 outright. But the terms for licensing out x86-based CPUs are limited.
    I guess that's one point of view. And I'm sure it would make sense for AMD to focus on single-core optimisation to match Intel on the one hand, and then combine that with graphics modules on the same die, connected by a faster pipeline, and so on - since that would bring them up to speed in the benchmarks, and possibly give them an advantage in graphics (and "compute", and so on).

    But it's not going to really do anything new for us from a technical perspective.