Do true blue 6-core laptop CPUs exist? Like a 6-core mobile i7?
-
No, but there are mobile workstations and servers with 6+ core desktop i7s and Xeons.
-
Nope, unless Clevo has desktop replacement laptops with desktop 6-core CPUs.
-
Not sure what you mean by "true blue", but:
Intel i7 quad-cores (MQ, HQ) have hyperthreading, so they support two threads per core, or 8 logical threads.
Samsung's Chromebook 2 uses its Exynos 5 Octa ARM chip (big.LITTLE, in a 4 big + 4 LITTLE core configuration).
Both of those are 8, not 6, so not "true blue" in that sense.
AMD counts "compute cores"
3 core CPU + 2 core GPU = 5 compute cores
4 core CPU + 4 core GPU = 8 compute cores
4 core CPU + 6 core GPU = 10 compute cores
etc.
But if you don't care about the GPU cores, then those may not be "true blue" either.
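(Side note: the logical-thread count is easy to check from code. A minimal C++ sketch; hardware_concurrency() is only a hint and may even return 0 on some systems:)

```cpp
#include <iostream>
#include <thread>

int main() {
    // Reports *logical* processors: a quad-core i7 MQ/HQ with
    // Hyper-Threading prints 8 here, not 4. Returns 0 if the
    // count can't be determined.
    std::cout << "Logical processors: "
              << std::thread::hardware_concurrency() << '\n';
}
```

Counting physical cores takes OS-specific calls (e.g. GetLogicalProcessorInformation on Windows), which this sketch doesn't attempt. -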
King of Interns Simply a laptop enthusiast
True desktop replacement laptops like the D900F have featured full desktop CPU support, including 6-core CPUs, since before 2010. The D900F is that old and supports up to the 990X and 980X, for example.
So yes, 6 cores in a laptop exist, but I don't think any true laptop CPUs with more than 4 physical cores have been released yet. -
Karamazovmm Overthinking? Always!
There are no laptop CPUs that come with 6+ cores.
There are, however, notebooks that come with desktop CPUs, and those can have 6+ cores. Basically only Clevo makes those. -
Those extra cores don't add much for routine light activities (surfing, watching movies, office productivity). Where they'll really shine is video encoding, games that scale beyond 4 cores (BF4 and Crysis 3 come to mind), and other heavily multi-threaded tasks.
My 4930K desktop does feel snappier but only because it's overclocked to 4.5GHz on all 6 cores as opposed to my 4900MQ that's running stock (3.6-3.8GHz). Drop it back to stock and it feels about the same. -
-
Yeah, I should've added that core speed is more important than having 2 extra cores in most situations, unless you know you can utilize those extra cores.
That said, having both will guarantee maximum performance. -
Charles P. Jefferies Lead Moderator Super Moderator
We reviewed some of those Clevos with the six-core CPUs. Here's the most recent:
AVADirect Clevo P570WM Review
As noted they are mobile but not really: 2.5" thick, 13.8 lbs, and a 53-minute battery life. And don't forget the humongous dual 330W power supplies (I believe they can operate off one depending on the configuration; our review unit was likely supplied with two due to the SLI'd Nvidia graphics cards). The whole package ends up being about 20 pounds. I always wanted one despite how ridiculous they are. lol. -
Meaker@Sager Company Representative
They are fun.
No power limits, and you can do what you want with it.
Plus, by clamping the GPUs to P8, you can extend the battery life a fair amount.
Definitely heavy though! -
-
No true blue ones, only white ones.
-
The thing is that outside of microcode reduction (which doesn't really exist between "cores" as of yet anyway, unless a shared store already contains the result of a new request), more cores can only be used effectively to run independent contexts. Virtual servers, that sort of thing. Outside of that, parallelism on an x86 instruction set, or any instruction set that isn't based on an explicitly parallel assembly language, isn't going to benefit from an increased number of cores.
Instead, what you actually run into if you try a few tests is that the overhead from context shifts when maintaining many active threads (in spite of the many "cores", never mind that response times toward the bus become unpredictable) is higher than from running a scheduler programmatically. It's not useless to trust in high-level optimisations and creating threads, but you can't expect more cores to increase calculation performance unless you're running independent contexts (and they don't draw too much on common resources and bandwidth, such as memory and IO, bus width, etc.).
And.. that's not really a surprise, because that's how the instruction set is made to function, and that's what the platform and the current industry standard were meant to achieve.
...just making the point that there's not going to be any point whatsoever in getting more cores, or in having microcores with their own instruction set acceleration perhaps, or in having OpenCL "scale" over GPU units and peripheral devices (basically the same thing as those microcores), because the response time through the current standard for buses isn't fast enough or broad enough.
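If you want to poke at that scaling claim, here's a rough C++ sketch (my own toy, not a proper benchmark): it splits a memory-bound sum across N threads, and on most machines the speedup flattens well before the thread count does, because every worker is fighting over the same memory bus.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large buffer with nthreads workers. The work divides into
// perfectly independent chunks, yet every worker pulls from the same
// memory bus, so the speedup flattens before the core count runs out.
double timed_sum_ms(const std::vector<uint64_t>& data, unsigned nthreads) {
    std::vector<uint64_t> partial(nthreads, 0);
    std::vector<std::thread> workers;
    const size_t chunk = data.size() / nthreads;
    auto t0 = std::chrono::steady_clock::now();
    for (unsigned i = 0; i < nthreads; ++i) {
        workers.emplace_back([&, i] {
            size_t begin = i * chunk;
            size_t end = (i + 1 == nthreads) ? data.size() : begin + chunk;
            uint64_t s = 0;
            for (size_t j = begin; j < end; ++j) s += data[j];
            partial[i] = s; // distinct slot per thread, no locking needed
        });
    }
    for (auto& w : workers) w.join();
    auto t1 = std::chrono::steady_clock::now();
    // Use the result so the compiler can't discard the whole thing.
    volatile uint64_t sink =
        std::accumulate(partial.begin(), partial.end(), uint64_t{0});
    (void)sink;
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::vector<uint64_t> data(1u << 26, 1); // ~512 MB, far bigger than cache
    for (unsigned n : {1u, 2u, 4u, 6u, 8u})
        std::cout << n << " thread(s): " << timed_sum_ms(data, n) << " ms\n";
}
```

Compile with -O2 and watch the 4 to 6 to 8 thread steps buy less and less.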
*shrug* -
Just thought of something: how would a true 8-core with HT off compare to a quad-core with HT on?
-
-
The 5960X would win for sure, but at a much higher TDP ofc. HT can't make up the massive gulf in core count and cache sizes.
In the most optimized applications, Hyper-Threading can improve performance by almost 30%, which is definitely a net win considering the marginal increases in die area, heat, and power consumption it brings, hence its popularity on Intel chips for what has been a very long time now. -
Video encoding, on the other hand, did perform noticeably better with hyperthreading. -
Yeah, I meant the 5960X vs something like the 4770K.
Actually I should probably clarify something. What got me thinking was: if an application is not optimized for multithreading and doesn't scale beyond 4 physical cores, how would it perform on a true 8-physical-core processor compared to one with 4 physical cores but 8 logical cores? -
Interesting to note, 4 cores w/HT is still not as fast as 6 cores w/o HT, let alone 8. -
-
<del>It makes sense on paper, but I'm curious why that would still be the case for legacy applications that can't even multithread properly? (or would it still hold true more cores = better in that scenario?)</del>
ninja'd
and good to know I'm not going crazy lol -
Single-threaded app = no performance benefit to more cores or HT, only more clock speed.
-
Eventually the simulation won't scale too well, but that only happens at cluster levels where you can schedule dozens of cores. -
So it's one of those programs that responds favorably to more physical cores rather than more efficient thread scheduling (HT).
-
EDIT: looking a little more into it, it seems to be the same for other numerical methods like finite volumes. To be honest, I've used finite elements before, but it's not really my domain; I'll ask friends who are doing their PhDs on CFD when I get the chance.
Seems like HT may help in some cases, but it's not easy to determine since a lot of factors come into play. If you go nuts with the number of cores, you'll end up with communication between the CPUs becoming a bottleneck; then there's memory access: depending on how many memory channels you have, you could end up I/O bound on memory, and so on. -
I feel like I should install Spartan and fire it up to see if it's able to take advantage of 6 cores.
-
Also remember that in some situations HTT can actually degrade performance. -
The reason for that is perfectly demonstrated with a video decode over increased threads, or with anything that really consists of the same instruction resolution running over and over on different data. What happens there is that the context changes are cheap, and when each of the threads runs in turn, the actual instruction to run on memory is already completed, and the new threads take advantage of that.
If, on the other hand, you run threads that require completely different instruction resolution every time, then the context changes are expensive, and running things "concurrently" isn't really giving you many benefits on the actual computation element. Instead, we're really talking about being able to prepare for the calculation runs asynchronously, which is useful if the math operation on the core that's reduced from a more complex instruction set is repeated over and over again.
But the thing is that because the cores on an Intel platform don't actually have independent access to main memory, you end up waiting for the main thread to complete something anyway. It's completely useless if you want to maintain a main thread and asynchronously run different instruction sets on working memory.
Basically, you can't scale over more Intel cores unless you have operations that can be divided into separate and independent tasks that are also small enough to have a short preparation stage.
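To make that concrete, a toy contrast in C++ (my own illustration, nobody's production code): the first function is a dependent chain where every step needs the previous result, so no number of cores can help it; the second splits the same amount of work into fully independent chains, which is the only shape that actually scales across cores. The constants are just an arbitrary LCG so the compiler can't fold the loop away.

```cpp
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

constexpr uint64_t N = 100'000'000;

// Dependent chain: iteration i needs the result of iteration i-1,
// so the steps can never overlap, no matter how many cores you own.
uint64_t dependent_chain() {
    uint64_t x = 1;
    for (uint64_t i = 0; i < N; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    return x;
}

// Independent tasks: each thread owns its own chain. Same total work,
// but the chains never talk to each other, so they genuinely run
// concurrently on separate cores.
uint64_t independent_chains(unsigned nthreads) {
    std::vector<uint64_t> out(nthreads);
    std::vector<std::thread> ts;
    for (unsigned t = 0; t < nthreads; ++t)
        ts.emplace_back([&out, t, nthreads] {
            uint64_t x = t + 1;
            for (uint64_t i = 0; i < N / nthreads; ++i)
                x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            out[t] = x;
        });
    for (auto& th : ts) th.join();
    uint64_t mix = 0;
    for (uint64_t v : out) mix ^= v;
    return mix;
}

int main() {
    std::cout << dependent_chain() << '\n';     // time this: the serial floor
    std::cout << independent_chains(6) << '\n'; // time this: roughly 6x less wall time
}
```

Wrap each call in a timer (or run each variant under the shell's time command) and the difference is exactly the "separate and independent tasks" requirement above. -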
HT uses unused parts of the core; sometimes time-sensitive or priority calls get handed to the hyper-thread instead of an actual core and either have to work at diminished capacity OR have to wait for the current computation to finish. Some games and apps work better with HT turned off.
I was more interested in a six-core due to VMing and multitasking. -
-
Bottom line, there are no 6-core mobile CPUs. Your only option is a six-core desktop CPU in the Clevo beast.
-
Which sadly has been discontinued.
-
edit: maybe not... see here: http://forum.techinferno.com/clevo/7721-imagine-new-clevo-models-5.html#post108283 -
But since reduced code like that tends to end up being identical instructions, it's actually rare that hyperthreading (as in duplicating the low-level registers and sharing two instances over the same core) doesn't give you free execution cycles, or that there isn't room on the computation elements while one instruction queue is waiting for another. Basically, the processor can make a couple of extra identical runs for the other duplicated memory area before continuing. And.. counterintuitively, you're generally better off writing code that has a wide range of different calls rather than identical ones, because that mix at the assembly level is more likely to yield opportunities for the machine-level interpreter to steal cycles. It's a form of instruction-level parallelism, I think the term is, and that is really where most improvements have been made since the Pentium was introduced. They employ a limited form of SIMD in a sense, transparently on the hardware layer, and keeping it on that level is what keeps the overhead acceptable. Programming your own instructions is not an option on this architecture.. The closest we're getting to that at the moment is OpenCL over a graphics card's execution units: you write your own low-level code specifically designed to have simple maths executed in several semi-independent parts over a memory area. Cheating a curve with some interpolation, that sort of thing.
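A crude single-threaded C++ illustration of that cycle-stealing (my own toy, heavily compiler- and CPU-dependent, so build with -O2 and expect results to vary): summing with one accumulator forms a serial dependency chain, while four independent accumulators hand the out-of-order hardware parallel work to fill its execution ports with.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

// One accumulator: every add waits on the previous add (a serial
// dependency chain), so the core's spare execution ports sit idle.
uint64_t sum1(const std::vector<uint64_t>& v) {
    uint64_t s = 0;
    for (uint64_t x : v) s += x;
    return s;
}

// Four independent accumulators: the adds don't depend on each other,
// so the out-of-order core can retire several per cycle. Same math,
// more instruction-level parallelism.
uint64_t sum4(const std::vector<uint64_t>& v) {
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    const size_t n = v.size() & ~size_t{3};
    for (size_t i = 0; i < n; i += 4) {
        s0 += v[i];     s1 += v[i + 1];
        s2 += v[i + 2]; s3 += v[i + 3];
    }
    for (size_t i = n; i < v.size(); ++i) s0 += v[i];
    return s0 + s1 + s2 + s3;
}

int main() {
    // 256 KB: stays cache-resident, so the dependency chain (not the
    // memory bus) is the bottleneck being measured.
    std::vector<uint64_t> v(1u << 15, 3);
    for (auto fn : {sum1, sum4}) {
        auto t0 = std::chrono::steady_clock::now();
        uint64_t r = 0;
        for (int rep = 0; rep < 4000; ++rep) r ^= fn(v);
        auto t1 = std::chrono::steady_clock::now();
        std::cout << r << " in "
                  << std::chrono::duration<double, std::milli>(t1 - t0).count()
                  << " ms\n";
    }
}
```

(An aggressive auto-vectorizer can narrow the gap by rewriting sum1 itself, which is sort of the point: the hardware and compiler are hunting for this parallelism whether you write it or not.)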
Cores on the level above that only exchange information via cache (level 3 cache; levels 1 and 2 are per-core). But that's how the cores are able to support core-level parallelism, in the sense that one core can maintain several hardware-level threads (pointing back to the crunched assembly code: one function call could spawn several hardware threads). And the thing is that outside the hardware-level optimisations, cores don't usually share execution time on the same thread (to speed up the execution of one single hardware thread). So that's the second great advance: that a single core can switch between hardware threads with very little overhead.
Instead, multiple cores on the same processor are what supports execution of different OS-level threads, which again is usually handled most effectively by the compiler. If you program threads and assign them to a specific core, and expect something to execute in time for the main thread, etc., that's always risky, because you won't necessarily have complete control over what else runs on the core, and the execution time could be variable anyway (or turn out to be more effectively handled by the computer and the compiler). So as with instruction-level parallelism, the goal in creating threads is really to create tasks that can easily take advantage of the OS scheduler. One common way games programmers do it is to have one thread for graphics and one for core logic, for example, as sketched below. That's.. really it. Optimising one thread to fetch and place information in the graphics card's array (then optimising that code independently, perhaps phasing in physics or particular routines that scale well over GPU elements), and having another to do maintenance and updates in memory otherwise.
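A minimal sketch of that graphics/logic split (a toy shape, not any engine's actual code): one thread owns the simulation state, the other owns the drawing, and they only meet for a brief snapshot handoff, so each can sit on its own core.

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

// Toy "engine" state. The logic thread updates it; the render thread
// copies a snapshot. The mutex guards only the brief handoff, so the
// two threads spend most of their time running truly in parallel.
struct World { double player_x = 0.0; long tick = 0; };

std::mutex handoff;
World shared_world;
std::atomic<bool> running{true};

void logic_thread() {
    World local;
    while (running) {
        local.player_x += 0.016; // pretend physics step
        ++local.tick;
        { std::lock_guard<std::mutex> lk(handoff); shared_world = local; }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

void render_thread() {
    while (running) {
        World snapshot;
        { std::lock_guard<std::mutex> lk(handoff); snapshot = shared_world; }
        if (snapshot.tick % 60 == 0) // pretend draw call
            std::cout << "frame at tick " << snapshot.tick
                      << ", x = " << snapshot.player_x << '\n';
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

int main() {
    std::thread logic(logic_thread), render(render_thread);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    running = false;
    logic.join();
    render.join();
}
```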
And as said, the problem here is that increasing the number of cores on the same independent processing element has a nasty tendency to hit diminishing returns very quickly, in particular for real-time computation tasks: whether games or graphics-intense programs, all the way down to just dealing with input and scrolling gracefully in a graphical UI. You know, stuff like drawing a perfect curve in real time incurs the wrath of the computer gods. This is kind of a problem, and it's what hopefully will force the next.. 50 years or so to end up with companies dusting off the SIMD designs from the '70s again, and force ahead a common industry standard for an explicitly parallel assembly language: one that scales over different processing elements with their own registers, and where those processing elements share execution memory of a decent size, capable of storing the program logic.
This.. doesn't require all that much, though. Size isn't as much of a concern as many think; you could have slower external RAM on top of it. The tech exists, but it requires a different approach to the architecture design, and DDR3/DDR4 and so on aren't really sufficient. You'd need the ability to lock and write concurrently to different areas at the same time from the different processing elements to be at all able to exploit multicore parallelism, in the sense of having several processor elements execute instructions concurrently on the same memory area to complete that software "OS"-level thread.
That, contrary to the impression people very often seem to have, isn't actually what occurs on a hexacore Intel or AMD processor, where shifting context between processors is, practically by definition, less efficient than using one core, because you have only one core executing useful computations toward the product at any one time. I.e., adding another core to the same thread doesn't increase computation potential, only overhead during context shifts.
Funny thing, yeah? -
Here's an old post from a few years back regarding HTT: 8 Physical cores vs 4 Physical 8 Threads? - Page 3 - AnandTech Forums. Unfortunately some of the screenshots have been killed by ImageShack :/
Seems even the smartphones nowadays are coming out with eight cores. HTC Desire 820, Desire 820q, and Desire 816G Launched in India | NDTV Gadgets -
..ARM chipsets have cores with custom instruction sets, though, set by the manufacturer as per order. The "graphics card" on several ARM units is actually an ARM core with a custom instruction set, for example.
But I guess that's the way we're going: cheap "special purpose" cores for each typical common task the computer performs, each operating independently, which in turn cheapens manufacturing and makes factory-line reuse easier. Nintendo from the '80s must be laughing pretty hard at that one.. -
-
Jet engine sound is what happens. There are a ton of fans too, 3 or 4 instead of two, and the heatsinks are monstrous.
-
6 core laptop CPUs?
Discussion in 'Hardware Components and Aftermarket Upgrades' started by cdoublejj, Sep 28, 2014.