Okay, but I still don't see how your prescribed method of testing would prove the results you want to see.
I am 100% sure that there are other factors that affect the CPU - the Z170 chipset, for one. It doesn't matter what they are individually. The end result is the only thing that matters. For me, that end result is productivity, which is itself a culmination of the different processes and interactions that get me to the final goal and reason for using a tool such as a computer in the first place.
There are many examples where the better theoretical value does not contribute to, or worse, takes away from the actual effect we are after. Even in the articles I linked you can see that in the graphs for the different 'scores' and setups they put through their paces. The results from real-world workloads do not bear out the theoretically expected gains - and not just by a few % - they are the opposite of what they 'should' be.
Working by process of elimination is great for troubleshooting, but not so adaptable to seeing the overall effect of a new product/component/platform.
If I am assessing a new platform for my workflows, I do not care about the differences, nor do I care about making them equal in any way except for ensuring the workflow and load are equal (at least between platforms). The differences are what I hope will make my work go faster, and I do not want to test to predict that... I can and do test for it directly.
Any problem discovered will usually not be user addressable in any event. We simply decide if we want to use a given solution or not - with the product as offered.
If we were comparing a single component (like just the cpu - and not the cpu + chipset + M/B as in this instance), I agree that keeping things as similar as possible is desirable for an accurate comparison. But doing that between platforms is kind of funny, I think.
Apples to apples only applies for tech at the same generational level. When gens jump - we should let them and see how far they can fly.
See:
http://www.tomsitpro.com/articles/intel-3d-nand-p3608-p3520,1-2792.html
Now we can see where all the Skylake DMI 3.0 connections can be used. Along with the updated IRST tech driver too.
Making older systems work harder is commendable (if the negatives can be minimized). Staying current with tech platforms as much as possible and adapting to take advantage of it fully is still the best move forward. IPC is just a small part of the overall improvements a new platform makes.
See:
http://www.eteknix.com/intel-skylake-i7-6700k-benchmarks-leaked/
See:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-4790K+@+4.00GHz&id=2275
If we can believe those PassMark numbers for the Skylake i7-6700K in the first link above, it is ~11% faster overall than the highest/latest i7-4790K. If I were able to translate that directly to productivity, it would pay for itself in the first few hours of ownership.
Now, all you have to do is find a test of the game (or games) that would make you want to change to this platform or not. IPC scores in an apples-to-apples comparison? Nice in theory, but not relevant to the final use of the product in its default/natural state.
tilleroftheearth Wisdom listens quietly...
Well, see, that's the thing. If I can get a general trend going for IPC (physics-based tests, etc.) and then in some places I see performance take a dive, then I know it's the chipset - if the RAM, the GPU, etc. are all equal.
You're not understanding the point of apples-to-apples comparisons. They show what the differences are.
If I tossed one set of random hardware in a system and another set in a second system, and then I suddenly got, say, less performance in Sony Vegas than I did on my old Ivy Bridge machine - with the RAM timings/speed different, as well as the architecture/CPU and the GPU - then how would I know what to look for?
Determining the benefits and/or downsides of architecture/IPC/RAM/etc. needs to be taken one step at a time. Once you've gotten one bit down to a predictable science, then you can test other things. For example, if I know that, all other things equal, Skylake's chipset is a problem for... unzipping zipped files or something, then if I am testing RAM speed/latency differences, I'll be able to account for it there. "Oh, unzipping files is even slower when I relax RAM timings, so it looks like this is one area Skylake is bad in," etc. So now people can know that tweaking RAM is important for <insert affected areas> or whatever.
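That one-variable-at-a-time idea can be sketched in a few lines - a minimal illustration with hypothetical configs and timings (not real measurements; the config fields and numbers are made up for the example):

```python
import time

def time_workload(workload, runs=3):
    """Time a workload callable, keeping the best of several runs
    to reduce noise (a common benchmarking convention)."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

def isolate_factor(baseline_cfg, test_cfg, baseline_time, test_time):
    """One-variable-at-a-time: only a single config field may differ,
    so any timing delta can be attributed to that field."""
    changed = [k for k in baseline_cfg if baseline_cfg[k] != test_cfg[k]]
    assert len(changed) == 1, "more than one variable changed; result is ambiguous"
    delta_pct = (test_time - baseline_time) / baseline_time * 100
    return changed[0], delta_pct

# Hypothetical configs: identical except for RAM timings.
base = {"cpu": "i7-6700K", "chipset": "Z170", "ram": "DDR4-2133 CL15", "gpu": "iGPU"}
test = {"cpu": "i7-6700K", "chipset": "Z170", "ram": "DDR4-2133 CL17", "gpu": "iGPU"}

factor, delta = isolate_factor(base, test, baseline_time=10.0, test_time=10.8)
print(f"{factor} changed -> {delta:+.1f}% runtime")  # ram changed -> +8.0% runtime
```

The assert is the whole point: if two fields differ between the runs, the comparison can't attribute the delta to anything, which is the apples-to-apples argument in code form.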
Yeah, not random hardware, agreed. Lol...
What works and what doesn't for my workflows has been proven and updated over the last few decades or so. My workflow, in its complete version (including installing the O/S and the programs from scratch), covers almost all performance aspects of a modern computer (except for extreme game-level GPUs, which I have no use or desire for (heat/power/noise)). Given the above, I do not need detailed stats at each level of the pipe to know if an upgrade is an upgrade. I have knowingly given up certain aspects (snappiness) when moving platforms, O/S's and single components at certain points, but not if overall productivity was affected.
I am quite positive that if the chipset (for example) were actually at fault, not only could you not fix it, you couldn't pinpoint it either. The platform masks everything.
Sure, depending on what and how much it affects the overall performance, upgrading or tweaking another component like RAM might offset that built-in fault. But that is not a fix; it's a kludge. And in the end, it is what it is: that platform needs 'x' RAM settings to be viable... else; fuhgeddaboudit.
I am glad there are people like you in the world who care about such minutiae (so I don't have to), but I tend to think that Intel cares much, much more. Otherwise, they wouldn't spend billions and billions to bring us something new every few months.
This isn't a predictable science, like I mentioned before. It is a world where if you let someone rob you blind, they will (while you're still thanking them and apologizing for bumping into them).
If your most strenuous workflow for your hardware is a game or games, let those be the judge of what you buy - including power used during those games, idle-load considerations, the noise and cooling solutions needed vs. the risks they entail, and any other factor that is directly relevant when you're using that system.
Knowing that DDR4-2133 RAM is to be avoided at all costs on a Skylake platform is not something that needs further testing. AnandTech hinted at that back in February 2015, even on an older platform - Haswell-E.
See:
http://www.anandtech.com/show/8959/...3200-with-gskill-corsair-adata-and-crucial/10
But that becomes doubly true on the latest Skylake platform if it is to show a performance increase vs. any older-gen platform we compare it to (see the TweakTown links previously).
Btw, relaxing RAM timings is bad for compression/decompression routines no matter what platform is compared.
Using synthetic tests (that occasionally point us the wrong way...) and trying to combine or summarize them into one useful recommendation is kind of backwards.
Test for the result you want instead.
Just like a marathon athlete trains by running marathons (they are not judged by how many sit-ups or leg presses they can do at what weight...), simply test for the scenario you want increased performance in.
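That "run the marathon to test for the marathon" approach is easy to script - a rough sketch, with a hypothetical stand-in function where the real job (a video export, a batch convert, whatever you actually do) would go:

```python
import statistics
import time

def benchmark_real_workflow(workflow, runs=5):
    """Run the actual end-to-end workflow several times and report the
    median wall-clock time - the number that maps to productivity."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workflow()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Hypothetical stand-in for a real job; replace with your own workload.
def my_workflow():
    sum(i * i for i in range(200_000))

median_time = benchmark_real_workflow(my_workflow)
print(f"median run time: {median_time:.4f}s")
# Run the identical script on the candidate platform and compare medians:
# speedup = old_median / new_median
```

Median rather than mean keeps one background-task hiccup from skewing the comparison between platforms.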
Mobile Skylake launching September 2015
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Cloudfire, May 20, 2015.