The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.

  1. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
That's what I like to think too. However, it seems that's not the case.
     
  2. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
    The only thing Intel can brag about regarding mainstream. The rest is just sad.
     
    ole!!! and ajc9988 like this.
  3. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
Don't put the cart before the horse, there. The 8700K has only leaked; Intel haven't officially said anything. If the 8700K were to achieve 5GHz, it would draw as much power as Ryzen Threadripper, and that would just be sad.
     
    temp00876, hmscott and Papusan like this.
  4. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,701
    Messages:
    29,840
    Likes Received:
    59,615
    Trophy Points:
    931
I expect high clocks on Intel will draw a lot of power. The sad part for me is that 6 cores is the max you can get from Intel in laptops :( A 6-core limit for Intel mainstream, in desktops or laptops, is a big joke!!
     
    hmscott likes this.
  5. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
This is empirically false. Kaby was 14nm+, as was Skylake-X. Both BW and Skylake used the same 14nm process, but after using it for a while, and with the changes in architecture to better use the process, you see some change between the two.
    http://www.anandtech.com/show/10959...ration-kaby-lake-i7-7700k-i5-7600k-i3-7350k/2
    http://wccftech.com/intel-core-x-skylake-x-kaby-lake-x-cpu-review-roundup-x299-platform/
    https://www.overclock3d.net/news/cp...skylake-x_and_kaby_lake-x_series_processors/1

I just haven't pulled out my other PC and don't feel like going through the hassle of comparing settings and numbers from the Eurogamer site yet (sorry @ole!!! ). (Edit: they don't allow ad blockers, and I run multiple ones across different programs and firewalls, which makes it a hassle to know which one is blocking the site; I can allow one, and then the others block it.) I'm trusting they ran the benches at 1080p. Here is the article giving the Hardware Unboxed information in written form: https://www.techspot.com/review/1450-core-i7-vs-ryzen-5-hexa-core/ . What I planned to compare is the RAM and its speeds, the graphics card used (when given), the driver used, the timings, and the power draw (looking for signs of throttling, like you mentioned and like we saw with some X299 reviews, where proper overclocking hit the VRM throttle without it being noticed at first; note that if you don't set your system up with those settings, it takes more to reach the throttle), plus anything else that popped into my mind to explain the differences. Beyond that, the possibilities are that one of the two erred in procedure (happens), that it comes down to differences in settings (happens, as people overclock in different ways, which is what makes wringing out the last bit of performance an art), the silicon lottery, etc. I do still intend to dig into the article you posted, just haven't yet.
     
    Last edited: Aug 5, 2017
  6. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
About 14nm: IMHO, regardless of what Intel wishes to call them, it should be 14nm Broadwell, 14nm+ Skylake, 14nm++ Kaby Lake, 14nm+++ CFL. Reason being, Skylake clearly has better silicon quality than Broadwell chips, and Kaby Lake is another level above Skylake, though not by much. So yeah, this is literally the fourth year on 14nm and its third optimization, lmao, so we'd better expect 6 cores at 5GHz.


As for the Hardware Unboxed 30-game review results (Skylake-X at 4.7GHz vs Ryzen at 4GHz): I've asked a few others on HardOCP, overclock.net, and a few other places, and many agree with me that the results are too good to be true. The 7700K results on Hardware Unboxed for a few specific titles show way fewer FPS than other review sites, which ultimately might come down to the in-game settings used; 1080p is only a resolution, not the full detail settings of a game.

With that being said, more agreed it was GPU-bound work rather than throttling. One did mention that the board Hardware Unboxed used had some issue.
     
    hmscott likes this.
  7. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
He has tested and validated with multiple boards. The simple answer is that the new mesh design is not good; many people have also been reporting poor gaming performance on X299. As for the 7700K, he tests it at stock, not 5GHz like most outlets, since the people who buy it and actually run 5GHz are few and far between. Unless you have any proof of wrongdoing, his benches are valid.
     
    hmscott likes this.
  8. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
Also note that many other tech media do agree with Steve (TechSpot), notably Brian from Tech YES City, and I think I've also seen Paul post compliments on his videos. In any case, he does 10-30 runs per benchmark, so there is little margin for error. The fact of the matter is that with TP(HU) being a small and independent outlet, I'd have a VERY hard time believing their results are skewed, ESPECIALLY given Intel's LONG history of paying off and manipulating everyone.
     
    hmscott and jaug1337 like this.
  9. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
EPYC is a very nice platform. It just needs to be refined and to work out some of its problems. I honestly see it as a better alternative to Xeons for a lot of workloads at this point.
     
    ole!!! likes this.
  10. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
I don't think the idea that people are being paid to shill for Intel is the right train of thought here. Going by that logic, Tom's Hardware, AnandTech, and PCPer would show similar results: gaming at 1440p literally makes the CPU difference not matter much at all, and this was prior to any gaming optimization when Ryzen first came out, except for a very few titles like Dota and AOTS, which later got optimization for Ryzen.

At 1080p the differences were quite a bit bigger, and the only real way to cover them is with faster RAM. BIOS updates at that point had little to do with FPS, unless there are other gaming optimizations that I missed.

So what it really comes down to, IMHO, is that some settings need to be verified and re-tested by other reviewers, because the majority of them are seeing different results. One should be able to tell right away. This is what I'd call not believing everything we see.
     
  11. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
At one point in time Intel were paying Dell 1 billion per quarter not to sell AMD. Intel are several times larger now. Do you honestly think it's impossible to pay someone off?
     
    Papusan, ajc9988, ghegde and 2 others like this.
  12. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
What does that have to do with Hardware Unboxed's video review possibly having issues? Are you trying to say Intel paid AnandTech, PCPer, Tom's Hardware, and probably 20 other reviewers?

As @tilleroftheearth has said, piling on barely relevant facts doesn't change what we were initially talking about here. As you can see, I have been making an effort to find out the issue at hand and really figure this out, though debating this with you really isn't helping me figure it out, lol.
     
    tilleroftheearth likes this.
  13. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    Well, Jayz already said that Intel started sending him free stuff without him asking for anything. I wouldn't be surprised if Intel brokered some agreement for positive coverage.
     
    jaug1337 and hmscott like this.
  14. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
Jay doesn't represent the 30 other reviewers out there. No doubt some get free samples, but to say all 30 are untruthful is just as bad as claiming the only pro-Ryzen video is untruthful; your argument makes little sense. Now let's drop this paid-shilling / anti-competition angle. Yes, anti-competitive behavior from Intel caused AMD to lose a lot of market share, and no doubt they'd be in a much better position now had that not happened. But we weren't talking about what Ryzen's performance would be had Intel not done that; we're talking about the performance RIGHT NOW, with all that's happened in the past.
     
    tilleroftheearth likes this.
  15. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    Intel have a 20-year-long track record of bribing and manipulating outlets. Sigh, believe what you will I guess. It doesn't change the fact that the Mesh design is a colossal failure...
     
  16. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
The Ryzen 1600X at $220-260 gives X amount of performance; priced at $500, the 1600X would STILL give X amount of performance. It simply won't matter. So talking about anti-competition OR pricing OR any other barely relevant fact changes nothing when we are talking about performance. @don_svetlio @hmscott @jaug1337

I'm hoping to get agreement from some more reasonable individuals, such as @tilleroftheearth @ajc9988.
Our main concern was: Skylake-X at 4.7GHz on par with, or below, Ryzen at 4.0GHz in gaming performance, both at 6 cores, paired with some 7700K benchmarks at 4.9GHz.

I did mention many times that gaming is not a primary concern for me, yes? So why did I bother looking into this video and seem to care so much about it? Because I know for a fact that CB15 is well optimized and games aren't, which also means that the majority of the software I use, and other software out there, is just as likely to be unoptimized as games. So if games ARE being affected by the new cache rework / mesh redesign, then chances are the software I use will also be affected, regardless of what CB15 results show. Does my train of thought make sense? Let's move on.

Originally, when Ryzen first came out months ago, the first reviews pitted Ryzen at 4GHz against the 7700K at 5GHz at 1080p and showed poorer results for Ryzen due to little gaming optimization. That made sense: a newer arch with less IPC than the 7700K and 25% less clock, and at 1080p the frame rate was 30%, sometimes even 40%, lower than the 7700K. However, those same reviews showed that at 1440p the FPS difference between the two was much smaller, down to only about 10%, the reason being the GPU becoming the bottleneck at the higher resolution and game settings. This also relates to what AdoredTV explained back then, and it was a hot topic as well: Intel's 4-core CPUs were constantly at 100% usage while AMD's 8-core was only at 30-40% usage, meaning Intel's CPUs were being taken full advantage of.

So before we talk about the Hardware Unboxed review, we've got to get this straight. IPC has barely improved on Intel's end, and the only way to crank out performance is overclocking frequency. A given frequency increase doesn't net you 100% of that increase in performance, but it gives a decent 80-90% of it in most software; there is no diminishing return here, EVEN if the software is not optimized. A diminishing return would be running software coded for 2 cores on an 8-core CPU, where the extra cores give no return at all, but cranking up the frequency helps almost ALL software out there. Basically, a diminishing return is caused by a bottleneck somewhere, and we have to find that reason.

So the Hardware Unboxed video doesn't make sense, because the rule that overclocking frequency mostly translates into 80-90% of that gain in performance doesn't hold true here. And the 7700K results in his video are so much smaller than when the first Ryzen reviews came out months ago.

It comes down to these things I guessed were happening in his video; I've added a few more factors:
- Intel CPU overclock throttling on Skylake-X
- GPU-bound settings
- Mesh/cache redesign added latency <---- @don_svetlio
- Ryzen BIOS / gaming optimization
- Mobo/BIOS issue for Skylake-X

@don_svetlio, what you claimed is #3, the absurd latency causing this issue; that is quoted from your comments over the past few days.

Over on reddit (www.reddit.com/r/intel/comments/6rw549/hardware_unboxeds_7800x_review_is_flawed_and/), this comment on frequencies made it clear:
    • Overclocking the 1600 by 25% increased performance by 24%.
    • Overclocking the 7700k by 17% increased performance by only 2%.
    • Overclocking the 7800X by 34% increased performance by only 6%.
Think about it for even a little bit and it becomes very clear something doesn't make sense here. First off, frequency should relate to performance in software; games, on the other hand, use the GPU more, so you need to overclock the GPU, not so much the CPU. Ryzen's good jump in performance from overclocking between then and now might be due to BIOS updates/optimization, whereas Intel's CPUs clearly showed it's not CPU overclocking that matters: it's very obviously likely GPU-bound and down to game settings.
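Those three bullet points boil down to one ratio: how much of the clock increase actually shows up as performance. A minimal sketch using only the percentages quoted above (the function name and structure are mine, purely for illustration); values near 100% suggest CPU-bound scaling, values near zero suggest a bottleneck elsewhere, such as GPU-bound settings:

```python
# Sketch: overclock scaling efficiency, using the figures quoted above.
# Near 1.0 means performance followed the clock; near 0 suggests the CPU
# was not the bottleneck (e.g. GPU-bound game settings).

def scaling_efficiency(clock_gain_pct: float, perf_gain_pct: float) -> float:
    """Fraction of the clock-speed increase that shows up as performance."""
    return perf_gain_pct / clock_gain_pct

chips = {
    "Ryzen 5 1600": (25, 24),  # +25% clock -> +24% performance
    "i7-7700K":     (17, 2),   # +17% clock -> +2% performance
    "i7-7800X":     (34, 6),   # +34% clock -> +6% performance
}

for name, (clock, perf) in chips.items():
    eff = scaling_efficiency(clock, perf)
    print(f"{name}: {eff:.0%} of the overclock translated into performance")
```

On these numbers the 1600 scales almost perfectly while both Intel chips barely move, which is the asymmetry the reddit comment is pointing at.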

Now, onto your point about the mesh/cache redesign with absurd latency. IF we compare just these two:

    -Overclocking the 1600 by 25% increased performance by 24%.
    -Overclocking the 7700k by 17% increased performance by only 2%.

The 7700K doesn't have the new mesh/cache redesign. If you wish to use it to explain Skylake-X's poor scaling ("Overclocking the 7800X by 34% increased performance by only 6%"), how can you explain the same pattern on the 7700K? Your comment already doesn't make sense at first sight, alright?

So no, I don't really care about gaming performance that much, because I don't plan to get that powerful a GPU, but I don't need to fully understand everything to know BS when I see it. If I see something I don't understand, the right thing to do is question it and find out why, not to point at video reviews and then cite irrelevant facts about anti-competition, etc. Your argument makes no sense, and I have just explained to you why.

Finally, I do know the mesh/cache redesign affects existing software and needs optimization, not unlike when Ryzen first came out. But how much does that added latency translate into poorer gaming performance? I don't fking have a clue. I'd have to have two systems side by side and test my own software to know.
     
    tilleroftheearth likes this.
  17. jaug1337

    jaug1337 de_dust2

    Reputations:
    2,135
    Messages:
    4,862
    Likes Received:
    1,031
    Trophy Points:
    231
Intel has been the scum of the earth since they started climbing the money ladder. No denying that.

    nVIDIA followed suit once they realized they had the whole high-end market monopoly in their hands.

    Now, with all those things set aside, when looking at performance, Intel has been the clear winner for a while, and so has nVIDIA, to some extent.

    However, with the recent introduction of AMD's new Zen architecture, there is no denying there is a CLEAR distinction of choice for the end-user. Buy whatever you want, pay whatever you feel like.

I agree, it is a small cluster going on atm. Intel is spitting out CPUs faster than the assembly lines can make them, lol.


Still waiting for Vega to show wtf they are all about, but that is a completely different thread :D
     
    Last edited by a moderator: Aug 6, 2017
    Papusan, ole!!! and hmscott like this.
  18. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Ryzen 5 1600/ 1600X vs Core i5 7600K Review: It's an AMD Win!
     
    don_svetlio likes this.
  19. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
Yes, they have bad business practices, and they still have the best chip around in terms of performance at the moment. I also mentioned before that if AMD could offer, say, 12 cores at 4.5GHz with very similar IPC, I'd jump. Those prices are hard to ignore; all it's lacking is a bit of performance before I tilt.
     
  20. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    TANWare and hmscott like this.
  21. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
At least AMD has an excuse: new chipset, new motherboards, new CPUs, new NVMe support; it's all new. So some features will come later, likely at no extra cost, unlike with Intel.

Intel doesn't have any excuse, yet wants to charge you more for bootable RAID0, or make you use Intel SSDs only.

    ZIPPYZION 3 days ago
    "Intel's NVMe RAID support isn't all flowers and lollipops anyways. You have to pay to unlock features built into the motherboard for anything more than RAID 0, you can't make it a bootable array unless you use Intel SSDs, and to cap it off it is Skylake-X only. Get your wallets out, because Intel will make you pay, a lot, for this decidedly premium feature. I suspect that once AMD has it figured out that they won't be charging people extra for the feature."

    Lots of other great comments (36 right now).

    Hopefully Micron will release their "Optane" competitor so we can remain Intel-free :)
     
    Last edited: Aug 7, 2017
    TANWare and don_svetlio like this.
  22. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
TBH, NVMe is more than fast enough for a boot device. RAID 1 etc., now, I can see a bigger interest in.
     
    ajc9988 and hmscott like this.
  23. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
Yup, and like @ole!!! said, Optane (and Micron) SSDs will be very fast all by themselves.

There must be a PCIe x16 board that breaks out 4x PCIe to 4 M.2 slots, and if it's a "Rocket" board it'll have its own hardware RAID.

Given the poor throughput limitation of the mobile chipset, I don't think anyone has seen full-performance RAID0 of 2x PCIe x4 NVMe yet, so I guess it would be fun to see 3x RAID0 PCIe x4 NVMe on an X399 board, just to see if real 3x throughput would put performance over the top enough to be noticeable again :)
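For a rough sense of the ceilings being speculated about here, a back-of-the-envelope sketch; the ~985 MB/s per-lane figure is an approximation for PCIe 3.0 after 128b/130b encoding overhead, and real arrays land well below the ideal stripe:

```python
# Sketch: theoretical ceiling for striped (RAID0) NVMe sequential reads.
# Assumes PCIe 3.0 at ~985 MB/s usable per lane; controller, chipset and
# drive limits all pull real numbers below this upper bound.

PCIE3_PER_LANE_GBS = 0.985  # approx usable GB/s per PCIe 3.0 lane

def raid0_ceiling(drives: int, lanes_per_drive: int = 4) -> float:
    """Upper bound in GB/s for an ideal stripe across `drives` NVMe SSDs."""
    return drives * lanes_per_drive * PCIE3_PER_LANE_GBS

for n in (1, 2, 3):
    print(f"{n}x NVMe x4 RAID0 ceiling: ~{raid0_ceiling(n):.1f} GB/s")
```

So the 3x array being wished for tops out near 12 GB/s on paper, which is why seeing how close a real X399 setup gets would be interesting.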
     
  24. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    Keep in mind - the faster the SSD, the more cooling you need. As Samsung and Razer have demonstrated, SSDs are, indeed, capable of overheating and causing BSODs.
     
    Papusan and hmscott like this.
  25. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
Yup, that's one of the reasons I warn people off NVMe drives and toward SATA: besides SATA being about half the cost for the same storage, the SATA controller runs cooler.

And I got 1.6GB/sec with a 4x 512GB SATA RAID0, which was nice. It still got pretty hot, though, since the layout put the first 3 M.2 drives overlapping, so they heated each other up.

Even the new vertical mounting for M.2 drives in the new MBs only helps a little; those little controllers have no active cooling and get sooo hot.

    I think we need a new design / layout for those things that includes active cooling, and Liquid Metal TIM ;)
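As a sanity check on the 1.6GB/sec figure above, a quick sketch; the ~550 MB/s per-drive number is an assumed typical SATA III SSD sequential rate, not a measurement of these specific drives:

```python
# Sketch: how close the 4x SATA RAID0 figure above comes to an ideal stripe.
# Assumes ~550 MB/s sequential per SATA III (6 Gb/s) SSD, a common
# real-world figure; the drives and controller here are hypothetical.

SATA3_DRIVE_GBS = 0.55  # approx sequential GB/s per SATA III SSD

def stripe_efficiency(observed_gbs: float, drives: int) -> float:
    """Observed throughput as a fraction of the ideal N-drive stripe."""
    return observed_gbs / (drives * SATA3_DRIVE_GBS)

print(f"4x SATA RAID0 @ 1.6 GB/s: {stripe_efficiency(1.6, 4):.0%} of ideal")
```

Under that assumption, 1.6GB/sec is roughly three quarters of the ideal 4-drive stripe, which is plausible once chipset DMI sharing and thermals are in play.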
     
    Papusan and don_svetlio like this.
  26. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    Personally, after owning both an 850 EVO 250GB and a SM951 250GB - I can, with absolute certainty, say that I cannot feel or see a difference in real world usage. Thus, I would always opt for a larger/cheaper SATA 3 drive over NVMe.
     
    Papusan, jaug1337 and hmscott like this.
  27. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
I finally compared the results, and you are missing that in-game settings explain just about EVERYTHING when comparing the results from Hardware Unboxed and Eurogamer. First, let's start with The Witcher 3: Eurogamer had HairWorks off, TechSpot did not. Second, ROTR matched between both sites, to within 0.3 frames. Third, Ashes is way higher from TechSpot, but only MSAA x2 was used instead of a higher setting, which likely explains the DOUBLE frame rate. On Far Cry Primal, you see TechSpot 5 frames faster, but also with a 300MHz faster CPU, and no information on whether Eurogamer used SMAA. On The Division, you see 139 vs 135.9, with TechSpot having the higher frame rate. So EVERYTHING is easily explained and there IS NO BIG DIFFERENCE IN THEIR NUMBERS, except Ashes, which is clearly a settings issue. So you have been telling falsehoods these past couple of pages. Also, Eurogamer, when overclocking the 3000MHz RAM to 3200MHz, probably tightened timings; but considering they had a lower overall overclock, nothing of note except some mesh speed difference. Meanwhile, any further variance beyond what is stated above is silicon variance. DONE!

As to what you think about the process: thinking it doesn't make you right. Skylake isn't better silicon (raw materials); it is what you get when you refine yields and use the process long enough. In fact, if you add up the time from Broadwell not being delivered and us getting the 4790K, plus BW, then to Skylake, you are looking at 3+ years of refinement to get Skylake. AMD took two years on Ryzen, for comparison. Then Intel went to a new process; that was supposed to be it. But because they failed to get 10nm working, they shoved another refinement at us with Coffee Lake. Now, if you said all Skylake chips were made in Puerto Rico and that the silicon and copper from there were used across the line, then "better silicon" would make sense. But you are actually talking about the natural refinement of a process that took 3 years to refine. So what is there not to understand?

    @don_svetlio - read my above explanation. It isn't stock.

As to Ashes: you don't like that AMD closed the gap. Few have tested and published numbers on a Ryzen retest. Eurogamer said they are currently retesting and included NO NUMBERS FOR RYZEN, so you referencing those other places makes no sense here. Also, Tom's is biased toward Intel; the main CPU tester only came around recently under public pressure with a scathing review of the 7900X, and that doesn't remove all instances of bias. PCPer, when reviewing mesh latency, compared it to the 1800X's latency running RAM at 2133MHz, even though when they tested RAM speed's effect on latency with the 1600X they showed the latency drop with the speed increase, yet they tested Intel with faster RAM and showed some lowering of latency from it. They didn't even link the 1600X article in that article. They were also the primary outlet to bash TR in the unboxing video before any testing. AnandTech has a lean-in bias also, but really doesn't usually seem as bad as the other two. So what is your point here?
     
    Last edited: Aug 7, 2017
  28. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
Actually, all reviewers request hardware from the vendors for review, and all vendors send free hardware to reviewers to be reviewed. The problem comes in asking for a specific product and then having the flagship shoved down your throat; THAT is what Jayz was talking about. He wanted to review hardware that more people would buy and show its performance. Intel said take the 10-core instead, so as to use it as an advertisement. ALL REVIEWS ARE ADVERTISEMENTS!!! So there is nothing nefarious there. It is when companies withhold hardware after a bad review that they are manipulating things, which Intel HAS done in the past. So when it comes to reviews, you both need to chill. Yes, Intel has done some shady stuff in the past, and they are doing a full-court press now. Paying for comments is also becoming more common practice among all companies. So let's get past that and focus on my previous post showing ole misrepresented results.

I already showed this, but here is a PCIe x16 card with 4x NVMe running at huge speeds:
    http://www.highpoint-tech.com/USA_new/CS-product_nvme.htm

    Edit: @ole!!! - Also the Eurogamer article stated that micro-stutter happened with TB3.0 turned on. Just saying.
     
    Last edited: Aug 7, 2017
    Papusan and hmscott like this.
  29. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Agreed 100% ole!!!,

    Gaming performance (cpu-wise) isn't a concern for anyone below competition level. Especially when 'garbage' BGA notebooks can satisfy the vast majority of gamers out there...

On the other hand, real performance (aka productivity) still lies firmly with the platform that offers (today) not just the best of one or two features/aspects, but the best balance of those features. Today, Intel still has the formula for productivity 'correct'. And yeah, they charge accordingly.

    Some of those features for me are:
    • Platform stability
    • Feature usability
    • Software stability and optimizations
    • High 'base' clock speeds (stock)
    • High turbo clock speeds
    • Fast RAM
    • Fast Storage
    Price, power used (on desktop systems) and other inane factors don't come into consideration here (I've posted a spreadsheet previously which showed why - post was deleted...) when maximum productivity over time is the goal.

    Why are the above factors 'inane'? Because they simply don't vary enough between platforms to change a logical buying decision into an emotional one like 'I hate Intel so much I'll buy anything AMD offers - even at the same platform price and at less, overall, real world performance for most workstation (single user) workloads'.

The other comments about how Intel overcharges, under-delivers (for a decade... sigh...), monopolizes and bullies its competitors don't change that fact above one iota. I don't marry the company; I don't wear their t-shirts to promote them; I just use the products to advance me and my business(es) as best I can. These are the red herrings thrown out when the performance from competitors isn't up to snuff.

The above is not to say that Intel can and/or will solve the issues that plague its latest offerings. But just like AMD has had over half a year to improve things on its new platforms, Intel should be given the same chance by anyone considering a long-term relationship with either.

    In the end; what Intel offers right now is still significantly better than what AMD can for me, my clients and our workflows (workstation class - single user...).

    I don't care who wins the productivity battle - I will always have the platform that offers the best balance for my real world usage (at that time) - not a theoretical superiority of a mythical dominance coming some time in the future...

When that future comes (i.e. the O/S, software and workflows I use actually take advantage of >8 cores), I'll be there with open arms to welcome it; and we've already been waiting 20 years for that future. In the here and now, productivity is spelled with an 'I', and with the information we have today (objective information, not clickbait YouTube videos riding the 'I hate Intel' bandwagon) I have no doubt that will continue well past when that >8-core future of O/S, software and workflows gets here too.

    But of course; logic doesn't apply here. It's not like we're simply talking about tools used to put roofs over our heads and food in our stomachs. :rolleyes:


     
    Papusan and ole!!! like this.
  30. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
So, regarding the review results: from your findings, basically HU had higher settings? HairWorks needs more GPU power, right, so that would mean that regardless of CPU clock frequency, the GPU side matters more for higher FPS when 1080p ultra with AA/HairWorks or whatever enabled needs an amount of GPU power very similar or close to 1440p on high.

Regarding the process thing, I don't really care, TBH; Skylake chips are overall better than Broadwell by quite a margin. Unless Intel had made a first batch of 14nm called Broadwell and released the better parts of that first batch as Skylake the second year, I'd call it 14nm+.


Yes, most reviews are advertisements of some kind, though not all; some reviewers end up buying the product themselves if they don't get samples, though that's rare. What @don_svetlio was saying is that all of them are likely paid by Intel, which IMO is just silly, because they are getting samples from AMD just as well. Honestly, it makes little sense and it's a poor argument, as there's no definite proof. Because HU's review was better for AMD, he simply took that to mean HU must not be part of Intel's paid scheme, which is quite funny, because they also got samples from both camps.

    As for TB 3.0, I'd have to find out myself; I've got a bunch of scenarios, for example:
    - 8 cores at 4GHz, 2 cores at 4.7GHz via TB 3.0
    - 8 cores at 4.7GHz
    - TB 3.0 with software that under load uses both multiple threads AND a single thread (Firefox...)
    - a second piece of software that strictly uses single-thread performance only.

    Basically two types of clocks and two types of software: see how quickly TB 3.0 shifts work to the faster cores and how much it really benefits ST scenarios.
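    A back-of-envelope sketch of the first two scenarios (every number here is an assumption for illustration, including the migration latency; nothing is measured):

    ```python
    # Toy model: a purely single-threaded task whose runtime scales inversely
    # with clock speed, under (a) all cores locked at 4.7 GHz, and (b) base
    # 4.0 GHz with TB 3.0 migrating the thread to a 4.7 GHz favored core
    # after an assumed scheduler delay.

    def st_runtime(work, clock_ghz):
        """Seconds to finish `work` units at `clock_ghz` (1 unit = 1 GHz-second)."""
        return work / clock_ghz

    work = 100.0  # arbitrary amount of single-threaded work

    # Scenario (a): all cores at 4.7 GHz the whole time
    t_allcore = st_runtime(work, 4.7)

    # Scenario (b): run at 4.0 GHz for `latency` seconds before TB 3.0
    # moves the thread to a favored 4.7 GHz core (latency is assumed)
    latency = 2.0
    work_done_at_base = 4.0 * latency
    t_tb3 = latency + st_runtime(work - work_done_at_base, 4.7)

    print(f"all-core 4.7 GHz: {t_allcore:.2f}s")
    print(f"TB 3.0 migration: {t_tb3:.2f}s")
    ```

    Under this model TB 3.0 always trails a straight all-core overclock by a little, with the gap set entirely by how fast the scheduler moves the thread.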
     
    tilleroftheearth likes this.
  31. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    Yet only Intel have been fined several billion for anti-competitive business practices, right? :)
     
    hmscott likes this.
  32. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    Irrelevant, right? :)
     
    tilleroftheearth likes this.
  33. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
  34. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
  35. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    First, TechSpot was about 15-20 FPS slower, while the Eurogamer article said they did not use HairWorks. Considering TechSpot's score was that much lower, it is likely that TechSpot (HU is run by an editor at TechSpot: https://www.techspot.com/community/staff/steve.96585/) had HairWorks on, thereby taking a performance hit. You keep bringing up 1440p when it is irrelevant to comparing the two articles. Stop!

    As to what you call it: fine, ignore what Intel and the ENTIRE INDUSTRY call it (a 'process' is the fabrication process) all so you can keep it straight, even though that contradicts how the term is actually used in the industry. It just shows your lack of knowledge on the matter.

    This isn't up for debate. I said directly that everyone gets samples. I was clearing things up and siding, in part, with you; then you go on this BS. Actually, Nvidia and Intel have a long history of blacklisting certain reviewers, so please do not speak on it like that. The question is whether the outlet can then pay for the components out of its own pocket, while being late to the party on publishing reviews. This is why I often only follow a couple of trusted reviewers, although I'm now watching more than I used to on YouTube (articles are the same: I search for content, filter and compare).

    As for TB 3.0, I'm saying that if you are using it for gaming, you'll have issues. It may not be noticed in other programs. Then again, this may just be a sign of microcode bugs that need updates later (common on a new platform, and it shouldn't take away from the value).

    I'm not getting into the BS back and forth on speed, threading performance, etc. again. We've done this dance. You don't need to keep repeating yourself, nor do I. EVERY ONE OF US SHOULD JUST LET THAT DIE, especially since the difference in scores is now explained.

    Edit:
    So this explains his process. This was from March 7.
     
    Last edited: Aug 7, 2017
    hmscott likes this.
  36. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    No, it's not irrelevant. 1440p is a resolution; HU used 1080p. However, you can bet that 1080p at ultra settings will likely require much more GPU processing power than 1440p on low settings. Don't just divide them into 1080p and 1440p; game settings clearly matter.

    There's a clear GPU bottleneck at hand in HU's review, which explains why overclocking the CPU won't matter much, no matter how much one wants to defend them.
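    The GPU-bottleneck point above can be sketched with a toy model (the FPS figures are made up for illustration): the observed frame rate is capped by the slower of the CPU-side and GPU-side limits, so a CPU overclock only helps while the CPU is the limiter.

    ```python
    # Hypothetical frame-rate limits; none of these are benchmark numbers.

    def observed_fps(cpu_fps, gpu_fps):
        """Observed FPS is bounded by whichever side finishes frames slower."""
        return min(cpu_fps, gpu_fps)

    gpu_limit_ultra = 70    # assumed GPU ceiling at 1080p ultra + HairWorks
    cpu_limit_stock = 110   # assumed CPU ceiling at stock clocks
    cpu_limit_oc = 130      # assumed CPU ceiling after an overclock

    print(observed_fps(cpu_limit_stock, gpu_limit_ultra))  # GPU-bound
    print(observed_fps(cpu_limit_oc, gpu_limit_ultra))     # still GPU-bound
    ```

    With the GPU as the limiter, both prints give the same number; the overclock only shows up once settings drop far enough that the CPU becomes the bottleneck.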
     
  37. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    No one games at 1440p low settings over 1080p ultra, especially with a 1080 or a Ti. It's like saying go test at 720p; it's just absurd! Everyone tested using roughly similar settings, and of the two articles in question, both used 1080p. What you are doing is proposing something so far outside real-world use just to show your side winning bigger. It is ignorant. We saw this at the Ryzen release, with Intel recommending testing at 720p and 1080p to remove the bottleneck. Now you are saying to use 1440p (a higher resolution) at settings unrealistically low for anyone who actually buys a 1440p display, to show the same thing. Give it a break!

    You don't like the results, so you call it fake, even though the numbers AGREED with the article you cited (Assassin's Creed wasn't tested, AA was different on AOTS, Witcher 3 performance was lower, likely due to HairWorks or a shadow setting, and the rest are within the margin and can vary by overclock and testing). So you make up a controversy, then you hammer it, then change it from the original statement to this 1440p stuff, all to show that the software is optimized a certain way. Why? You are not doing your credibility any favors here AT ALL. Instead, you seem hurt over one holistic review whose numbers undermine a personal purchase choice. WHY?
     
    don_svetlio likes this.
  38. don_svetlio

    don_svetlio In the Pipe, Five by Five.

    Reputations:
    351
    Messages:
    3,616
    Likes Received:
    1,825
    Trophy Points:
    231
    Because we all suffer buyer's remorse when something better comes out several months after we buy our stuff. I was the same with the GL502VM coming out 4 months after I got the VT model. But we learn to live with it.
     
  39. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    I see the name calling has started, lol. Never once did I call it fake; I said there are issues. Of course no one games at 1440p low settings, but that's not the point I'm trying to make. It seems you are starting to heat up again, so I won't bother repeating myself further... sigh.

    When you are calm, you talk just fine, but when you get a bit offended you start lashing out with personal attacks.
     
    tilleroftheearth likes this.
  40. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Here, it isn't better! It is equal, depending on the program and on average. Saying 'better' isn't true unless you need the multithreading capabilities, and these programs don't show it stretching its legs ( @tilleroftheearth hounds on that, but we also don't really have benchmarks, except dated ones optimized for Intel, to test otherwise; that is why SPECperf06 is still the most common test in that regard, and even Intel people complain it doesn't fully use all of Intel's new features, while the 2012 and later editions never got widespread adoption or press for this type of benching and are usually sold piecemeal to companies to test against software similar to what those companies use, not used in press reviews). As to gaming, it is starting to be a push between the two. Instead, you are antagonizing him, making him reach further, which just causes more drama, which is unneeded.

    Now, having the new P770DM come out 6 months after my ZM was upsetting, but that is neither here nor there. He made his decision. We should stop him from spreading false information or making unrealistic statements to prove something, but other than that, let him be.
     
    don_svetlio likes this.
  41. tgipier

    tgipier Notebook Deity

    Reputations:
    203
    Messages:
    1,603
    Likes Received:
    1,578
    Trophy Points:
    181
    temp00876 and tilleroftheearth like this.
  42. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,680
    Likes Received:
    5,059
    Trophy Points:
    531
    What's up with Intel marketing X299 with 'up to' 68 PCI-E lanes now? I'd think Intel would know that the majority of people buying into the X299 platform know better (i.e. they aren't the usual mainstream computing buyer). 44 PCI-E lanes on the CPU plus 24 through the chipset, connected by an x4-equivalent DMI link, are not 68 effective lanes... I mean, come on now, Intel.
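    The lane arithmetic works out like this (a rough sketch using the figures from the post above; the ~985 MB/s per-lane figure is the usual PCIe 3.0 x1 usable-bandwidth estimate, and DMI 3.0 is treated as roughly a PCIe 3.0 x4 uplink):

    ```python
    # Marketing-claim check: chipset lanes share one x4-class DMI uplink, so
    # their aggregate bandwidth is capped at ~4 lanes no matter how many
    # downstream lanes the chipset fans out.

    PCIE3_GBPS_PER_LANE = 0.985  # ~985 MB/s usable per PCIe 3.0 lane

    cpu_lanes = 44       # directly off the CPU
    chipset_lanes = 24   # fanned out from the X299 chipset
    dmi_equiv_lanes = 4  # DMI 3.0 uplink, roughly PCIe 3.0 x4

    marketed_lanes = cpu_lanes + chipset_lanes                          # "68"
    effective_lanes = cpu_lanes + min(chipset_lanes, dmi_equiv_lanes)   # 48

    print(f"marketed:  {marketed_lanes} lanes")
    print(f"effective: {effective_lanes} lanes "
          f"(~{effective_lanes * PCIE3_GBPS_PER_LANE:.0f} GB/s peak)")
    ```

    So under concurrent load the platform behaves more like 48 lanes of bandwidth than 68; the extra chipset lanes are about connectivity, not throughput.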
     
    Papusan, hmscott, jaug1337 and 2 others like this.
  43. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    ajc9988 and Rage Set like this.
  44. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    To those out there, as new or old AMD fanboys: forget asking a die-hard Intel fanboy to come on over. Despite what Intel has done to the industry, their present CPU line does work faster in some specific workloads/workflows. These are few and far between, and in some of those cases the edge is outweighed by other factors.

    As far as gaming goes, you need a GTX 1080 Ti most of the time to realize a benefit. In either case, where the CPU does give a slight edge, switching benchmarks to 1440p makes the GPU more of a bottleneck and levels the scores.

    Even back in the day when AMD clearly held the performance crown, your Intel people could not be swayed. So do not be disheartened.
     
    ajc9988 likes this.
  45. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    Agreed! :)


    Again: not an Intel fanboy (and I don't think ole!!! is either). I am a fan of performance and, specifically, actual increases in productivity.

    Gaming, as has been pointed out over and over, is not a real-world test of which processor/platform is better. Just as you hint yourself, gaming 'performance' is discounted heavily by me and anyone else who doesn't make $$$$ with that workflow.

    Back in the day when AMD did clearly lead in performance... I switched for 24 hours and couldn't even install the software I relied upon at the time. Instant return, and a lost customer.

    But I am not one to hold a grudge. No, each time AMD offers something, I compare it to what I am running at the time, and it has never compared favorably (even today, as the links I have provided previously so easily confirm). So I continue to use the best at each point of comparison. That just happened to be Intel.

    Doesn't make me a fanboy. Just makes me smart.

    Intel doesn't lead in 'some' specific workloads/workflows. It leads in the majority of workstation-class (i.e. single-user) workloads, which represent over 90% (my guess) of the posters here.

    So, the quote below could just as easily be said about AMD fanboys with similar workstation-class workloads (but yes; a few, including yourself, TANWare, want the AMD platforms for the HCC uses they'll likely excel at, and I've already agreed a million times that this is 99.9% the correct choice for those types of users here). :)


     
    ole!!! likes this.
  46. TANWare

    TANWare Just This Side of Senile, I think. Super Moderator

    Reputations:
    2,548
    Messages:
    9,585
    Likes Received:
    4,997
    Trophy Points:
    431
    Now, Cyrix back in the day had issues, but I don't remember AMD having any, other than the software. Even GeoWorks Ensemble seemed fine with the AMD, but I had a devil of a time with the Cyrix 586.

    The thing is, most of us, including me, spend 99.9% of our time in workloads of one to two threads. At one time this could make a huge difference in your workflow all day long; systems were just slow and overburdened all the time. We have, though, moved well beyond that. With fast IPC and even SSDs we are at single-task super speed. My 9340xm is so fast at opening things and getting these tasks done that I need nothing faster; it is just a monster for me.

    While yes, Intel has gotten even faster, I do not need it. I do not need half a blink of an eye. Now, where 95% of the public is experiencing a loaded system, we do want faster, be that through Intel or AMD, or even Cyrix back in the game for all I care. Myself, for the thousand or so, they can have my half blink of an eye.
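    The "half a blink" point is easy to put in numbers (all figures here are assumptions for illustration: the app-launch time, the speedup, and the ~0.3 s blink duration):

    ```python
    # How much wall time a modest single-thread speedup actually saves on an
    # already-fast interactive task, measured in blinks of an eye.

    blink_s = 0.3    # a human blink lasts roughly 0.3 seconds
    task_s = 0.6     # assumed time to open an app on the older CPU
    speedup = 1.10   # assumed 10% faster single-thread on the newer chip

    saved = task_s - task_s / speedup
    print(f"saved per task: {saved * 1000:.0f} ms "
          f"(~{saved / blink_s:.2f} of a blink)")
    ```

    For loaded multi-user or batch systems the same percentage compounds into real hours, which is why the calculus differs for the 95% running saturated machines.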

    Edit; Links
    http://www.pcworld.com/article/3212...atesand-threadripper-is-faster-sometimes.html
    https://www.forbes.com/sites/antony...8-core-beasts-clocked-at-4-4ghz/#79f231ef1aed
     
    Last edited: Aug 8, 2017
    Rage Set likes this.
  47. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Notice how PCWorld tried saying Intel uses less energy. That's a laugh!!! Intel has already shown its TDP means nothing! Also, we need to see the true all-core OC for comparison on both.

    Sent from my SM-G900P using Tapatalk
     
  48. Support.2@XOTIC PC

    Support.2@XOTIC PC Company Representative

    Reputations:
    486
    Messages:
    3,148
    Likes Received:
    3,490
    Trophy Points:
    331
    This is something that makes me try and qualify the statement. Clock is king...for now. Whether it will be in the future depends.
     
    Papusan and ajc9988 like this.
  49. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    Because they've got nothing to combat AMD's true 64 CPU lanes.
     
    ajc9988 likes this.
  50. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
    For legacy software, or software whose vendors are too lazy to optimize, IPC + clock will stay king, yes.

    For the majority of software and users that holds true. Majority being 99.99999%, probably... lmao.

    When SSDs and CPUs get that fast, it really comes down to the software and the workload to see a real benefit; most of the time we need a lengthy workload to justify the gain, otherwise it's, as you said, 1-2 seconds faster, rofl, really not worth paying the premium for.

    It's kind of like internet speed: a 50 Mbps average download link vs 1 Gbps. Only when downloading a large amount of data will 1 Gbps show the difference.
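    The internet-speed analogy in numbers (the file sizes are assumed for illustration; link speeds are taken as fully utilized, ignoring protocol overhead):

    ```python
    # Download time for a small vs. a large file at 50 Mbps and 1 Gbps:
    # the absolute gap only becomes noticeable on the large transfer.

    def download_seconds(size_mb, link_mbps):
        """Seconds to move `size_mb` megabytes over a `link_mbps` link."""
        return size_mb * 8 / link_mbps

    for size_mb, label in [(2, "2 MB web page"), (50_000, "50 GB game")]:
        t_50 = download_seconds(size_mb, 50)
        t_1000 = download_seconds(size_mb, 1000)
        print(f"{label}: {t_50:.2f}s at 50 Mbps vs {t_1000:.2f}s at 1 Gbps")
    ```

    Both links feel instant on the web page; only the 50 GB transfer turns the 20x link-speed ratio into a difference anyone would pay for, which is the same shape as the CPU argument above.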
     
    Last edited: Aug 8, 2017
    ajc9988 likes this.