OK, the fighting has to STOP! I haven't deleted content yet, but continue with any more fighting or off-topic posts and watch pages of this thread disappear; that is, when you guys eventually get back from the forum bans!
-
-
It seems this PassMark active-CPU data is being misrepresented in the media. Most people do not realize that users load and run this benchmark early in the life of a system, both to see what they have and maybe to tweak a bit. After those early days, most will not run it again, or at least not very often.
So what this gives us is a peek at new systems entering the market, from OEM to home-built. While these numbers may eventually affect total market share, that will take quite some time. It does show, though, that AMD is having an impact.
So again, when OEM AMD systems hit the shelves, market saturation will start to increase. The smart investor out there knows these numbers are just a preliminary look at what AMD may be capable of. The true battle for market share, though, has yet to begin.
http://www.mcvuk.com/news/read/amd-is-finally-gaining-ground-on-intel/0184305 -
tilleroftheearth Wisdom listens quietly...
I like how the article's author tempers the data shown.
-
Yeah, he tempers it, but most do not. Even so, the majority of people will not understand early benchmarking, and those people tend not to keep benchmarking forever.
-
btw did you see the link i posted about AMD's system having lower 4k storage performance? -
By temper, I think they might be referring to the premise that the writer of the article is not exaggerating the numbers but is trying to remain objective (unlike many other outlets).
-
As far as temper goes, we mean he is not taking the results as an absolute and immediate change to market share, unlike others who treat these numbers as the absolute real-time value of market share. -
tilleroftheearth Wisdom listens quietly...
ole!!!, in this thread? I think I missed that.
I'd be interested in that link too...
-
http://www.tweaktown.com/articles/8073/amd-ryzen-ssd-storage-performance-preview/index4.html
it's mostly 4k random performance that's dropping off, for whatever reason; might be lack of driver support. this, and also the lower frequency/IPC on the cpu, are the biggest reasons i can't go AMD. no point in getting Optane storage and putting it on an AMD system only to lose a chunk of the performance i'm paying for Optane. -
Maybe find some newer tests, with the most recent BIOS updates.
Wow. -
Also, you could frame it as wanting to have comparison points for TR, to see if changes to I/O in stepping had any impact.
Sent from my SM-G900P using Tapatalk -
https://www.semiwiki.com/forum/cont...alfoundries-discloses-7nm-process-detail.html
Upcoming process comparison.
-
if you wish to be that detailed, that is the correct way of doing things. however, reviewers don't often get samples and/or updates; the rig used for testing may no longer be in the reviewer's possession, as it gets passed around for other testing.
now since @ajc9988's suggestion certainly involves a lot more work, i'll go the easier route. the old amd chipset/cpu, even with an optimized bios, has rather poor performance on storage devices versus the older intel platform. ryzen is definitely better, but 4k is still lacking. it looks like neither of us is going to ask them to update the bios and retest, so i guess the discussion ends here with the conclusion that the intel platform has stronger 4k storage performance than ryzen.
i mean, clearly as a trend we can see a faster cpu results in better storage performance for the same type of storage device, assuming the driver/software are the same. going all the way back from sandy to haswell on intel, storage performance hasn't changed much, and some newer IRST drivers actually produce LESS performance on single drives rather than raid arrays. let's not kid ourselves and think a bios update can do all the magic here; this could very well be a windows problem of not being optimized, or simply down to ryzen's design.
nothing to frame here tbh, this is what the AMD platform has historically shown when it comes to storage. -
also, if you guys are really down for TR, we can compare storage performance once i finish my intel rig
certainly by that time you'll have the latest bios and drivers.
edit: i rechecked the test rig and thought the 4k performance reduction might be because of the CCX latency issue, but if it's somehow related to ram, the ram speed was 3000mhz, so maybe the problem lies elsewhere. the performance of AMD's x370 chipset probably just isn't that good compared to intel's.
now, assuming it really is a chipset issue, if we were to connect the storage devices directly via CPU lanes rather than through the chipset, we might see closer results. all of this is just speculation from logic, as far as i understand things. -
Second, you do have a point on the chipset. But, the chipset isn't made by AMD, supposedly. Now, if IF is used for the connections to the chipset, then IF latency is in play.
Third, programs have been optimized since then to take advantage of the ccx structure. This means if it is scheduling, that may have been resolved.
Sent from my SM-G900P using Tapatalk -
LoL, if you are using raid then forget 4K; your stripes are guaranteed to be much larger than 8K, so 4K W/R is useless. If you are using a 128K stripe you would only worry about 64K on a dual-drive Raid0, or 32K on a four-drive, etc.
With an Optane direct to PCIe I think things are different there as well. -
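To make the stripe point concrete, here is a quick Python sketch of which RAID0 members a single I/O actually touches. This is my own toy model; the chunk size and drive count are illustrative values, not taken from any review.

```python
# Toy model: which drives does a single I/O touch in a RAID0 array?
# chunk = stripe unit per drive; values here are illustrative only.

def drives_touched(offset, size, chunk=128 * 1024, drives=2):
    """Return the set of drive indices one I/O spans in RAID0."""
    first = offset // chunk                     # first stripe unit hit
    last = (offset + size - 1) // chunk         # last stripe unit hit
    return {stripe % drives for stripe in range(first, last + 1)}

# A 4K read always lands inside one chunk, so striping cannot split it:
print(drives_touched(0, 4096))             # a single drive services it
# Only I/Os larger than the chunk get spread across array members:
print(drives_touched(0, 256 * 1024))       # spans both drives
```

So with any realistic stripe size, a QD1 4K read is serviced by exactly one member drive, which is why RAID0 adds overhead there without adding parallelism.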
however, on another note, i'd be interested to see intel's VROC and how it fares against traditional raid 0 performance. afaik 4k performance doesn't stack in raid unless helped by writecache, but i wonder if there's something similar to the writecache option in VROC vs raid via chipset and IRST. -
Second, as far as software optimizations go, the CCX structure can still be better utilized if not addressed. NUMA-aware software usually has the easiest time, but as we saw with ROTR and others, changing the size and ordering of the schedule can give a large performance boost. This means this can still be an issue.
Also, you mischaracterize the CCX hit on latency between the cores of different CCXs. At 3200MHz ram, it is just 10-20ns slower than the mesh. We hammered that out forever.
Third, it is when a person consistently characterizes things improperly that I take issue. Even here, you are trying to rule out what amounts to software optimizations by pointing to the latency between CCXs, which is relatively fixed and is an architectural feature. In other words, you conflate a physical feature with something that is handled in software. Considering that, if properly scheduled, you'd have less cross-CCX talk and better utilization of the cache structure (which even Intel Skylake-X needs optimized), there is more to it than you let on.
-
honestly speaking, i used to care and want to prove i'm right all the time; now i just don't care anymore lol. i still believe what i learned is correct, and if you don't agree with it, that's your opinion. quite frankly, if you're wrong, then i don't need to let you know where you're wrong and you can stay that way; if i'm wrong, i'll correct my basic understanding and re-learn it when the time comes.
now back on topic: i gave you the reason why bios, firmware/software/driver isn't at all perfect and can't fix everything. the AMD platform is one of those reasons; a bios update won't do much. the chipset itself might be the deciding factor in why its 4k performance is so much poorer than intel's. -
When talking about inter-CCX communication, we are talking about multiple things. One is IF and the latency surrounding it. Second is firmware changes; one change increased support for higher memory speeds, thereby increasing IF throughput, which can affect the results. Software can change how the program uses the CPU during reads and writes, especially in utilizing the cache system, which affects reported speeds to a degree. Drivers are obvious as well, and one cannot say that older drivers work better if they don't recognize or utilize the scheduling on the new hardware.
This is why I try to explicate, so that it doesn't become simplistic, with someone stating things as an absolute.
-
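Since the memory-speed-to-IF link keeps coming up, here is a rough back-of-envelope in Python. The fabric clock tracking half the DDR transfer rate is the commonly reported Zen behavior; the 32 bytes/cycle fabric width is an assumption for illustration, so treat the absolute numbers as a sketch, not a spec.

```python
# Back-of-envelope: on Zen, Infinity Fabric clock tracks the memory clock
# (fabric clock = DDR transfer rate / 2). The 32 B/cycle width below is an
# assumed illustrative figure, not a confirmed spec.

def fabric_bandwidth_gbs(ddr_rate_mts, bytes_per_cycle=32):
    """Crude IF bandwidth estimate in GB/s from the DDR rate in MT/s."""
    fabric_clock_mhz = ddr_rate_mts / 2      # e.g. DDR4-3200 -> 1600 MHz
    return fabric_clock_mhz * 1e6 * bytes_per_cycle / 1e9

print(fabric_bandwidth_gbs(2400))  # slower RAM -> lower IF throughput
print(fabric_bandwidth_gbs(3200))  # faster RAM lifts the fabric with it
```

The takeaway matches the point above: a firmware update that unlocks higher memory speeds also raises IF throughput, so "same hardware" results can shift between BIOS revisions.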
tilleroftheearth Wisdom listens quietly...
ole!!!, thanks for digging that up. I did miss that.
Your points are spot on and understood by most here - but I already see you're not worried...
-
W/o more recent tests to confirm, we can't say for sure Intel is faster. TBH, the 4K W/R QD4 results are not so distinctly apart that you would notice it in real time on a day-by-day basis. As I mentioned, forcing raid0 with a larger stripe also eliminates the smaller W/Rs, so the supposed difference is reduced significantly.
-
https://www.pcper.com/reviews/Stora...e-RAID-Tested-Why-So-Snappy/Latency-Distribut
Edit: specifically, for the price of large-sized Optanes, which may take up one of your PCIe slots or that you use as a cache drive, you could get a raid card supporting NVMe M.2, populate it with moderately priced NVMe drives, have them in raid, and still have 48 more lanes to use for whatever you want.
Edit 2: "One additional point those with a keen eye would have noted is that in both cases (writes and reads, but reads less noticable due to the log scale), shifting from a single SSD to a RAID results in a ~6μs additional delay to *all* IO requests. This is the overhead cost of Intel's RAID implementation, and it represents things like the time taken to translate the IO addresses to the array. This added delay does have an impact at very low queue depths, but it is almost immediately outweighed by the increased 'acceleration' of the array as the queue depth climbs just a single point to overcome the effect." -
the article shows raid 0 will increase your QD1-2 latency due to overhead on 4k read/write, but from QD3-4 onwards you start to notice the difference. in simple terms, you only feel faster once you hit QD3-4 versus a single NVMe device; when you're at QD1-2 it's actually slower, whereas optane technology can cover up that difference even in raid versus traditional NVMe. as i replied to Tanware just above, i mostly care about QD1-2; raid via chipset will increase the latency due to overhead, and hitting QD3-4 constantly isn't common nor easy.
however, i wonder what will happen raiding with VROC, which btw is only available on intel's latest x299. i am extremely interested to find out about its raid performance along with turbo boost 3.0 to see where it can take us. according to Allyn, the overhead would simply disappear, which means if we were to raid it would benefit more than raid via chipset. -
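Just to illustrate that QD1-2 vs QD3-4 tradeoff with a toy model in Python: the ~6μs fixed overhead comes from the PCPer quote above, but the service time and the simple parallel-server math are my own illustrative assumptions, so where exactly the crossover lands will differ on real hardware.

```python
# Toy model of the PCPer finding: chipset RAID0 adds a fixed overhead
# (~6 us per their data) to every I/O, but can service two 4K I/Os at
# once. service_us is an assumed illustrative figure, not a measurement.
import math

def completion_us(qd, service_us=8.0, drives=1, overhead_us=0.0):
    """Time to drain `qd` queued 4K I/Os in a crude parallel-server model."""
    return math.ceil(qd / drives) * service_us + overhead_us

for qd in (1, 2, 4, 8):
    single = completion_us(qd)
    raid = completion_us(qd, drives=2, overhead_us=6.0)
    print(qd, single, raid)  # RAID loses at QD1, wins as the queue grows
```

At QD1 the array just pays the overhead; as the queue deepens, splitting work across members outweighs it, which is the "acceleration" the article describes.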
I mentioned QD-4 as those links used the benchmark set at QD-4.
-
-
They tested not at QD-1 but QD-4
http://www.tweaktown.com/image.php?...yzen-ssd-storage-performance-preview_full.png -
-
Fighting has to stop. Name calling, slandering, etc. Stop it all.
We will end this with the 2 points I can take from this.
1.) As of release, Ryzen is slower. Apparently the x370 was not mature at release with regard to SATA.
2.) With the chipset out of the equation and a proper raid pcie card, things do equal out a bit more. -
Get out your salt shakers; without any real news, even Wccftech is left to rumors:
http://wccftech.com/amd-ryzen-threadripper-1950x-cpu-performance-benchmarks-leak/
Just a heads up: geekbench shows it as an engineering sample (I think the one used in the early gaming demos), and it is stepping 1, not stepping 2, and still shows as Summit Ridge. Also, the southbridge reads as "AMD X370 51". -
-
a new update, boys! this is what happens when you have a bunch of competent people who know what they're doing, working and pooling their experience together: you get pure knowledge and facts about how things work. love it
@unclewebb, though the guy didn't mention throttlestop for detecting throttling; he said coretemp, which is also made by you!! without your core temp, and setting its affinity higher than other software, we would be throttling nonstop. -
-
TR is UNRELEASED as of yet.
Until we get a final release, everything is up in the air. -
I did appreciate the tip of the hat to the Silicon Lottery group, considering he is with Caseking and they are competing binners. Class!!! -
prime95 small FFT FTW
Answered: Intel used the toothpaste to "pre-gimp" the CPU so it would throttle first, so the under-designed consumer hardware would survive and the VRMs would run cool.
Also note the BIOS power setting, from Auto to 140%. At Auto it's so power-limited you won't get full performance, and the VRMs might be safe; but if a tuner gets hold of the X299 and tries to get top performance, the power section is under-designed for what the CPUs will allow (delidded).
All those built-in limits are chickenshit methods of protecting a poor, cheap design, and stealing the top off the value of the i9 CPUs, at least from 10 cores and up.
It would be good to see if the lower core-count CPUs are also performance-limited at defaults, and if removing the limits triggers that same too-high load for the design. -
Edit: after reviewing the entire video, he brings up a great point I always stress in stability testing: ALWAYS USE MULTIPLE PROGRAMS TO STRESS TEST!!! Notice when he said OCCT would fail while P95 would chug along. It is very important for people to understand that different programs load the CPU differently.
So better reviews in the future coming up!
Also a class act on the collaboration.
Finally, as they mentioned, and many of us laptop overclockers know, switching out thermal pads, adjusting contact, etc., can give better results. Now, if you plan on a water chiller (like me) or phase change (like @Papusan), you'll need to cool them down. I plan on a water cold plate to try to get them around 25C or lower for cleaner power delivery, but even keeping them in the 60s-70s will be fine. -
Maybe now more reviewers will check power and temps, and the design, more carefully moving forward. -
Now, AMD is known for ramping up current needs above stated clocks; historically it grows exponentially, though of course it varies by part. So it will be interesting to see actual CPU draw on TR.
But we'll know more in a month! And always watching for reviews!
-
-
@hmscott the toothpaste is very disappointing; i think many that are getting skylake-x would agree with me. however, with all these high power consumption benchmark results, we've got to keep in mind that we don't get computers for those purposes at all. if it's in the range of 350w to 400w at the usage i'm running, which still has AVX workloads, with all threads at say 4.7-4.8ghz, i will only be hitting that wattage maybe once a year, or almost never.
they are clearly doing worst-case scenarios; not that i don't understand how it works, but that's targeting high-end enthusiasts that like to bench with heavy benchmarks. i don't do those, i find them pointless. i may do cinebench or XTU once or twice to test temps and performance, to see if it's throttling, but that's about it, just to see if i'm getting the performance i should be getting.
if delidded, this cpu will run a bit hotter than broadwell, that's about it. -
In these tests the Kaby Lake-X 7740X/7640X vs Kaby Lake 7700K/7600K show the new X299 systems drawing way more power to deliver the same (or less!!) performance as the Z270: 20% more at idle, and 40% more at load!
Intel Krappy Lake-X: Core i7-7740X & Core i5-7640X
-
Everything except the cable was replicated. No one gave a darn about the cable because the VRM are throttling! Priorities!
Edit: also, with 350-400W on the power cable, the heat measured makes more sense. But this is measured by either a temp gun, a camera that picks up the heat differential for later analysis, or direct temp probes. I doubt the other reviewers were focused on testing the cord when they needed help drawing that current at the CPU. -
-
-
Well, with no extra pci-e lanes, no extra clocks, and a worse TDP, the same or worse performance should be expected.
-
on the other hand, the 7900x is all sold out and i missed it by 1 day; now all gone, wtf silicon lottery. doesn't look like anything goes above 4.8ghz. waiting for a 5ghz 7900x -
Ryzen vs i7 (Mainstream); Threadripper vs i9 (HEDT); X299 vs X399/TRX40; Xeon vs Epyc
Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Jun 7, 2017.