You gonna upgrade to Zen 2 TR when it comes out, for 32 cores? Or just a 16-to-16-core upgrade?
-
Depends on the performance stats, I'd say. But from what's known thus far, based on the Zen 2 arch and the 32-core Epyc, the 32-core TR will likely be a BEAST to behold.
Plus, going 16-core TR doesn't make much sense if you can just get the 3950X mainstream 16-core SKU.
-
If the rumor of 8-channel memory for TR is true (two chipsets show up for TR in the USB filings: one labeled like TR40, the other TR80, plus a WR80, which is likely Epyc), then I may hold off until the 8-channel version is available. It also might require a higher core count than 16 (unless they used something like my ID-pin idea, where all 8 channels activate when the chip is in the right board; in that case 8 channels might be present on all chips, or just on a few, like the 32- through 64-core variants). So it will depend.
I was also thinking: with the cost of top binning seen on the 12-core, Silicon Lottery may charge something like $400-600 over list price for the top bin, which would mean it would be better for me to just buy a higher-core-count chip and roll the dice this time (even though I really don't need more cores so much as more memory bandwidth).
Also, since Elmor turned ZenStates over to GitHub, one of the indie developers added the Epyc device ID to it. So if he can get all-core OC working, I might consider going Epyc earlier than Zen 4 (my planned switch point), just for the I/O AND the memory channels. Unfortunately, memory OC on server platforms is locked down, so you are stuck at 3200MHz, but you can tighten timings and still get a large bandwidth boost!
I really don't get why they lock the server chips down that much. Instead, let them boost to whatever the cooling can handle and get the server companies to beef up the VRM a bit (nothing crazy; maybe go from 7 phases with 60A and 70A MOSFETs to doublers or a larger controller chip, giving a little more power headroom while spreading VRM heat over more MOSFETs). That way you have bread-and-butter standard server boards for the masses, plus a couple of boards that can really let the chip stretch its legs, especially as water cooling is being incorporated into racks more often.
With that, let them run at the rated speeds, like they are doing, but allow a switch in the BIOS or control software that biases them to boost like the desktop chips, giving free performance scaling beyond the rated specs. Many companies won't use it, but I think it would be nice for server workstations, etc.
Doing that would also solve the worry about TR cannibalizing Epyc. It is basically a win all around: OEMs can build customized cooling and chassis solutions to take advantage of it, further crushing the Xeons on performance. By making it optional, mission-critical deployments stick with rated performance (sometimes just base clock), while anyone else can choose to use that boost and truly scale the performance of those chips. -
I'll disagree. Quad-channel memory and 64 lanes of PCIe help if you want to run add-in cards. I've been considering building out a RAID storage array with a RAID 6 SAS/SATA card, since I already have a couple of enterprise 8TB SAS/SATA drives and am going to buy at least two more by next month (quick capacity check below).
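A minimal capacity sketch, assuming a hypothetical starting array of 8TB disks: RAID 6 spends two drives' worth of space on parity, so usable capacity is (n - 2) x drive size.

```python
def raid6_usable_tb(drives: int, drive_tb: float = 8) -> float:
    """RAID 6 reserves two drives' worth of parity: usable = (n - 2) * size."""
    if drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (drives - 2) * drive_tb

print(raid6_usable_tb(4))  # 16.0 TB usable from 4x 8TB
print(raid6_usable_tb(6))  # 32.0 TB usable from 6x 8TB
```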
Unfortunately, multi-GPU is dying, and my work at the moment does not need multiple cards for scaling outside of gaming (although I wish more was being done with cryptographic cards generally).
But I wouldn't go to mainstream on the basis of dual-channel memory alone (at least not until they put at least a 500GB/s HBM interface on the CPU with at least 16GB of HBM2/3 on it).
But that is a personal choice.
Here are my SiSoftware Sandra cryptography scores (note the ranking):
https://ranker.sisoftware.co.uk/top...f5c8f9dfb78abf99e1dceccaafcaf7cee89ba696&l=en
https://ranker.sisoftware.co.uk/top...f2cfffd9b18cbc9ae2dfefc9acc9f4c4e291ac9c&l=en
Edit: and yes, I do realize crypto can be done with GPUs, but... -
Forgot there's the 3950X still coming in September, so he's right: kinda pointless when there's already a 16-core. Though the extra PCIe lanes and quad channel are nice to have.
8 channels on 16 cores seems way overkill; much more suited to 64 cores, I'd say. -
If you don't need the bandwidth, it is overkill. If you do need the bandwidth, then it really isn't.
Also, there are 8- and 16-core Epyc CPUs with 8-channel memory. But it only matters for workloads that need it, and it is, in part, about keeping the cache system fed. Think of bandwidth per core (quick sketch below).
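To make "bandwidth per core" concrete, here is a minimal sketch, assuming DDR4-3200 and the standard 8-byte channel width; these are theoretical peaks, not sustained figures:

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int = 3200, bus_bytes: int = 8) -> float:
    """Theoretical peak in GB/s: channels * transfers/s * bytes per transfer."""
    return channels * mt_per_s * bus_bytes / 1000

def per_core_gbs(channels: int, cores: int) -> float:
    return peak_bandwidth_gbs(channels) / cores

print(per_core_gbs(channels=8, cores=16))  # 12.8 GB/s per core
print(per_core_gbs(channels=8, cores=64))  # 3.2 GB/s per core
```
-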
There's no AVX-512; I just don't see how you can use the bandwidth unless it's a very specific workload. On Intel it would make sense for AVX-512 stuff.
-
That's a joke, right? Seriously, servers are CONSTANTLY memory-bandwidth constrained WITHOUT AVX-512 workloads. AVX-512 is a very NICHE use case in servers TO THIS DAY! That is actually why AMD said they would sit back and wait for Intel to get software to adopt it, and only then add the instruction set. Pure and simple, they conceded that segment until there is more adoption.
Moreover, back when people thought the 2990WX was memory constrained, before everyone found out about the scheduler (which was part of the issue; see the latency chart in the Anandtech article on the Rome release), and ignoring for a moment my theory on stale-data issues (which also relates to the scheduler, insofar as the scheduler lacks higher-quality latency awareness), articles were examining memory bandwidth relative to core count, like this one from PCWorld ( https://www.pcworld.com/article/329...ng-amds-32-core-threadripper-performance.html ) or this one from TechSpot ( https://www.techspot.com/review/1678-amd-ryzen-threadripper-2990wx-2950x/page4.html ).
They were not wrong to suspect that memory bandwidth can affect performance. It is a matter of whether the task the CPU is performing requires the memory bandwidth to keep the cache and cores fed. The more bandwidth, in theory, the better; what matters is that it can keep the cores fed with little downtime. Then comes the latency portion (which is what we talked about at length before, and which AMD needs to work on; it is why they greatly enlarged the L3 cache, to combat latency to memory even at the cost of higher latency when accessing the L3 itself, a trade-off that was obviously worth it).
Edit: think of it as a pipe. You need water. If the pipe is too small, it takes a long time to fill your bucket, right? Think of bandwidth as widening that pipe: with a larger pipe, you can get more water through it more quickly. That is how the cores and cache stay fed.
Now comes latency. You have a large pipe, so you can fill your buckets with ease, but it still takes a certain amount of time for the water to reach the bucket. You can shorten the pipe's run to the water source, which reduces the fill time. You can change the pipe's material to reduce friction-induced turbulence. Or you can reduce the number of right angles, etc.
In the same way, you have to make sure the volume of data (your bandwidth) isn't constrained for your workload, but you also need to find ways to shorten the time that data takes to get where it is going.
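A toy model of the analogy (illustrative numbers only; the 80ns latency is an assumption, not a measured figure): total transfer time is latency plus size divided by bandwidth, so tiny transfers are latency-bound while big streams are bandwidth-bound.

```python
def transfer_time_us(n_bytes: float, bandwidth_gbs: float, latency_ns: float = 80.0) -> float:
    """Time = latency (first drop arrives) + size / bandwidth (width of the pipe)."""
    return latency_ns / 1000.0 + n_bytes / (bandwidth_gbs * 1e9) * 1e6

# A 64-byte cache line is dominated by latency (the length of the pipe)...
print(transfer_time_us(64, 50))    # ~0.081 us, almost all latency
# ...while streaming 64MB is dominated by bandwidth (the width of the pipe).
print(transfer_time_us(64e6, 50))  # ~1280 us, almost all transfer
```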
Edit 2: here is an examination that, a bit down the page, looks at memory bandwidth relative to workload tasks: https://externaltable.blogspot.com/2017/09/performance-analysis-of-cpu-intensive.html -
Did you not read what I said, or are you jumping to assumptions and conclusions again? I'm asking you, not about server needs. I mean, TR is HEDT, so enterprise would just go straight to Epyc and skip HEDT.
What do YOU have a use for 8-channel for? It's way overkill for YOU unless you have something that can take advantage of it. -
I have hobbies, which include working with video encoding. In addition, I plan to pick up my coding books again in the future, as some of the things I would like to do don't have software optimized the way I want, etc.
I resell some hardware, but I more often repurpose it (referring to personal hardware). When it gets deprecated, it is either gifted or repurposed. My 1950X, for example, will be repurposed into a media server with on-the-fly transcoding capabilities, among other things.
One of the things I plan to learn is building an A.I. that can pick up logical fallacies in arguments and disconnects between statements. This is both to analyze the source material I am critiquing and to alert me when I've left out a detail my argument needs, all without relying on other editors, etc.
But what I use it for, or have planned for it, is my concern. Just like separating memory for VMs is my concern, as we are finally getting closer and closer to me just using terminals and backing everything with servers.
So what is YOUR point? Other than trying to be a prick back at me for calling you out on there being uses other than AVX-512, and now deflecting? -
Maybe my point is to ask YOU what YOU have a use for 8 channels for. If you don't wanna say, then just don't say it, rofl. You misread my question and somehow spun it into Intel vs AMD just because I simply stated that Intel has AVX-512 to take advantage of the memory bandwidth while AMD only goes up to AVX2.
Holy moly, sometimes talking with you gives me a headache, I swear, rofl. If you have software that can make use of the full 8 channels, good for you. I can think of a few, but it's not gonna matter much, 4 vs 8. -
Intel leak:
https://technolojust.com/2019/08/23...a-cpu-which-isnt-powerful-enough-to-beat-amd/
Edit: But I have seen over 50,000 on GB4 with the 3900X:
https://www.techradar.com/uk/news/geekbench-4-benchmark-suggest-an-18-core-cascade-lake-x-chip -
Although English is, by nature, an imprecise language, you wrote TWO sentences (which should be three, but you were typing informally). In the first of what should be three, you state clearly that there is no AVX-512. The middle sentence asks how I can use the bandwidth, pointing out it needs very specific workloads (which is what you should have emphasized, and which I did not pay as much attention to). The third sentence then talks, once again, about AVX-512.
As such, I literally took your focus to be workloads, or something similar, hence why I harped on that. Two out of three statements being on that topic would suggest your point is more about that than about me specifically.
And you make a good point, similar to my point on one of the links: software is often optimized around platform memory bandwidth generally, insofar as making sure memory bandwidth is not a choke point for the CPU's tasks.
Further, AVX2 CAN saturate the memory as well. Just because one instruction set is wider does not mean the narrower one cannot saturate the memory bandwidth, showing that you are once again making passing statements that make no sense, which is WHY I go on long diatribes. Make stupid statements, get stupid prizes (at least in interactions with me). -
That's your opinion.
AVX2 can saturate quad-channel memory in a lot of enterprise software, but do you use those? It doesn't change what I said, though: way overkill for YOU. -
Once again, you made a STUPID STATEMENT regarding instruction sets, then deflected when called out. At least you conceded it.
Now, let me put this in perspective for you. To match the per-core memory bandwidth of the 8-core chips, you would need 8-channel memory on a 32-core. So workloads, even ones normally seen on mainstream, that can use over 2GB/s of bandwidth per core would benefit from 8-channel memory on a 32-core chip. Now, it is overkill for a 16-core (for me). But I just wanted to provide the context (worked out below).
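For reference, a back-of-envelope check of that claim, again assuming theoretical DDR4-3200 peaks (real sustained numbers are lower):

```python
GBS_PER_CHANNEL = 3200 * 8 / 1000  # 25.6 GB/s per DDR4-3200 channel

for label, channels, cores in [("8-core, dual channel", 2, 8),
                               ("32-core, quad channel", 4, 32),
                               ("32-core, 8-channel", 8, 32)]:
    print(f"{label}: {GBS_PER_CHANNEL * channels / cores:.1f} GB/s per core")

# The 8-core dual-channel and 32-core 8-channel configs both land at
# 6.4 GB/s per core; quad channel on 32 cores halves that to 3.2 GB/s.
```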
Edit: did you even look at the articles I attached? -
It's you who made a STUPID ASSUMPTION!!
There are so many things you could have assumed, but you only chose to see it as Intel vs AMD. -
No, I also choose to see that you don't have a clue which workloads can saturate the memory, which is why I changed the frame of reference to memory bandwidth per core: the 8-core mainstream, the 16-core HEDT, and how the 32-core would have half the per-core memory bandwidth of either of those. It is called perspective.
Are you saying no consumer workloads get close to saturating the memory bandwidth of the mainstream chips?
I tried to move it AWAY from what you call Intel vs AMD and make it about the bandwidth, which YOU DID NOT RECOGNIZE IN MY STATEMENTS. So move on. -
I choose to see that you have no real need for 8-channel, yet you talk as if you do and refuse to admit it. YOU DID NOT RECOGNIZE MY STATEMENTS; moving on.
@tilleroftheearth The new i9 Comet Lake 10-core will be the first, hopefully still monolithic, design of its kind. Makes you wonder what sort of latency penalty will come with it. Back then, the HEDT Broadwell-E 10-core was an 8-core ring bus connected to 2 more cores with a semi-ring (the middle chart, the one without the second set of memory controller/system agent).
The new HEDT parts are now mesh, but this is the old HEDT design. I would love to see how this impacts 10-12 core CPUs from Intel in the future if they decide to keep them monolithic. Another thing: 10nm yields are so low that we just might see them backport Willow Cove to 14nm, you never know. I think they might at least do that for server; they need to stay competitive. -
AMD Ryzen 3700X and 3900X Shortages Still Persist Almost Two Months After Launch Tomshardware.com | Aug 26, 2019
The shortage is so bad that third parties are selling the 3700X and 3900X at inflated prices. The 3700X is being sold for almost as much as a Ryzen 7 3800X at retailers like Amazon and eBay while the 3900X has been going for as much as $750 (the MSRP of next month's Ryzen 9 3950X which has 16 cores) on Amazon, with most sellers on Amazon and eBay pricing it around $600. Today, you can find 3700Xs at most retailers, but on Amazon they are only up for preorder and will only arrive at the end of August at the earliest.
Other Ryzen SKUs, however, seem to have escaped these supply issues, most notably the 3800X, which is basically the same as the 3700X but binned a little better and $70 more. Perhaps AMD has intentionally constrained the supply of the 3700X to encourage impatient people to just buy the 3800X.
As for the 3900X supply issues, I think AMD would rather bin chips like hell for the coming, more expensive 3950X, which will go for higher prices.
Sadly, there doesn't really seem to be an obvious solution for AMD or buyers other than just waiting or buying what's available right now (whether it's at MSRP or not). It's not easy for AMD to just increase production on these CPUs, which have a unique supply chain and require a 12nm IO chiplet from GlobalFoundries and one or two 7nm core chiplets from TSMC. AMD also uses the 7nm core chiplets for its EPYC Rome data center processors that offer up to 64 cores, which could be a factor as the company ramps up its data center lineup.
There's also the matter of the 3900X and 3950X requiring two compute chiplets, unlike the 3800X and below which require just one. Unfortunately, it remains unclear when we will see widespread availability of the Ryzen 7 3700X and Ryzen 9 3900X.
-
I have noticed the shortfall myself. This is also what happens when you have a high-demand item. Hopefully supplies increase, but this affliction seems to affect both AMD and Intel CPUs at times.
-
They might be straight up saving the best bins for the 3950X and TR, you never know. Also, we knew AMD's supply would be an issue. Intel is like 5-10x the size when it comes to market share, so if AMD wants a shot at even 50%, they'll need to produce at least 5x the amount they used to sell. They don't have their own fabs, and TSMC has other customers too, so I can see some shortage problems.
-
You have to remember that each AM4, Epyc, and TR chip requires a 12nm I/O die; then each AM4 chip needs one compute chiplet (the 3900X and 3950X need two), and Epyc and TR require 4-8 chiplets each. I doubt they can produce them fast enough. This is the advantage of monolithic dies.
-
There are no advantages to monolithic dies over chiplet CPU/IO designs that provide similar / comparable performance.
The chiplets have better yield, and being so much smaller, many more are produced per wafer compared to the gigantic monolithic dies. Offloading the I/O portion of the die helps even more, since it can be outsourced to non-7nm production, saving 7nm capacity for the CPU chiplets.
The total silicon area is about the same between the designs, but the yield on the smaller chiplets vs monolithic dies gives more CPUs per wafer, more complete CPUs per wafer.
That's why Intel has so many problems supplying monolithic CPUs like the 9900K: they take far more fab space to produce the same number of final CPUs, costing far more to make than AMD's chiplets. (A rough sketch of the yield math is below.)
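Why small dies yield better, using a simple exponential defect model; the wafer area, die sizes, and defect density here are illustrative guesses, not TSMC's or Intel's actual figures:

```python
import math

def good_dies(wafer_mm2: float, die_mm2: float, defects_per_mm2: float) -> int:
    """Poisson yield model: fraction of good dies = exp(-defect_density * area)."""
    candidates = wafer_mm2 // die_mm2            # ignores edge losses
    yield_rate = math.exp(-defects_per_mm2 * die_mm2)
    return int(candidates * yield_rate)

WAFER = 70000  # ~300mm wafer, usable area in mm^2
D0 = 0.002     # defects per mm^2, purely hypothetical

print(good_dies(WAFER, 74, D0))   # small ~74mm^2 chiplet-sized die
print(good_dies(WAFER, 180, D0))  # large ~180mm^2 monolithic die
```

With these made-up numbers the small die yields roughly 3x as many good dies per wafer, and the 12/14nm I/O die doesn't consume 7nm capacity at all.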
AMD's TSMC fab allocation needs to be bumped up, and AMD has likely already worked out a schedule of capacity increases, but Ryzen 3000 has been so popular that demand has exceeded the planned capacity.
Hopefully AMD was able to negotiate a faster ramp-up of capacity, is looking elsewhere for fab capacity, or can help TSMC expand production; otherwise we will keep waiting for product going forward. -
Even if 4 chiplets fit in the same wafer space as, say, a 9900K, TR and Epyc require up to 8 of them along with the I/O die. Eventually AMD will iron this all out; people just need to have patience.
-
Not zero advantages. With Intel's 10+ year old arch, it is precisely because it is monolithic with a ring bus design that it has lower overall memory latency than Zen 2. That, plus frequency, is about the only reason it can still compete, and only in gaming benchmarks at the moment, nothing else.
-
Asus: "AMD strikes turbo frequency in Ryzen 3000 series"Sweclockers.com | Aug 28, 2019
According to a report from Asus employees, AMD has begun to curb Boost performance for Ryzen 3000 processors, with extended service life as a reason for the change.
When AMD launched the processors in the Ryzen 3000 family, they offered a decent performance boost compared to previous Ryzen 2000 processors. Some of these performance gains lie in the architectural improvements in Zen 2, and some in the higher clock frequencies the processors reach.
This also includes the turbo frequencies of the processors. For example, the top model AMD Ryzen 9 3900X runs in the 3.8 GHz base frequency with a turbo frequency of up to 4.6 GHz. These frequencies can also be more often reached across multiple cores compared to previous generation processors. These turbo frequencies were originally intended to work at overly aggressive levels according to an Asus representative.
Every new bios I get asked the boost question all over again, I haven't tested a newer version of AGESA that changes the current state of 1003 boost, not even 1004. If I do know of changes, I will specifically state this. They were being too aggressive with the boost previously, the current boost behavior is more in line with their confidence in long term reliability and I have not heard of any changes to this stance, because I have heard of a 'more customizable' version in the Future
The information comes from the well-known overclocker Shamino, who is employed by Asus and works on motherboards. In a post on the Overclock.net forum (via Reddit), he describes how new motherboard firmware revisions lower the turbo frequencies of the Ryzen 3000 family.
The information indicates that the turbo frequencies initially operated too aggressively on Ryzen 3000 processors, and that adjustments in new versions of the AGESA firmware have reduced them. The purpose of the reduction is said to be better durability and long-term reliability. Shamino also says he has received word that upcoming AGESA updates will offer more turbo adjustability.
When AMD introduced the Ryzen 3000 family, it also unveiled Precision Boost Overdrive, a further development of the Precision Boost introduced with the Ryzen 2000 generation. With Precision Boost Overdrive, the processor's highest possible clock frequency can be raised by up to 200MHz when the load allows it.
Thus, with the latest AGESA adjustments, the highest clock frequency of Ryzen 3000 processors with Precision Boost Overdrive will be lower than what was stated when the products launched.
----------------------------------------------
As a small curiosity...
AMD to Cough Up $12.1 Million to Settle "Bulldozer" Core Count Class-Action Lawsuit
AMD reached a settlement in the class-action lawsuit filed against it over alleged false marketing of the core counts of its eight-core FX-series processors based on the "Bulldozer" microarchitecture.
$12.1 million isn't that much, but it's still some $$$$.
I expect AMD has enough cash flow from Ryzen chips to pay for this.
-
-
A new one?
AMD Advertises Ryzen Pro As Capable Of Hitting 5GHz Clock Speeds
[Image: AMD advertisement showing 5GHz clock speeds]
AMD recently settled a false-advertising lawsuit over Bulldozer, but it looks like they might have made a marketing booboo again (jokes aside, this will probably be taken care of before it can do any damage), as the company advertises Ryzen PRO as capable of hitting 5GHz clock speeds. As we know (and as AMD's official website confirms), this is not the case; unless you are using LN2, you will not get 5GHz out of the box with the Ryzen PRO processors. I am not entirely sure how this got past legal, as they are usually pretty vigilant about these things, but I have reached out to the company and am currently waiting for a response.
The promo in question can be found on AMD's official YouTube page and shows 5GHz clock speeds at the 1:34 timestamp. Considering Ryzen PRO cannot hit a 5GHz turbo (even on a single core) without LN2, this would probably be considered false advertising, though hopefully AMD will take the video down as soon as possible. The complete video (which might have been taken down by the time this article goes live) can be found below:
-
AMD really needs to get their marketing team in line; they've been coming out with some real stinkers lately.
-
They can easily set the boost internally to 5GHz, but as with the current Ryzens, that does not mean it will hit it.
-
der8auer’s Ryzen 3000 Series Boost Survey Reveals Worse Than Expected Boosting Wccftech.com | Sept 1, 2019
AMD may be dominating in CPU sales, but they've been dealing with one very annoying issue since launch on July 7th this year, and that is their boosting. Take, for instance, the Ryzen 9 3900X with its 4.6GHz boost clock: while you would expect that to apply to a single-core or maybe dual-core load, the reality for myself and many others is that it's just a blip on the monitoring radar, coming and going so fast you'll likely never actually see it. Mine typically does best at around 4.45GHz on one board and maybe 4.5GHz on the other if I'm lucky. That's not a bad clock speed by any means, but it's also not the advertised 4.6GHz. AMD even went in depth on their new PBO + Auto OC, showing how you could get up to 200MHz of additional boost; maybe, but I haven't seen that one work either. -
That's no different than Intel / vendors boasting that the 8700K / 9900K would reach a solid all-core 5.0GHz, and then time after time people reporting they get 4.7GHz/4.8GHz/4.9GHz and never quite achieve all-app stability at an all-core 5.0GHz in their desktop / laptop.
Same thing: variability of the system environment. In fact, Silicon Lottery says their binned CPUs might not hit the same high watermark as in their test setup, because of that variability.
Give it a rest.
-
There is a main difference... We are talking about the stock max turbo boost, not overclocking or single-core max boost applied to all cores.
See... the Ryzen 9 3900X features a boost clock of 4.6GHz. Stock.
Same as the Intel 9900K features a boost clock of 5.0GHz. And they all hit 5.0GHz, whether on the 1 or 2 cores the spec calls for.
That's the main difference. We are talking apples vs. apples, not apples vs. oranges. -
Well, I guess you and Intel are lucky some people aren't seeing the maximum potential of their AMD Zen 2 CPUs; otherwise the 9900K would be even further down the performance ratings than it is now.
In the few things where the 9900K is faster, that little bit of extra boost would bring the AMD CPUs up and past the 9900K in more and more tests, the closer they get to their ideal performance potential.
Keep working hard to point that out to AMD so they can gain that edge and completely surpass Intel CPUs in the next stepping, or in Zen 3.
AMD are listening, and they tend to fix these problems along the way.
If it doesn't happen for Zen 2, Zen 3 is only months away... just keep encouraging AMD to meet their specs; every little bit helps.
-
As it is now... the 12-core Ryzen 9 3900X is faster, and the same will be true of the 16-core Ryzen 9 3950X, in multithreaded workloads.
https://www.techpowerup.com/review/amd-ryzen-9-3900x/22.html
I hope they fix the problems mentioned above, because not being able to run the stock boost isn't good, whatever brand the chip. -
As I've seen it play out, the single-core boost is rarely hit because the OS is active, lighting up additional cores before a single core even has time to register full boost.
It may be that the characteristically super-fast reaction time of the Zen 2 cores, even slowed down by a factor of 10 so as not to keep drawing power when "woken up" by monitoring software, means a core doesn't "hold" at the top boost; the trajectory changes before, or just as, it gets there.
It's a different, faster-reacting algorithm.
AMD has already "adjusted" the reaction time once, slowing the cores' response so they resist peak power draw at idle under Windows / monitoring "workloads" and don't sustain unnecessary power draw at idle.
To now artificially hold the top boost frequency on one or more cores longer than necessary, just to be "visible" and to make @Papusan and @der8auer happy by "proving the spec boost is being reached", would further hobble AMD's design, drawing power unnecessarily and negating the purpose of the cores' fast reaction times.
It makes sense that with such fast reaction times the top boost wouldn't be seen in stark silhouette, any more than a hummingbird's wings are seen frozen at full extension; you see a blur of averages around full extension, which is reached only briefly, if at all.
Better monitoring tools are likely the solution; peak-hold reading meters come to mind. Or perhaps a counter that ticks over every time a core hits the advertised full boost: a lifetime odometer reading for each core, plus a resettable "trip meter" counter? (A sketch of the idea follows.)
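Something like that peak-hold counter could be sketched in a few lines (the boost threshold and poll rate here are hypothetical; psutil's per-core frequency reporting depends on OS support, and software polling will still miss boost residencies shorter than the sampling period):

```python
import time
import psutil  # pip install psutil; per-core frequencies need OS support (e.g. Linux)

BOOST_MHZ = 4600  # advertised single-core boost for a 3900X
peak, hits = {}, {}

for _ in range(10_000):  # roughly 10 seconds at a 1ms poll
    for core, freq in enumerate(psutil.cpu_freq(percpu=True)):
        mhz = freq.current
        peak[core] = max(peak.get(core, 0), mhz)  # peak-hold meter
        if mhz >= BOOST_MHZ:
            hits[core] = hits.get(core, 0) + 1    # boost "odometer"
    time.sleep(0.001)

print("peak MHz per core:", peak)
print("samples at/above advertised boost:", hits)
```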
I'm simply pleased with the new Zen 2 CPUs kicking Intel out the door, with AMD taking market share in desktop and the datacenter, doing it first with Zen 1, then with Zen 1+, now with Zen 2, and soon with Zen 3. Who could ask for more? -
Those people are probably confused. Their software might have some AVX in it that they aren't aware of; since the Intel CPU is advertised at a 4.7GHz all-core boost, that's what they'll get when AVX isn't involved, with the higher boosts being single-core and dual-core loads, etc.
Zen 2 has issues even with the default boost; it isn't at the advertised level. AMD definitely has some issue, and it needs fixing before the 3950X comes out.
Most importantly, we enthusiasts care about overclocking these high-performance CPUs. The single-core boosts don't matter, since they happen like 0.01% of daily usage anyway, rofl. We want an all-core OC of 4.8GHz minimum; of course, the higher the better, as long as it can be cooled by reasonable measures. -
September 10th is when AMD said they will provide more details on the firmware fixes; the delay is likely for distribution and coordination with vendors, so they are ready to make BIOS updates available.
September is also when the 3950X becomes available, so hopefully those boost fixes will be in the BIOS along with the 3950X firmware updates for the new X590 motherboards (?), and made available for previous-gen motherboards.
AMD Ryzen Verified account @AMDRyzen
8:00 AM - 3 Sep 2019
https://twitter.com/AMDRyzen/status/1168901636162539536
https://www.reddit.com/r/Amd/comments/cz620f/amd_releases_statement_regarding_the_recent_ryzen/
Given the too-quick-to-measure core scheduling of boost vs load, I don't see how a single-core boost number is a valid thing to share any longer. It's pretty much a fleeting state of existence; with so many cores and so many threads in play, how can the single-core boost be "visible"? I hope AMD doesn't muck things up just to satisfy a marketing number that no longer even makes sense to publish. -
Only one solution: set HWiNFO to a 1ms recording interval.
Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk -
That's the problem: the AMD Zen 2 CPU reacts so fast that the monitoring software can't keep up. Publishing the single-core boost is almost an anachronism from a bygone age. With so many cores and so many simultaneous OS and user threads, I can't imagine a single core hitting max boost before the scheduler goes multi-core, with the single core reconfigured before it can reach max boost.
1ms is still too long.
As I said above, the single-core boost number is a fleeting state of existence that no longer makes sense to publish. -
Nice twisted logic, but if it can hit it, it will be reported.
It's not hitting it yet. And no, this isn't just marketing (stop spinning...). -
Just because we are enthusiasts does not mean we all care about overclocking the CPUs. In fact, overclockers are a minority even among computing enthusiasts. We do care about the CPUs hitting advertised clocks, though.
-
Every time I think about going Intel for my next desktop upgrade, they shoot themselves in the foot.
-
No one really fully knows what sort of workloads they use, though. If only MS could somehow make a new OS that uses multithreading properly while staying compatible with old legacy software; then we'd all be happy.
https://www.gamersnexus.net/news-pc/3510-hw-news-threadripper-8-channel-4-channel-leaks
8 channels on TR 3000. If only it could use Optane DIMMs... god damn. -
Optane DIMMs are CRAP! Literally: they are overpriced, they underperform, and they are ONLY useful in a handful of edge cases. But you don't even understand the need for memory bandwidth at times, or bandwidth per core on high-core-count chips, so I wouldn't expect you to have taken the time to look at the handful of cases for Optane DIMMs, or at how support for up to 4TB of memory and falling memory prices pretty much cratered their utility.
-
TR is overpriced for most people from a consumer standpoint, but people still buy them. So for those of us willing to pay the premium, an Optane DIMM is really just another "premium product" we pay for.
Besides, getting an Optane DIMM isn't without its benefits. It'll be faster than any Optane SSD over PCIe, and the DIMM can be configured as storage rather than RAM, so it will actually hold data like a proper storage device.
With 8 channels, I could go 6 channels of RAM and 2 Optane DIMMs, or a 4/4 config.
Edit: also, the workstation part with 96-128 PCIe lanes but no OC support; looks like coreteks' guess was right, dual socket possibly happening. -
I would hold off on those thoughts until we actually see the chipsets, etc., and of course a lot of testing too. It's way too early to speculate on what will do what.
-
Including this review of the 3700X and 3900X for streaming. The only reason it is here instead of the AMD Ryzen thread is his mention of considering replacing his 7980XE streaming rig with the 3900X, which has 6 fewer cores. That suggests that if a person is considering the 16-core 3950X for streaming, it would definitely outperform the 7980XE.
@D2 Ultima - you might be interested in this video. -
If only Acer / Asus would make their AMD gaming laptops soon after new AMD CPU / GPU releases, and MSI / Gigabyte / HP / Lenovo / etc would jump in with new high end AMD gaming laptops.
Maybe Intel will surprise us all and release a completely new architecture on 7nm that solves all of the security problems, and competes with AMD on price / performance?
Wouldn't that be nice.
-
Haha. Did you see Intel's new side-channel attack, NetCAT? Lol