'Anti-lag is a software optimization to ensure the CPU work doesn't get too far ahead of the GPU, to reduce input lag.' Sounds a lot like future frame rendering or a render-ahead limiter with a new marketing name.
AMD said the same thing when they released Radeon VII, and that turned out to be mostly a lie. I'll be waiting for third-party reviews. AMD will also likely be going up against Nvidia "Super" cards, with the "old" cards being price-reduced first. AMD's cards also launch with a blower cooler at prices similar to Nvidia's dual-fan designs, while the desirable aftermarket dual- and triple-fan designs with superior VRMs built for overclocking will be much more expensive.
-
https://www.anandtech.com/show/1452...itecture-analysis-ryzen-3000-and-epyc-rome/11
Great read for a deeper dive into Zen 2 by Ian Cutress!
Now, here is the memory speed and IF shift. They did double the bus width on IF2, which is nice.
At DDR4-3733, the IF runs at roughly 1866MHz. In 2:1 mode, it would take DDR4-7466 memory to get the IF back to the speed it has at 3733 in 1:1 mode, and that isn't happening (at least until DDR5, coming soon). What that means is the challenge will be getting the memory, if the ratio can be set manually, up to 4000-4200 while staying at 1:1. I chose those frequencies because they are about the fastest Zen+ achieved, granted with looser timings.
With a good set of subtimings, that latency, even at 3600 or 3733, should potentially be lower, depending on how much extra latency the IF adds after the memory call. I can get 58-62ns memory latency on Zen 1, and others regularly get the mainstream chips down into the 60ns range. So, judging by the chart above, with some tuning people may be able to match or beat AMD's numbers with real-life RAM tuning, which is nice. Still need to wait for verification.
But with IF, there is a limit on how much more speed will reduce the latency. I looked at this a couple of years ago in this thread with Zen 1, and I called out Ryan Shrout for misleading comparisons of IF latency against Intel's mesh on the HEDT platform (to be clear, the mesh still won; the issue was by how much, since Intel's mesh saw the same latency reduction when using faster memory).
So, this could be exciting and fun if you are a memory overclocker, if for no other reason than to see how far you can go at the 1:1 ratio, or to see if you can match 3733 by getting above 5100MHz with tight timings. To be clear, 5100 RAM would run the IF at 1275MHz, or just slightly less than you would get with 2600-speed RAM at 1:1. The bandwidth is still double that of IF1, though, so this is more about the latency reduction from speeding up the IF than the added bandwidth compared to prior gens (although if you keep it 1:1 above 3600, you get both increased bandwidth and decreased latency). So much testing I want to do on the trade-offs, and on what memory speed in 2:1 mode makes up for them.
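To put rough numbers on that, here's a quick Python sketch of the ratio math (this is just the simplified DDR-rate/memory-clock/FCLK model I'm using above, not anything from AMD's documentation):

```python
# Rough model: FCLK equals the memory clock in 1:1 mode and half of it in 2:1
# mode, where the memory clock is the DDR data rate divided by two.

def fclk_mhz(ddr_rate, ratio="1:1"):
    """Approximate Infinity Fabric clock for a given DDR4 data rate."""
    memclk = ddr_rate / 2                  # DDR4-3733 -> ~1866 MHz memory clock
    return memclk if ratio == "1:1" else memclk / 2

print(fclk_mhz(3733, "1:1"))   # ~1866 MHz, the 1:1 sweet spot
print(fclk_mhz(7466, "2:1"))   # ~1866 MHz, why it would take DDR4-7466 in 2:1
print(fclk_mhz(5100, "2:1"))   # ~1275 MHz, slower IF than DDR4-2600 at 1:1
```
-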
"The AMD 3rd Gen Ryzen Deep Dive Tech Briefing - Delivering the deep dive tech briefing this time is Robert Hallock. Robert, if you don’t know yet, is the Senior Technical Marketing Manager at AMD."
The AMD 3rd Gen Ryzen Deep Dive Tech Briefing!
Tech ARP
Published on Jun 11, 2019
Delivering the deep dive tech briefing this time is Robert Hallock, AMD's Senior Technical Marketing Manager.
The presentation slides and additional graphs are available at the website below.
The AMD 3rd Gen Ryzen Deep Dive Tech Briefing!
Posted by Dr. Adrian Wong, Date: June 11, 2019
https://www.techarp.com/computer/amd-3rd-gen-ryzen-3000-tech/
"AMD just revealed the 3rd Gen Ryzen family of processors at Computex 2019 and E3, with six models built on the 7 nm process technology! Join us for a deep dive into the AMD 3rd Gen Ryzen technical details with Robert Hallock!"
Hearing the details outside the confines of the noisy and rushed stage presentation, along with additional supporting material at the website should answer more questions. There are additional AMD briefings coming up and I'll try to find sources for them I can share as well. -
I still think water cooling is the way to go on the Navi cards, to be honest, and I want to see how far they can go in a premium loop. Plus, if you want more compute power, Navi really has something to offer versus the Nvidia cards. If you want ray tracing and tensor cores, even though they aren't heavily employed in games to date, the 2070 is there. If you want raw frame rate, the used 1080 Ti is there.
I'd have to do a price/performance analysis at release, but I agree with JayzTwoCents: AMD released something competitive here. And, as you and he mentioned, people should wait for Nvidia's response, along with third-party reports. Here is his video.
But, considering Nvidia set the pricing structure, it seems consumers have a LOT of choice at the $450-500 market segment.
Seeing AMD admit they were originally just going to do a die shrink, then decided to forget that and went for a full redesign, is AWESOME. Also, many journalists are confused about whether the 15% is over Zen or Zen+. Considering Ian Cutress asked for confirmation multiple times on AVX and other points, his saying the 15% IPC is over Zen+ makes me trust his answer over others, although it still needs to be confirmed in independent testing.
When I did my calculations before release, I assumed 13% IPC over gen 1 and 8-9% over Zen+, giving about 5-6% IPC over Intel. If the 15% average (13% in CB15) is measured from Zen+, that is around 18-19% IPC over first-gen Zen, which puts the IPC lead over Intel closer to 10-11%, varying by task, of course. That is also why a 4.5-4.6GHz all-core clock is comparable to Intel's 9900K all-core clock of 5GHz. The question is whether that frequency on all cores is realistic on Zen 2. If it is, the $400 chip will definitely be one to look at, depending on its OC headroom compared to the 3700X.
When I previously did my estimates, I assumed 13% IPC over gen 1 and an 11% speed increase over the peak all-core OC. We still have to see the peak overclock, but the better IPC increase means it will take as little as 5% higher all-core clocks (around 200MHz or a little more) to hit the lower end of my estimated gain over original Zen, although my estimate was high, as I gave an equal-to-slight edge to Zen 2 when the reality is equal to a slight edge for Intel, depending on how well these CPUs clock. (Yay for having had an idea of this performance since around last September or October; a decent prediction to hold up, potentially, 9 months later.)
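For anyone following along, here's that back-of-envelope math as a quick sketch (the Zen+ uplift and the 10% IPC edge over Intel are assumptions from my estimates above, not measured numbers):

```python
# Stack the claimed/assumed IPC gains, then compare effective clocks.
zen2_over_zenplus = 1.15     # AMD's claimed average IPC gain over Zen+
zenplus_over_zen1 = 1.03     # rough Zen+ uplift assumed here
zen2_over_zen1 = zen2_over_zenplus * zenplus_over_zen1
print(f"Zen 2 vs Zen 1 IPC: ~{(zen2_over_zen1 - 1) * 100:.0f}%")     # ~18-19%

# If Zen 2 ends up ~10% ahead of Intel in IPC, a 4.5-4.6GHz all-core OC lands
# in the same ballpark as a 9900K at 5.0GHz all-core.
ipc_edge_over_intel = 1.10
for clock in (4.5, 4.6):
    print(f"{clock}GHz x {ipc_edge_over_intel:.2f} ≈ "
          f"{clock * ipc_edge_over_intel:.2f}GHz-equivalent vs Intel")
```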
AVX2 without the frequency offset is another nice feature, although they did mention that the chip will throttle as it would with any other workload, so the peak performance of AVX2 will be something to watch.
Also, the Windows scheduler change is important. Instead of spreading threads across cores, it will now spawn threads on neighboring cores in the same CCX first. That means there will be die hot spots and potential for degradation if people are not mindful of their overclocks. Not really a concern, as AMD tested it and feels the added benefit is worth the risk. But it does make me wonder if they will expose per-core temperature readings rather than a single die temperature, which would be useful with this change.
And finally, PCB porn of the package the chiplets are on with trace layout:
No core-chiplet-to-core-chiplet comms; it's confirmed that everything goes through the I/O die. Likely the same with Epyc. Standardizing that latency has some benefits, which, combined with a core/CCX-aware scheduler, should definitely help performance (AMD claims up to 15% in some games, which was not included in the numbers they presented). -
@D2 Ultima @yrekabakery - I'd like to get your opinion on the FidelityFX features. Will this be the next TressFX (meaning adopted by a couple of titles and then abandoned, even with its smaller performance hit)? And your opinion on the anti-lag switch on the GPUs? Etc.
As to the CPU, my performance estimate of 24% is still in play, although my pricing was wrong. The only reason the estimate is still in play is lower frequency but potentially better IPC than I estimated (see my comment above on Ian Cutress at Anandtech again saying the 15% IPC is over Zen+, which is the basis for that).
Also, what was your opinion on that 12-core playing at 1440p while encoding in OBS on the slow preset at 10,000 bitrate, 1080p, and almost 60fps? -
-
yrekabakery Notebook Virtuoso
Anti-lag is probably the equivalent of Nvidia’s max pre-rendered frames, which AMD calls flip queue size but does not expose in its driver GUI (it can be tweaked in the registry).
The OBS test was unrealistic, done to put the higher core count AMD part in a better light, because people don't stream using those settings; they use faster presets or GPU encoding (e.g. Turing NVENC) and a lower bitrate. -
There is a difference between unrealistic, like using a low resolution such as 720p to show the performance of a CPU, and this.
It is like Intel telling reviewers to try a dual stream at medium to compare their chips to AMD's 1700X or 2700X, then calling it unrealistic when AMD hits back.
NVENC doesn't have the performance impact, and Intel's CPU-encode optimizations are known to prioritize gameplay frame rate over the encode. Now, you get the frame rates PLUS an even higher quality CPU encode than NVENC or Intel, or the ability to easily run multiple streams to different platforms at lower quality, all for the same cost.
Some nuance is needed regarding Intel frame rates when overclocked, and we need to see the 12-core's overclocking capability, but I wanted to point out that there is a difference in how "realistic" is being used here. -
-
What I really want reviewers to test and see are these:
- 1:1 at 2133MHz vs 2:1 at 4266MHz with comparable timings, to see how performance differs between the two setups and which software benefits from which configuration.
- Intel with & without mitigation patches vs Zen 2 with & without OS optimizations.
I'd like to see Ian from AnandTech and Paul from Tom's do a test of Intel + mitigation patches vs Zen 2 with OS optimizations; the performance difference will be like 20-30% even with the higher clocks on Intel, LOL. -
Will be keeping an eye on the 16 core $750 CPU myself. Don’t need the cores but why not. Unless the gaming performance turns out to not be as good as they say.
Nobody uses the slow preset for encoding, and if they did, they would certainly want an external machine. Most streamers use fast or medium to maintain quality, and have access to Quick Sync or Nvidia's encoder. Again, marketing magic! -
Mixer supports 10k bitrate, as does YouTube, though. But again, 10k is the highest they'll allow (last I checked, which was about a month ago).
For new NVENC this is mostly correct. There is still the performance hit of the capture mode, but that is a lot more negligible since RAM is out of the picture, with the new NVENC encoder grabbing directly from the framebuffer. It also gets rid of most of the GPU limit = lag issue that is still present with old NVENC (which still has its uses), x264, and VCE, which you can see an example of here, where I limit my FPS and the stream stabilizes (I run the limit for around 5 minutes, so you can watch for 2-3 minutes then skip ahead).
This is not the case as far as I've experienced it. It is purely the streaming software. XSplit pulls as much as it can ever need from the system and the game suffers greatly as a result, and if you raise OBS's priority (which can be done here) you'll get a similar effect. OBS is designed out of the box to let the game flourish more often than not.
I looked at the demo they were doing. The game they showed supposedly had no performance loss (they still cited 90fps), which confuses me. I do not take that test as valid. Do not misunderstand me: I do NOT think there is no benefit to 24 threads vs 16 threads for CPU-based encoding. But if the 9900K was maxed out to the point where only 2.1% of frames were delivered, and it was running at its stock 4.7GHz with decent RAM, then the 24-thread part should barely have any CPU power left, and the game should be running rather badly. Hear me out:
They cited a Tom Clancy game. This is a Ubisoft PC title. It IS very CPU heavy, and very RAM dependent. They showed 90fps on each, indicating a GPU bottleneck, but it is clear that the CPU was doing quite a bit of work. Click this video for the timestamp where a youtuber named Wolfgang is testing the game. He's using a RX 570 and he's getting well under 90fps, and his 8700K is bouncing between 35% and 50%. So a solid 90fps means there is a fairly strong GPU and both CPUs are being eaten to a not-insignificant degree. Even if we assume the 8700K to be 1:1 in clock/core for clock/core, this means that the AMD 12-core is sucking up over 25% in the same conditions at pretty much any time, and the 9900K is closer to perhaps 30-40%.
If the 9900K is so overloaded to the point where it can only send 2% of the frames to the stream, it must be COMPLETELY maxed, and shouldn't be getting anywhere near 90fps in the game anymore. For the AMD CPU to be delivering a completely steady framerate like they boasted, it means that the extra... 15% CPU power leftover is enough to go from 2% frames delivered to 98% frames delivered. That does not seem correct to me. Especially for the game to also remain above 60fps the whole time.
BUT I will call the scenario they claimed valid, as they specifically said: 10k bitrate, streaming to youtube. That is 100% valid.
When some people get their hands on it and try their own encoding to see if it works, I'd like to look at that. I do not trust AMD's (or ANY COMPANY'S) marketing showcases for things like this. Especially considering some claims I've seen and everyone's attempts to sabotage competitors in the past. As it stands right now, the only method I could consider for their proposed test case being correct is if that 9900K was both:
1 - Not using Turbo (3.6GHz)
2 - Not using good RAM (2400MHz perhaps)
while the AMD chip was turboing to a high degree and good RAM was in use.
There simply isn't "enough" extra CPU power just waving around for that drastic a difference. If they told me that it dropped about 30%-40% of the frames and in-game FPS tanked, that's much more understandable, but they insisted that gamers saw full fps and streamers saw (virtually) none.
Now, the real problem with marketing it to streamers is simply how good Turing is, in my eyes. If we're considering free x264 medium at any resolution/bitrate/60fps, it means that AMD needs to be at least equal. The 16-core would be a nice CPU but is $750, whereas a spare 1660 is roughly $200, can run second monitors, and a $150 board plus a $300 CPU is enough to run the game alongside it, without needing any special work for optimum performance. I'd like to see what the 12- and 16-core CPUs do in CPU-heavy games at high framerates, like Monster Hunter World (which sometimes eats my entire 7700K for 70fps) or Black Ops 4 (where high framerates are very desirable to the player).
CPU encoding does have its advantages, and even if you're doing x264 medium vs Turing NVENC, x264 will have better colour reproduction even if sharpness does not increase, as well as being able to downscale secondarily (there are two types of downscaling: primary, at the basic output stage, and secondary, in a post-process stage). As you can see, NVENC (new) lacks this secondary downscale option. If AMD is able to reliably provide heavy CPU compression and good in-game frames from a 16-core CPU that isn't $1500 USD, without much end-user tweaking, then great. It's definitely an option. I'd like to see you or some other competent end users get it, though, especially if I can instruct them on how to check properly and report back to me so I can sift through the correct kind of data.
I'll also wait on neutral-party testing of the CPUs. I remember how they insisted that the 1800X schooled a 6900K before launch, and what actually happened in the wild. But if they truly raised IPC to a large degree, and the CPUs EASILY hit 4.5-4.7GHz, and a $200+ board is not necessary to do so, then I'd say Intel's gonna be sweating. I am more interested to see how they handle dual-rank memory sticks and large amounts of memory, though. -
Edit: correction, I alluded to it. I did not directly say it. My bad.
Now, how could they have done it? One way is to use Process Lasso, or just set affinity in Windows, attaching the game to certain cores within a single CCX or two CCXs on one die and dedicating the remaining four cores/8 threads or six cores/12 threads to the encoding. But this emphasizes why knowing how they set up the stream is important.
Note also that Jim from AdoredTV said to ignore the OBS demonstration, because when AMD did one at the Zen 1 release, they set up the Intel machine incorrectly, which botched the results to show AMD having extra performance; that gets back to your original point on test conditions.
But, considering the prior game performance they showed had the 9900K and the 3900X roughly in line, there is a distinct possibility that restricting the game to 6 cores on one die of the Ryzen chip and 6 cores on the 9900K, both at stock, would show the same gaming performance in that demo.
Other points to consider: in AMD's tech day press announcement, they mentioned that all tests were run without all of the vulnerability mitigations applied on the Intel CPUs, and that they did not use the May 1903 Windows update, which contains the new CCX-aware scheduler that spawns threads first on cores within the same CCX and can give up to a 15% improvement in some games. In that way, they gave the Intel 9900K extra performance in some respects while sandbagging their own chip with the old scheduler. MCE was turned off, so the Intel 9900K was running at its stock 4.7GHz all-core boost, but able to boost to 5GHz on a single core.
That was just to detail known test conditions, which are not a replacement for independent reviews.
Now, I'm not understanding where you estimated these numbers from: "this means that the AMD 12-core is sucking up over 25% in the same conditions at pretty much any time, and the 9900K is closer to perhaps 30-40%." There are many embedded assumptions in there. Let's say the performance of the 12-core is within single digits of a 7920X at stock; I don't care if you go plus or minus. Now, are you assuming that the 9900K's 4.7GHz all-core boost, relative to the 7920X's lower 4GHz all-core boost, works out to be worth roughly that much of a difference? It isn't a bad assumption, nor are those numbers far off from my own eyeballing. I'm just trying to understand where the estimates came from and the underlying assumptions.
But I believe other slides showed an extra 25-31% performance in productivity software over the Intel 9900K. If true, then you may be underestimating the performance left in the tank, by up to 10-15%, which could also be part of it. (A counterpoint is that Handbrake had a much lower delta of about 8 or 9%, meaning you could be generous with your 15% estimate comparing the two, which would further suggest my affinity argument above is correct and AMD was being misleading, as you and Jim from AdoredTV both suggested.)
Either way, the affinity argument I gave more likely explains the effects than the way you proposed they tested it. If AMD did reach the 15% IPC claim, which Ian Cutress at Anandtech says is over Zen+ (whereas Gordon Mah Ung from PCWorld said over Zen 1, and I generally trust Ian's reporting over Gordon's), then there is a chance you are not fully appreciating the performance difference this generation of CPU has brought.
Ian detailed, in excruciating depth, many of the changes behind the IPC uplift here: https://www.anandtech.com/show/14525/amd-zen-2-microarchitecture-analysis-ryzen-3000-and-epyc-rome
Unfortunately, I am still waiting for the Threadripper update, which will likely be a CES 2020 launch (make me wait forever). But, without the NUMA issue and with a Windows scheduler that can recognize and group by CCX for thread spawns, there should be significant performance uplift in the next gen CPU on that platform.
Another unfortunate thing is that motherboard prices for X570 are up, but the bright side is that the quality of the boards is higher, with some even having Thunderbolt 3 on board (detailed a bit here: http://forum.notebookreview.com/thr...ga-polaris-gpus.799348/page-604#post-10920660 ). Motherboard vendors disclosed to GN that they are selling more AMD boards than Intel boards, which matches other disclosures of AMD taking around 66% of DIY purchases (AMD's overall desktop market share, though, is currently around 18%; DIY is a subset).
(But here is the video mentioned from Tech YES City.)
RedGamingTech talked about it a bit here:
As to the frequency on OC, we will have to see. But you will need a better-VRM X470 board, or potentially an X570, for heavy OCing on the 16-core 3950X. Rumors were that heavy OCing may push up to 300W on the chips, so at least an 8-phase design with 60A stages would be nice for enough buffer, and most of the mainstream boards used 50A stages, if not 40A. So recommendations on cheap boards to drive the 12-core and 16-core definitely have to wait until we get independent reviews of power draw and overclocking. But the IPC should be above Intel's; the question is by how much.
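Rough napkin math on that VRM point (the 300W figure is a rumor, and the voltage and efficiency numbers here are assumptions, purely for illustration):

```python
# Why 8x60A stages leave more buffer than 8x50A or 8x40A at a rumored ~300W draw.
cpu_power_w = 300          # rumored worst-case heavy-OC package draw
vcore = 1.3                # assumed overclocked core voltage
vrm_efficiency = 0.90      # assumed conversion efficiency

current_needed = cpu_power_w / (vcore * vrm_efficiency)   # ~256A at the socket
for stages, amps in ((8, 60), (8, 50), (8, 40)):
    capacity = stages * amps
    print(f"{stages}x{amps}A = {capacity}A rated, "
          f"~{capacity / current_needed:.1f}x headroom over ~{current_needed:.0f}A")
```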
Ultimately, one month should tell us more. And I would like to thank you guys for your well thought out responses. I do thank you. -
https://mobile.twitter.com/GamersNexus/status/1138567315598061568?s=19
“AMD's game streaming "benchmarks" with the 9900K were bogus and misleading. We did those tests ages ago and the 9900K is nowhere near as bad as it was painted. You can force it to be bad, but it's very forced.”
AMD being called out by Gamers Nexus now. Glad it's not just a few of us calling it BS. Tech Jesus himself saying BS, lol! -
Direct link to the article showing the encoding at medium in Fortnite and DOTA2.
Also, see my comment above on my theory of them using affinity to accomplish it, and my statement on it being misleading.
What game were they streaming again? Honestly forgot.
And, even AdoredTV said that the streaming demo could not be trusted here:
https://adoredtv.com/amd-e3-live-blog/ -
-
My theory on the affinity is that they locked the game to 6 cores on each CPU (one die on the 3900X, so there was no off-die latency for the game), then locked 2 cores on the 9900K to OBS and the 6 cores on the other die of the 3900X to OBS, which would likely produce the results seen there.
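If anyone wants to try replicating that kind of split, here's a rough sketch of how it could be scripted. This is purely illustrative: it assumes Windows, Python with psutil installed, made-up process names, and a logical-CPU numbering where SMT siblings are paired per core with die 0 first.

```python
# Hypothetical sketch: pin a game to one die of a 3900X and OBS to the other.
import psutil

GAME_EXE = "TheDivision2.exe"    # hypothetical process name
OBS_EXE = "obs64.exe"            # hypothetical process name

GAME_CPUS = list(range(0, 12))   # logical CPUs 0-11: cores 0-5 + SMT siblings (die 0)
OBS_CPUS = list(range(12, 24))   # logical CPUs 12-23: cores 6-11 + SMT siblings (die 1)

def pin(exe_name, cpus):
    """Set CPU affinity for every running process matching exe_name."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] and proc.info["name"].lower() == exe_name.lower():
            proc.cpu_affinity(cpus)   # same effect as Task Manager / Process Lasso
            print(f"Pinned {exe_name} (pid {proc.pid}) to CPUs {cpus}")

pin(GAME_EXE, GAME_CPUS)
pin(OBS_EXE, OBS_CPUS)
```
The point is only that a split like this is trivial to set up by hand (Process Lasso or Task Manager's affinity dialog does the same thing), which is why knowing whether AMD did it matters. -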
Ok I've gotta cut this up for replies, I'll be stitching relevant parts together.
Using this information, if I consider the AMD CPU's per-core performance to be the same (100% on 1 core of an 8700K = 100% on 1 core of a 3900X), ignoring all clockspeed/RAM/etc. configs, then I did it like this:
8700K at 90fps = probably 50% on average or higher (someone in another server told me tonight that 7700K gets eaten alive by Division 2 when aiming for higher fps as well)
8700K at 50% = 3900X at 25%, let's assume this to be around 30% (non-linear scaling)
8700K at 50% = 9900K at 0.5/1.33 = ~37.5%, let's assume this to be closer to 40% (non-linear scaling)
If a 9900K is 2/3 of a 3900X, then 60% of a 9900K (leftover power into gaming) is around 40% of a 3900X. If 40% of a 3900X is not even capable of 3% of that stream, then less than double the performance isn't going to grant near 100% of that stream. I DID math it out a bit unfair to AMD saying 15%, but it'd likely be closer to an extra 20-25% it has (accounting for game spikes). That's where I got the math from. Now, I explained the stream stuff above already, but just explaining my mathing out here.
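Spelling that out as a quick sketch (all the percentages here are the assumptions from my reasoning above, not measurements):

```python
# Express each CPU's leftover encoding headroom in "3900X units",
# treating total throughput as roughly proportional to core count.
cores_9900k, cores_3900x = 8, 12

game_load_9900k = 0.40    # assumed share of the 9900K eaten by the game
game_load_3900x = 0.30    # assumed share of the 3900X eaten by the game

leftover_9900k = (1 - game_load_9900k) * (cores_9900k / cores_3900x)   # ~0.40
leftover_3900x = 1 - game_load_3900x                                   # ~0.70

print(f"9900K leftover ≈ {leftover_9900k:.2f} of a 3900X")
print(f"3900X leftover ≈ {leftover_3900x:.2f} of a 3900X")
print(f"Ratio ≈ {leftover_3900x / leftover_9900k:.2f}x")   # under 2x, hence the skepticism
```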
-
1) Are we talking 8 cores or 8 threads for utilizing x264?
2) Does this limitation apply if you lock it to only 6 cores on the CPU using affinity, rather than sharing resources between the game and OBS and letting Windows sort it out?
2.a) By this I mean, if you used affinity to only allow OBS to use 6 cores (not selecting the SMT threads, just the thread attached to each physical core and leaving the sibling thread empty), could it be done (depends on the answer to number 1)?
2.b) Or is the problem that using any CPU with more than 8 cores just makes OBS not want to work at all, regardless of limiting the cores?
2.c) If using affinity, wouldn't that restrict it so it wouldn't even use anything beyond the 6 cores you locked it to?
I did word that part a little funny. What I meant is locking the game to 6 cores on one die (of the 12-core chip) using affinity, then locking the other 6 cores to OBS using affinity; that way OBS isn't trying to use more than 8 cores.
Read this before the above questions: Other than that, you did mention trying affinity with Luna's rig, so that wouldn't matter. That does beg the question of how in the hell they accomplished it. Makes me want to see not only it in a neutral setting, but I want them to publish their settings for OBS for scrutiny, that way people can see HOW to replicate it, if it can be replicated at all.
But this is why I wanted your opinion on it specifically, because I know you know streaming and can sniff these things out with that test. Also because I learn!
Does OBS use AVX?
Many of the boards are rated up to 4400-4600MHz for RAM overclocking, which suggests AMD isn't lying about 3200MHz RAM support and the recommendation to buy 3600MHz RAM for these chips (AMD said it, meaning if it doesn't work, they can get hanged by public opinion on the internet). So I'd say the memory issue is, to a degree, resolved.
With that, there is the further issue that they added a 2:1 memory-to-Infinity-Fabric ratio switch, something ole brought up above. To reach higher memory speeds (which AMD puts at 3866 and above), you have to cut the IF speed in half. That means lower bandwidth and more latency on the IF if you want higher memory speeds and lower memory latency (so you take the latency hit on one or the other, or clock the memory slower and tighten timings like crazy).
So, the memory support should be better, but it isn't the full story. Trying to use 1:1 mode and pushing the RAM to 4000+ is likely where the action will be. Yuri Bubliy (creator of the DRAM Calculator) said there is a manual switch for the 1:1/2:1 mode, so if true, it might auto-switch and you would have to manually set it back to 1:1 if using memory above 3733.
And the 5133 demonstrated was at CL19-21-21 or 18-21-21, I believe, but using the 2:1 memory-to-IF ratio, which means the IF was running only a little above 1250MHz, whereas 3600MHz memory runs the IF at 1800MHz.
So it's not a simple thing to explain to gamers, who will just want to buy fast RAM and run with it without understanding the implications (likely why AMD recommended 3600MHz).
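Here's a small sketch of that trade-off with illustrative kits (first-word CAS latency in nanoseconds versus the resulting IF clock; the kits and timings are examples, not AMD-validated configurations):

```python
# Lower CAS nanoseconds from a faster kit vs. the IF clock you give up in 2:1 mode.

def cas_ns(ddr_rate, cl):
    """First-word latency in ns: CL cycles at the memory clock (DDR rate / 2)."""
    return cl / (ddr_rate / 2) * 1000

def fclk_mhz(ddr_rate, ratio):
    return (ddr_rate / 2) / (1 if ratio == "1:1" else 2)

for ddr, cl, ratio in ((3600, 16, "1:1"), (3733, 16, "1:1"), (5133, 19, "2:1")):
    print(f"DDR4-{ddr} CL{cl} ({ratio}): ~{cas_ns(ddr, cl):.1f}ns CAS, "
          f"IF at ~{fclk_mhz(ddr, ratio):.0f}MHz")
```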
Either way, competition is back. Seems like Intel is letting AMD take desktop market share (not much they could do to stop them), but desktop is the smallest piece of pie. Mobile is a larger market, and that is why Intel is dropping the Ice Lake U and Y series there, which is very competitive with the AMD APUs and is a direct punch back at their expected market share gain in that segment.
Intel also plans to use the 10nm+ lines to bring Ice Lake to servers next year, trying to beat back AMD's attack there with their 64-core chips.
To give you an idea, for desktop (including OEM and commercial desktops for deployment in office spaces), AMD has 18% market share. For Mobile, AMD has 12%. For server, AMD has mid to high single digits. Desktop has the least money in it, but the most mind share. Mobile has more money in it, hence Intel dropping Ice Lake this year. Server is the largest margin, most money sector of all. That is what Intel is trying to combat the most, hence gluing together two 28 core CPUs for Cascade-AP and requiring water cooling. To be fair, Cray supercomputer created custom water blocks for AMD's Epyc server processors being used in Shasta and Frontier, so it isn't exactly like either company is not using water cooling in servers moving forward.
Intel is trying to keep AMD's growth in mobile limited to 6 points (meaning 18% of the market). They are trying to keep the server market share under 14-20% (that is going to be tough). So for desktop, with the smallest amount of money in it, they just want to keep the performance crown for mind share and to keep moving their highest-margin parts, even though the shortage of fab space is supposed to ease during Q3 (who knows if that is due to more mobile chips being made on 10nm, losing desktop market share which eases demand, getting a little extra capacity online from spending billions on more 14nm capacity, or what; probably a combination).
Now, if only competition on the GPU side would force down prices (long way to go on GPU side)...
And agreed, today has been a good day and lots of tech talk! -
I tried to lock OBS to 24 threads and 20 threads (I didn't try 16 or 12), but in each case it still "overloaded". By overloaded I mean OBS itself reported large amounts of dropped frames in the log file, and the resulting stream/video was very choppy.
I think it's a problem with more than a certain number of allotted threads, period. I know the number is above 12, because 5820Ks and 5960Xs worked great.
-
GN often goes off the rails when they don't understand something, and then reins it in and says they will test it when they "get it". They should have waited. AdoredTV the same. Wait until the parts and full information are in hand, verify the test parameters and setup with the vendor, then set it up and see if you can duplicate the results - if you can't, don't complain about it - ask the vendor for help setting it up as they have it set up.
Only when you have taken the time to set up and successfully replicate the setup and test - or failed to replicate it - can you then go public in a definitive way.
Also, complaining about benchmarks or test bench configurations that don't represent reality? Really? Of course benchmarks don't represent real usage, until they do. Benchmarking to test and find the limits of hardware is what it's all about. Right now the settings AMD used may not be practical - but they are valid settings and the results are valid as well; whether they are useful right now isn't the point of the test.
Technology changes, performance increases and capabilities widen - to the point where settings and configuration limits designed to cap capabilities at the limits of the servers - or limits of the feed / clients - can and will change based on improvements over time. Limits can be raised, additional load (post processing) can be added and resolution / FPS / bitrate to the client can be increased as network allows.
AMD is hitting the limits of the OS to fully utilize the increase in cores / threads, and to use the increase in performance overall in applications that have been designed for much lower core and lower performance hardware. Now that the hardware has much more headroom, of course you are going to start exploring exceeding the established limits by increasing the settings.
When I'm testing previous configurations with new hardware - more cores, more threads, more throughput, more memory speed, etc - I'll tweak the limit settings to see how far they can be increased before saturating the hardware - and where results degrade.
That's what AMD did: they took a well-established streaming configuration and widened the load further than the 9900K could support given the simultaneous gaming load, pushing the 9900K to its knees while staying within the capabilities of the 3900X.
First AMD shows us that the higher core count 3900x can keep up FPS to FPS with the 9900K in games:
Then AMD shows a "real world - taking streaming to the next level" demonstration of live gaming and streaming ( Starting at 28:55) - the settings are valid and details of the settings are given throughout AMD showing the live streaming and results:
AMD did nothing wrong by doing that and showing the results. Are the results reproducible? I assume that's how AMD set it up - publishing the settings in the presentation is a big clue that AMD *wants* reviewers to replicate their results. AMD should be ready to be more than helpful to reviewers wanting to replicate the test bench set up.
And, more to the point. AMD having pushed the limits of OBS settings, pushing the limits of the width of the stream past currently acceptable limits, should be used to encourage further experimentation to develop useful - visible - improvements using the additional performance over and above what 8c/16t CPU's like the 9900k can provide.
Now we need to listen to the useless bickering over how AMD doesn't have the completely useless RTX features that Nvidia can use to say "but AMD don't have our useless real-time ray-tracing hardware" - currently useful at improving gameplay in Zero games, and not available at all in 99.99999% of available shipping games.
Ray-tracing shouldn't be considered as a feature for comparison until it's in at least 10 games. What are we at now? 2 current games, and 2 retro-games that had high FPS even on crappy laptops 20 years ago?
It's a shame that spending a lot of money on hardware for useless ray-tracing smoke and mirrors seems to empower complaints about higher performance and less expensive hardware that doesn't come with that useless feature.
Hopefully as time goes by and new AMD CPU buyers buy matching Navi 10/20 GPU's over the next year, the ray-tracing noise will subside.
AMD should have continued to ignore ray-tracing. I hope AMD doesn't waste hardware resources on ray-tracing and continues to do what Nvidia should have done - deliver new higher performance GPU's at the same or better price than the previous generation. Comparing AMD GPU's to AMD GPU's that's what AMD has done. Good AMD. Bad Nvidia.
"Did NVIDIA Win?" Ray Tracing, Ft. Gordon of PC World
Gamers Nexus
Published on Jun 11, 2019
We join Gordon of PC World to talk about the ray tracing marketing battle between AMD & NVIDIA for RX 5700 XT launch.
Damn Surfer:
"The question is if you could have the performance of a 2080 TI w/o the RT for a cheaper price would you choose it? yes please, and that's why we won't get it cause that's what we want."
flioink:
""Did NVIDIA Win?" They sure made a lot of money from this (currently) useless tech that nobody uses. I'd say that is "winning" even though I personally dislike their scummy anti - consumer practices."
And that's really the rub: the fact is that RTX features provide zero useful value for gaming. In dark areas RTX ON makes things too dark; in bright areas RTX ON washes out or weirds out colors and textures. RTX costs too much in FPS load even if it provided some value.
The only value RTX provides is a brain block to rational thinking for some people stuck on "but it doesn't have ray-tracing!!"
Let's see what GPUs AMD releases for the PC when the AMD-powered Xbox and PS5 are close to shipping, so that those games can be ported with full features and performance on PC hardware. -
Leaked prices for Asus X570 motherboards reinforce high price levels Sweclockers.com | 11, 2019
MSI's previous statement about high prices for motherboards based on the X570 chipset is further reinforced by newly leaked price images for Asus X570 motherboards.
---------------------------------------------------------------------------
Alleged ASUS AMD X570 Motherboard Price-list Paints a Horror Story Techpowerup.com | June 11, 2019
A reliable source based in Taiwan shared with us the price-list of upcoming AMD Ryzen 3000 X570 chipset motherboards by leading manufacturer ASUS. These MSRP prices in U.S. Dollars paint a grim picture of these boards being significantly pricier than previous-generation motherboards based on the AMD X470 chipset. We already got hints of AMD X570 motherboards being pricey when MSI CEO Charles Chiang, who is known for not mincing his words in public, made it clear that the industry is no longer seeing AMD as a value-alternative second-fiddle brand to Intel, and that AMD will use its performance leadership to command premium pricing for these motherboards, even though across generations, pricing of AMD processors are going to remain flat. The Ryzen 7 3700X, for example, is launching at exactly the same $329 launch price as the Ryzen 7 2700X.
Even MSI CEO Chiang's statement couldn't prepare us for the prices we're seeing for the ASUS motherboard lineup. The cheapest AMD X570 motherboard from ASUS is the Prime X570-P, which is priced at USD $159.99. Its slightly bolstered twin, the TUF Gaming X570-Plus will go for $169.99. A variant of this exact board with integrated Wi-Fi 6 will be priced at $184.99. This is where things get crazy. The Prime X570-Pro, which is the spiritual-successor of the $150 Prime X470-Pro, will command a whopping $249.99 price-tag, or a $100 (66 percent) increase! The cheapest ROG (Republic of Gamers) product, the ROG Strix X570-F Gaming, will ship with an HEDT-like $299.99 price. This is where the supposed "high-end" segment begins.
Yeah, purchasing the new Ryzen 3950X and a quality board, plus filling it up with good memory, won't be cheap. Yes, a major change from AMD. -
https://www.newegg.com/p/pl?N=100007625 601311650&PageSize=96&order=PRICE
x470 motherboards are on sale now because the x570 motherboards are just around the corner, but even so there are 8 choices at or below $150, and 20+ higher in price.
The x470 motherboards were never that cheap. If you want a cheap Ryzen motherboard, you get the entry-level or mid-level models; the A320 and B450 motherboards are cheaper. The B550(?) / A520(?) models will come later.
There are 6 x470 boards between $150-$200, 11 from $200-$299, 3 from $300-$400, and one at $405.
There are usually a lot more x470 boards listed, so newegg must have run many of them out of stock before the x570 release.
The x570 motherboard prices look "normal", not inflated.
Whether the BOM for the exact same configuration of x470 versus x570 is a few dollars more or less, it's insignificant compared to the bump in performance.
Intel is far more overpriced, and always has been; it looks like AMD is holding the line on costs, which is great. -
Can't you see the change? Others can. Of course the boards are better, but the prices will follow as well. The old days of much cheaper boards are gone with the wind. Blown away.
-
The range of prices you posted for x570 motherboards mirrors the list prices of online listed x470 motherboards.
Especially high end ROG Motherboards, I think all of the ones I've set up have been over $300 and approaching $400 at times. And, that has been the ROG level motherboard price range for 10+ years, probably as long as they have been available.
If there are instances of price gouging by vendors for their most desirable models, it's to be expected at launch, as the retailers would gouge further anyway - the hardware vendors are learning to price their most desirable wares appropriately at launch so they get the benefit of the high prices exchanged - and if that happens, I don't think it's unique to x570.
I've also noticed in the past that popular motherboards models will jump in price between generations - taking advantage of the good will generated that the model name evokes, but then a new model also comes out to take over the previous price point.
You can be taken in by the vendor when relying on the "model name" from generation to generation to pick your motherboard. Check out all the models' specs around your price range, and then compare features and value against the now price-inflated model name you (and everyone else) liked from the last generation.
Don't buy the price-inflated x570 models, instead find better value for features models - maybe new model / vendors with the configuration you want and at a price you can afford.
There's no need to overpay for an x570 motherboard!
As always wait for the excitement at launch go down a bit, and wait for prices to drop - or find a more reasonably priced alternative to your #1 choice - sometimes there are as good or better alternatives that sit ignored with better pricing, just because some particular model(s) get hyped up by the media doesn't mean they are the best.
Also, wait for testing, wait for reviewers to pull apart the top 10-20 x570 motherboards and rate them for performance and value.
This is just the time to *not* buy based on the highest price - don't get suckered in to thinking if it's the most expensive it must be the best.
Additionally, the AMD Ryzen 9 3900x and 3950x are different animals requiring much more power delivery and thermal cooling, likely not supported on cheap x470's or cheap x570's. You'll need to find out which high-end, full-power x470 and x570 motherboards are recommended as supporting those 2 top-tier chips. The CPU's are $499 and $749; I wouldn't be surprised to find the motherboards that support them costing $250-$500+. -
I take notice of what MSI's CEO said. I expect he has first-hand info on what will come. And he won't be alone in putting up higher prices when everything is out.
But yeah, we won't see the full picture before most of it is pushed out and prices are set in stone.
Edit: Regarding high prices, I saw an article about how Apple has gone bananas with prices. But as long as people buy it anyway, this madness will continue. Only when people say enough is enough will we see a change. -
Additionally, the AMD Ryzen 9 3900x and 3950x are different animals requiring much more power delivery and cooling, likely not supported on cheap x470's or cheap x570's.
We will need to find out which high-end, full-power x470 and x570 motherboards are recommended as supporting those 2 top-tier chips.
Those AMD 3900x / 3950x CPU's are $499 and $749; I wouldn't be surprised to find that the motherboards that support them cost $250 - $500+.
We can still shop around, wait for reviews to tell us which are really delivering reliable power as promised, then decide which one to shop for - and wait for your chance to pick one up - and hope it's not been price jacked by the seller.
If you don't like the price of an x570 motherboard don't buy it. If they sit on the shelf and don't sell then the vendor / seller will need to rethink their pricing strategies.
The best time to price shop for x570 motherboards for the 3900x / 3950x might be around the time the x670 and Ryzen 4 is to be announced.
AMD sells the x570 chipset at the same price no matter which motherboard it is built into. That same x570 chipset is used throughout the whole lineup of x570 motherboards. If the cost of the motherboard varies and gives more profit then that profit goes to the motherboard maker, not AMD. btarunr needs to stop blaming AMD.
Metroid Posted on Jun 11th 2019, 3:10
"If the cheapest asus x570 motherboard is $160 then I consider it okay. The cheapest asus x470 I can find around is $132. The only issue here is that stupid fan.
https://www.newegg.com/asus-tuf-x470-plus-gaming/p/N82E16813119107
ASUS TUF X470-Plus Gaming = $132
ASUS Prime X570-P = $160 "
Tomorrow Posted on Jun 11th 2019, 3:24
"Indeed. These prices are mostly what i expected them to be. The only suprise here is the Formula at 700$. For what i ask? Waterblock? Aorus Xtreme is supposed to be 600$ and by the looks of it has more features, better connectivity and massively better VRM than Formula.
On the low end im actually surprised. I feared no X570 board will be under 200$ but if ASUS has some models under that the others will definitely have such models too because historically ASUS has been more expensive with their motherboards than others."
Metroid Posted on Jun 11th 2019, 3:28
"If Asus itself is starting at $160 then I might consider it, there might be worthy cheaper x570 motherboards after all, at the moment i'm eyeing that asrock b450M-pro for $80.
https://www.newegg.com/p/N82E16813157843 "
https://www.asrock.com/mb/AMD/B450M Pro4/#Overview
kanecvr Posted on Jun 11th 2019, 13:44
"We're talking about Asus here. Overpriced has been their middle name for a good few years now. I'll probably be going with whichever manufacturer provides a good quality vrm setup, decent bios and good price/performance ratio. That means Asrock, Biostar or Gigabyte."Last edited: Jun 12, 2019 -
Here is a nicely done shorter 15 minute version covering the best parts of the E3 2019 AMD Ryzen / Navi / Games announcements:
AMD Next Horizon Gaming at E3 2019 in 15 minutes
Engadget
Published on Jun 11, 2019
The biggest announcements from AMD at this year's show.
The new Ryzen 2 / Zen+ Ryzen + Radeon APU models were left out of the main AMD presentation; check out the video below for discussion of how cheap and plentiful these will be on B450 / B350, at a great price and with improved coolers.
Lots more AMD E3 2019 topics are discussed, and they've got good advice on a number of related points: don't preorder, wait for testing, and now's the time to buy previous-generation Ryzen + Radeon as well as used CPU's, motherboards, GPU's, etc., all coming up now and after the Ryzen 3000 / Navi products start hitting the shelves. It's a great time to build Ryzen / Navi / Vega / RX / APU computers.
...Is it 'The ONE'...!? (3950X & Zen 2 Discussion Ft. Wendell)
Tech YES City
Published on Jun 11, 2019
Wendell and I just finished watching the AMD Next Horizon Gaming event, where Dr. Lisa Su announced all the new Ryzen 3000 CPUs. This stack also includes the new $749 Ryzen 9 3950X - though with a release date staggered 2 months after the July 7th Launch of the 3900X, and also what I consider the 'value king' of the stack, the Ryzen 7 3700X - is it even worth waiting for the 16 core? Especially if you are a gamer? Let's discuss.
-
And GN did say they were told by motherboard vendors that pretty much all kits tested up to 4000 were drop-in. So cheap RAM should be viable this round, getting rid of the complaint of needing high-priced RAM, which is good considering B-die is dead (Samsung stopped producing it a little while ago).
As to Intel on desktop, I say Intel is letting it happen, but it is almost like the John Wick line of "John Wick will come for you and you will do nothing because you can do nothing." That is with the desktop being Vigo's son.
Intel just wasn't prepared for AMD to come back after Bulldozer (understandably), but Jim Keller worked his magic. Now, with the lag of using the main tech in APUs, AMD just will not make a lot of headway in mobile for a while. Intel, as I pointed out, is doing 10nm there first, then in servers, because that is where the money is. Every percent of market share there hurts Intel more than just giving up desktop to AMD (especially the DIY build-your-own market where, according to Mindfactory's sales, Zen 1 had about 3 months at 2:1 sales, then Zen+ has been around 2:1 since last fall, so 8-9 months, and now with Zen 2 dropping, Intel will lose the DIY market but has OEMs as a saving grace, which is where the majority of sales are, even though mind share is with DIY and enthusiasts). For mobile, if AMD could start putting the newest GPU on the APU, even with the CPU core lagging one generation behind starting with Zen 2, I really think they could make more headway, and they will need to, as Intel starts releasing better GPUs next year (we see it a little with Ice Lake-U, but...).
Also, at the beginning of May, Lisa Su said their entire mainstream stack has 50% margins or higher on each CPU! That means AMD will be more profitable. They also increased R&D from around $100M in 2014 and 2015 (2014 was higher than that, 2015 was in the $93M or $94M range, cannot remember) to $140M in 2017 and 2018. A 40% increase in R&D is nothing to sniff at. This is why I have hope for them, as they should be able to use that to help bring the GPU side back up while still driving on the CPU side (here's hoping Zen 2 does well and AMD pours tens of millions more into R&D moving forward, as well as cultivating relationships like they did with Apple, Sony, Microsoft, and Cray (now owned by HPE)).
But, as I mentioned to Papusan, Intel is only 2-3 years away from chiplets using 2.5D and 3D integration as well. As a counterpoint, TSMC was able to increase clocks while Intel had clock regression on 10nm. So there is a chance Intel got an IPC uplift (the footnotes for the 18% claim said not all mitigations were used when calculating 18% IPC from Skylake to Ice Lake, and they did change the L2 cache size around Skylake-X, although I forget if they incorporated that into the mainstream Coffee Lake/R chips or not); the question is how much. But we won't be able to test that yet (Ian Cutress at Anandtech said he would test the IPC claims when products were ready for market). The speed regression was mentioned in the Tech YES City video hmscott posted above.
Even with the higher IPC on Zen 2, you still need around a 4.5-4.6GHz all-core clock to match Intel's 9900K at 5GHz all-core, that still varies by task, and we don't know how the AMD chips OC. I mentioned some posts back (not sure if you saw it) that Intel also restarted the OC protection warranty, $30 for a one-time chip replacement, and created their own auto-OC software that will overclock and stress test for you up to a pre-defined thermal load, then back off a couple hundred MHz. Manual tuning will still be best, but that means any owners worried about OCing can now use Intel's utility and gain that performance while keeping their warranty intact. That puts pressure on AMD to deliver good OCing for the high-end enthusiasts in the DIY segment. But AMD also gives 50% more cores at the same price point, so there is that (only applicable if you multitask heavily or have apps that scale across cores). It's a very interesting time in PCs, and I haven't even touched on the server side.
They did say no MCE on Intel for any of the tests, that not all mitigations were applied which helped Intel's desktop chips to have higher performance, that they didn't include testing with the new MS scheduler, which sandbagged themselves, etc.
For OBS, you may have a point for REVIEWERS, but D2 isn't a reviewer, nor does Jim at AdoredTV often review hardware, although his new website does (so that may change overall). Because of that, speaking of past behavior on his part and asking for skepticism is fine, and the community asking for more information because we cannot figure out how it was done is fine. Reviewers have a direct channel to the vendor so that they can, and arguably should, reach out to figure it out. But considering the internetz does not has channelz, it is perfectly acceptable for them to question it and ask how it was done.
D2 even pointed out that x264 is preferred for color rendering, etc. So getting the documentation and helping the streaming community do better, along with moving toward needing only a single system instead of a capture setup or a second graphics card, is something that would help. It isn't saying it is impossible, necessarily; it is trying to figure out HOW.
AMD evidently did some work on it. But not just reviewers need to know how it was done. If AMD and reviewers know, that is great, but having a quick-start guide of OBS settings for using slow/slower in OBS would help the community significantly. As mentioned, OBS does use AVX. AMD greatly improved AVX, being able to execute a 256-bit AVX instruction in a single cycle, IIRC (see Ian Cutress's AnandTech article discussing the AVX changes). That likely contributed to the increased capability (along with the 50% more cores). But it isn't necessarily wrong for us to speculate.
Now, AMD took the criticism seen on Reddit and other places and incorporated it into their tech day press event so that reviewers could pass the answers along to consumers. That is AWESOME! Here is one more thing they need to pass along or publish on their own. Regardless of whether Intel got sandbagged on performance in the comparison (which would be misleading), NO ONE thinks Intel could run the slow preset while playing such a demanding title and still get 60 frames on the streaming side. So even if they botched Intel's settings, who cares; they still achieved a hell of a feat with the 60FPS streaming, and we want to know how, mainly because it ISN'T currently possible, and that WOULD be a selling point. D2 even talked about the problems they had setting up streaming on Intel's 18-core parts. If there is a workaround to increase streaming settings on these 12-core and 16-core parts, people on Intel platforms would love to know as well; then you would be comparing it to the 7960X/7980XE/9960X/9980XE, and regardless of their performance, AMD is giving you that at less than half of Intel's cost for the core count, meaning AMD still wins.
So this isn't something to get defensive about. This sort of skepticism should be expected from the community at large. It isn't a bad thing, just something that needs explaining. And since reviewers are embargoed until July 7th, AMD has a choice: 1) say nothing and let this continue, or 2) just give the information to a reviewer, like GN, which was critical of it, and let them play with those settings on existing hardware to show the performance of Intel's HEDT with those settings (in other words, allow them exclusive content before release without breaking embargo, which turns their skepticism around and lets the community see what was going on), while also letting them get the Intel comparison charts and testing done now so that once they get their sample and start testing, it is easier to show in day-one coverage. Personally, I would go with the second option, especially since GN does regular streaming tests (one of the few outlets). They have standardized their tests, so if they need to change their methodology, giving that info to them now, quick, fast, and in a hurry, is necessary so they can update that methodology; otherwise they may not get that performance, and you get last-minute back-and-forth between them and the vendor, compounding headaches. GN also often admits when their statements or comments are wrong, so hitting them up now would be a good way to address it.
And they have high level settings published. Now, if it is really plug and go, that would be amazing. But as D2 pointed out, above 8c/16t, it takes a little more manipulation of settings, and those are what the community is after. I guarantee he would try them out with our friends with 7980XEs just to see if and how they work on those. If they work, then you will have a person that is awesome at streaming and very active in the online community also vouching for it (even if the Intel chips don't hit the same as the AMD chips, which will leave some skepticism, it will be partly resolved now, then fully on the published reviews after embargo lifts, which will help drive sales at that point). This is the community engagement part that AMD is working on, and that Intel is working on with some of their recent hires. Overall, there is a HUGE opportunity here. Let's see if AMD takes it.
As I have said, AMD has shown us something new, and people are interested in it because there are benefits to it. But, inquiring minds....
When he looked at 2700/X overclocking on B450, he primarily recommended boards in the $170-190 range for the PCB quality, the VRM characteristics, and other board features. I'm sure he will do it again.
People will need to say enough is enough AND ACT COLLECTIVELY to not buy certain products in order to force prices down. But there also needs to be pressure elsewhere, like on the US gov't regarding ignorant trade policies that are harming consumers globally and shaking up established manufacturing and supply chains because the US realized it was losing its grasp on global economic hegemony. I digress. -
This will be noticeable in high-movement situations like racing games and fast-paced shooters with bright colours like Overwatch, where colour clarity may be slightly compromised in comparison to x264 medium, but in no way will it look bad. It's just one specific win for x264 medium over Turing NVENC, and I still don't consider x264 medium worth the investment over a standalone 1660 (or even just plainly using a 2070 or 2080 without a second-card investment) and a cheaper board/CPU combo in general.
Also, I want to point out that while streaming is viable with the know-how and a good enough setup, when it comes to recording, Turing NVENC wins 100% of the time, since you can use x264 medium-esque compression quality all the way up to 4K 60fps with at least a 50,000 kbps bitrate (likely higher, all the way up to 130,000 kbps; I haven't checked at what point the card's encoder starts chugging). Since x264's CPU load scales with resolution, bitrate, fps, and the visual fidelity of what's on-screen, doing the same on the CPU would require much lighter compression at a much higher bitrate, multiplying filesize by an order of magnitude and making compression after editing, or just tossing the file into HandBrake, less effective visually.
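To put that "order of magnitude" point in rough numbers, here's a quick back-of-the-envelope sketch in Python; the bitrates are just the figures tossed around above, treated as illustrative assumptions rather than measured limits:

```python
# Rough recording-size math: file size scales roughly linearly with bitrate,
# so chasing quality with a faster (lower-compression) preset at a much
# higher bitrate balloons the files. Bitrates below are illustrative.

def gigabytes_per_hour(bitrate_kbps: float) -> float:
    """Convert a video bitrate in kbps to approximate GB per hour of footage."""
    bits_per_hour = bitrate_kbps * 1000 * 3600   # kbps -> bits over one hour
    return bits_per_hour / 8 / 1e9               # bits -> bytes -> gigabytes

for label, kbps in [("10k streaming bitrate", 10_000),
                    ("50k NVENC recording", 50_000),
                    ("130k NVENC recording ceiling", 130_000)]:
    print(f"{label}: ~{gigabytes_per_hour(kbps):.1f} GB/hour")
```

So a recording pushed toward the 130k range ends up roughly an order of magnitude larger per hour than a 10k stream, which is exactly the editing/HandBrake pain described above.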
This is why I've spoken purely about the streaming scenario this entire time, and also why I still rate Turing NVENC so highly. I also have a much higher appreciation for Pascal NVENC since the Max Quality option showed up, because it now acts more like "faster" than "veryfast", which recovers a large chunk of previously missing quality. For the growing streamer market, allowing those settings to be used by programs like OBS is probably the best marketing Nvidia could have done, and it has routinely caused me to tell people to forget CPU encoding since it happened.
If someone only streams, though, and never records, CPU encoding does have a chance. But I would only truly consider it on the 16-core chip, and even then I'd want to see what AMD did to officially get OBS to use all that power. Otherwise it won't matter, and if a lot of people decide it's a good investment, actually attempt to use medium or slow, and it chugs out like it does on 7980XE chips, this is going to bite AMD HARD, since they specifically advertised 1080p 60fps "slow" streaming at 10k bitrate, which is no small task.
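For a sense of what that advertised target actually asks of the encoder, here is a minimal sketch of an equivalent software-encode invocation via ffmpeg. AMD's demo used OBS, not ffmpeg, so this is only meant to show the knobs involved (preset, bitrate, keyframe interval); the input file, stream URL, and audio settings are placeholders I've assumed for illustration.

```python
# Minimal sketch of a 1080p60 x264 "slow" stream at 10k bitrate, expressed
# as an ffmpeg invocation. Input source and RTMP endpoint are placeholders.
import subprocess

STREAM_URL = "rtmp://live.example.com/app/STREAM_KEY"  # placeholder, not a real endpoint

cmd = [
    "ffmpeg",
    "-i", "gameplay_capture.mkv",      # stand-in for a live capture source
    "-c:v", "libx264",                 # CPU (software) encoder
    "-preset", "slow",                 # the preset AMD advertised in the demo
    "-b:v", "10000k",                  # 10k bitrate target
    "-maxrate", "10000k", "-bufsize", "20000k",
    "-s", "1920x1080", "-r", "60",     # 1080p60 output
    "-g", "120",                       # 2-second keyframe interval at 60 fps
    "-c:a", "aac", "-b:a", "160k",
    "-f", "flv", STREAM_URL,
]
subprocess.run(cmd, check=True)
```

Swapping "-c:v libx264 -preset slow" for a hardware encoder such as "-c:v h264_nvenc" is essentially the CPU-versus-fixed-function trade being debated in this thread.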
In the end, I sincerely think that HW acceleration is going to be the way to go. Just like ASIC miners became necessary for Bitcoin because of their specialization, specialized encoding blocks will prove better as time goes by, due to the power they save (you can run 1080/60/slow on a 16-core if you really know what you're doing, but that thing is going to drink roughly 300W and burp, whereas a 1660 is going to sip 50W, and only because it's actually clocked up, not because the encoder itself draws much), the ease of setup, and the potential for integration into many things. When x265 becomes commonplace in about 8 years and compression efficiency doubles, what do you think is going to happen to encoders capable of hammering out x264 medium quality? Streams would get roughly double the clarity, and streaming media consumption would simultaneously look better and consume less bandwidth, perhaps allowing everyone to run 1080/60 at 6k bitrate and 1440/60 at 10k bitrate, or maybe even better depending!
I also really need a desktop xD I want these encoders, god cow it! -
And I do agree, ASICs are where this is going, but that comes back to chiplets. Moving forward, because you are using chiplets, you can incorporate ASICs onto the package, which allows for acceleration of specific tasks. I previously made the point that while AMD is mastering disaggregation into chiplets, Nvidia is working on specific dedicated cores, like Tensor and RT cores. Those will likely become their own chiplets at some point after GPUs themselves are disaggregated into multiple chiplets.
Intel is working on AI ASIC chips and on chiplets as well, then integrating them with Foveros (3D packaging). The heat issue is significant enough that I don't see 3D stacking moving to mainstream or server chips for years. But Intel also has experience mounting on passive interposers. You heard me go on about AMD's whitepaper on active interposers. All Intel would need to do is create an active, rather than passive, interposer and deal with the issues of packaging chiplets on an interposer using smaller contacts (for 7nm, AMD used copper pillars to mount the chiplets to the MCM package).
Intel gave their own details on how to accomplish the mounting when they talked about Foveros. So the entire industry is already heading in the direction you suggested.
But, that is a great point. They all should be working on hardware ASICs to accelerate BOTH x264 and x265. Also, I was not aware NVENC could do 4K@60. Either way, we will find out soon enough what happens with encoding power consumption.
I'm still stuck on a 980 Ti, but I know I can see a visible difference when I use the CPU versus the GPU for transcoding (I try to stick with remuxing as much as possible, but there are times a transcode needs to be done). There is always the question of where the line should be drawn on diminishing returns and where the trade-offs are (such as accepting slightly worse color reproduction in exchange for higher resolution).
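For anyone unfamiliar with the remux-versus-transcode distinction being made here, a rough sketch follows; the filenames are placeholders and the ffmpeg presets are generic choices I've assumed for illustration, not a recommendation.

```python
# Remux vs. transcode, roughly illustrated. A remux only rewraps the existing
# streams, so there is no quality loss and almost no CPU/GPU work; a transcode
# re-encodes the video, which is where the CPU-vs-NVENC difference shows up.
import subprocess

# Remux: copy streams into a new container, no re-encode.
subprocess.run(["ffmpeg", "-i", "input.mkv", "-c", "copy", "output.mp4"], check=True)

# CPU transcode with x264 (slower, generally better quality per bit).
subprocess.run(["ffmpeg", "-i", "input.mkv",
                "-c:v", "libx264", "-preset", "medium", "-crf", "18",
                "-c:a", "copy", "cpu_transcode.mkv"], check=True)

# GPU transcode with NVENC (much faster, far lighter on the CPU).
subprocess.run(["ffmpeg", "-i", "input.mkv",
                "-c:v", "h264_nvenc",
                "-c:a", "copy", "nvenc_transcode.mkv"], check=True)
```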
As to needing a desktop, yes, yes you do! -
As for NVENC, here's how it lays out by generation (summed up in the sketch after the list):
Kepler: mostly x264 veryfast, 1080/60 with a 50k bitrate limit (I'm unsure if it goes higher resolution; I can no longer test).
Maxwell: mostly x264 veryfast (slightly sharper than Kepler), easier to use higher quality presets, at least 1080p, 60fps at all resolutions, and a 130k bitrate limit.
Pascal: roughly x264 faster, 4K 60fps, at least 130k bitrate.
Volta: closer to x264 fast, 4K 60fps, at least 130k bitrate.
Turing: mostly x264 medium, 4K 60fps (possibly 8K, unsure), at least 130k bitrate.
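And here is that breakdown restated as a small lookup table, purely for convenience; the "x264 equivalent" and resolution values are the rough eyeball comparisons from the list above, not official Nvidia specifications.

```python
# The generational breakdown above, restated as data. The x264-equivalent
# entries are the rough visual comparisons from the list, nothing official.
NVENC_GENERATIONS = {
    "Kepler":  {"x264_equivalent": "veryfast", "max_tested": "1080p60",          "bitrate_limit_kbps": 50_000},
    "Maxwell": {"x264_equivalent": "veryfast", "max_tested": "at least 1080p60", "bitrate_limit_kbps": 130_000},
    "Pascal":  {"x264_equivalent": "faster",   "max_tested": "4K60",             "bitrate_limit_kbps": 130_000},
    "Volta":   {"x264_equivalent": "fast",     "max_tested": "4K60",             "bitrate_limit_kbps": 130_000},
    "Turing":  {"x264_equivalent": "medium",   "max_tested": "4K60",             "bitrate_limit_kbps": 130_000},
}

def summarize(gen: str) -> str:
    info = NVENC_GENERATIONS[gen]
    return (f"{gen}: roughly x264 {info['x264_equivalent']}, "
            f"up to {info['max_tested']} at <= {info['bitrate_limit_kbps']:,} kbps")

print(summarize("Turing"))
```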
To my knowledge, AMD VCE has not once improved its x264-class offerings; it is worse than Kepler NVENC in maximum bitrate and similar in visual quality. It is arguably at the level of Intel Quick Sync in visuals, and x264 veryfast is easily considered an (albeit small) step up.
Quicksync is lul
Sent from my OnePlus 6T using a bionic coconut -
AMD Navi Radeon Display Engine and Multimedia Engine Detailed
by btarunr
https://www.techpowerup.com/256481/amd-navi-radeon-display-engine-and-multimedia-engine-detailed
"Two of the often overlooked components of a new graphics architecture are the I/O and multimedia capabilities. With its Radeon RX 5700-series "Navi 10" graphics processor, AMD gave the two their first major update in over two years, with the new Radeon Display Engine, and Radeon Multimedia Engine. The Display Engine is a hardware component that handles the graphics card's physical display I/O. The Radeon Multimedia Engine is a set of fixed-function hardware that provides CODEC-specific acceleration to offload your CPU.
The Navi Radeon Display Engine features an updated DisplayPort 1.4 HDR implementation that's capable of handling 8K displays at 60 Hz with a single cable. It can also handle 4K UHD at 240 Hz with a single cable. These also include HDR and 10-bit color. It achieves this by implementing DSC 1.2a (Display Stream Compression). The display controller also supports 30 bpp internal color-depth. The HDMI implementation remains HDMI 2.0. The multi-plane overlay protocol (MPO) implementation now supports a low-power mode. This should, in theory, reduce the GPU's power draw when idling or playing back video.
The Radeon Multimedia Engine is updated with support for more CODECs. The "Navi 10" GPU provides hardware-acceleration for decoding VP9 video at formats of up to 4K @ 90 fps (frames per second), or 8K @ 24 fps. The H.265 HEVC implementation is more substantial, with hardware-accelerated encoding of 4K at frame-rates of up to 60 fps. H.265 HEVC decoding is accelerated at 8K @ 24 fps, and 4K @ 90 fps, and 1080p at up to 360 fps. H.264 MPEG4 encoding gets a boost of 4K @ 150 fps, and 1080p @ 600 fps decoding; and 4K @ 90 fps and 1080p @ 150 fps encoding." -
Seems hmscott came in with more info on AMD's GPU-side changes. It still needs to be independently confirmed, but if true, it would be nice.
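As a sanity check on the single-cable 8K60 / 4K240 claim in the quoted piece, the raw bandwidth math works out roughly like this (blanking overhead is ignored, so treat the numbers as approximate):

```python
# Back-of-the-envelope check on the single-cable 8K60 / 4K240 claims above.
# DisplayPort 1.4 (HBR3) carries about 25.92 Gbit/s of video payload after
# 8b/10b encoding, which is far short of uncompressed 8K60 -- hence DSC.
def uncompressed_gbps(width: int, height: int, refresh_hz: int, bits_per_pixel: int) -> float:
    return width * height * refresh_hz * bits_per_pixel / 1e9

dp14_payload_gbps = 4 * 8.1 * 8 / 10   # 4 lanes x 8.1 Gbit/s, minus 8b/10b overhead ~= 25.92

for name, w, h, hz in [("8K60", 7680, 4320, 60), ("4K240", 3840, 2160, 240)]:
    need = uncompressed_gbps(w, h, hz, 30)   # 30 bpp = 10-bit-per-channel RGB
    print(f"{name}: ~{need:.1f} Gbit/s uncompressed vs {dp14_payload_gbps:.2f} Gbit/s "
          f"available -> needs ~{need / dp14_payload_gbps:.1f}:1 compression (DSC)")
```

Both targets land around 60 Gbit/s uncompressed, so a bit over 2:1 compression is needed, which is comfortably within what DSC is designed to do.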
And Quicksync has always been lul. Even their hardware acceleration in Photoshop/Premiere was shown by Puget Systems to cause some blurriness, etc. -
At first I thought this Geekbench score was from the 5GHz LN2 run, but that score was higher, at 65,499 points.
Perhaps this one is on regular air / water cooling?
AMD Ryzen 9 3950X with 61K points is the Fastest Processor on Geekbench, Destroys Intel’s 18-Core i9-9980XE
By Areej - June 12, 2019
https://www.techquila.co.in/amd-ryzen-9-3950x-vs-intels-18-core-i9/
"So, as AMD’s new 16-core Zen 2 flagship has now been officially launched, we now know for a fact that the Ryzen 3000 lineup won’t be limited to the 12-core 3900X.
According to AMD’s first-party benchmarks, the Ryzen 9 3950X is faster than Intel’s i9-9960X, but as per a new Geekbench score, the 7nm AMD flagship might be even more powerful in certain scenarios. It scores a mammoth 61,072 points in the multi-core test which is the highest for any consumer CPU, period.
The closest Intel competitor is the 18-core i9-9980XE (with 46618 points) which gets left far behind with a margin of more than 14K points, all the while costing more than twice as much (Ryzen 3950X~$749, i9-9980XE~$2000+).
The best part is that this is an early sample, slower than the final chips that will hit the market. We are looking at a base clock of 3.29GHz and a boost of 4.29, while the 3950X in its final state runs at 4.7GHz when under load.
Despite that, however, the single core performance and the IPC of the 16-core AMD flagship is higher than the 9th Gen Intel lineup. Only the higher clocked i7s and i9s manage to match it in the single-core test.
Furthermore, the chip is running on an X470 board and the cache size along with the Matisse codename confirm that this indeed is the Ryzen 9 3950X, and not a future Threadripper part."
More at the link above... -
Anyway, we will see eventually. If only those GN people knew I existed so they could contact me for testing guidelines for streaming and such.
Sent from my OnePlus 6T using a bionic coconut -
Also, I will not count on the 4.7GHz until I know it can be sustained on more than a single core. You raise a good point that the ES chip's 4.3GHz may have been a boost clock rather than all-core, but my bet is that it was a 4.3GHz all-core OC on an ES, and if so, those scores are nice!
But due to the lighter MT workloads in GB4 versus GB3, HWBot awards points for ST performance in GB4, while GB3 is used for MT points since it has heavier MT workloads (which shows that Intel's influence changed the GB4 workloads, yet the enthusiast OC community saw right through it). With that said, AMD is still lagging in GB4 ST performance, BUT there are significant improvements compared to last gen (and with the 4.7GHz boost, they may have closed the gap, considering this run is 400MHz lower than the final silicon for ST). So I'm expecting to see over 6,000 points in GB4 single thread once the final chip releases, if that holds. Time will tell.
Also, I'm wondering why there are almost no gains for the LN2 score versus this one, when this run is 4.3GHz and that one is 5.35GHz. I'm betting they pushed max core speed but didn't tune the memory for higher speeds, which limited the IF and increased latency slightly. But that is speculation.
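Putting rough numbers on "almost no gains", using the unverified scores and clocks mentioned in these posts:

```python
# Rough scaling check on the two Geekbench multi-core scores discussed above.
# Both scores and clocks are the unverified leak figures, so this only
# illustrates the argument rather than confirming anything.
air_score, air_clock_ghz = 61_072, 4.3     # the X470 ES result
ln2_score, ln2_clock_ghz = 65_499, 5.35    # the reported LN2 run

clock_gain = ln2_clock_ghz / air_clock_ghz - 1
score_gain = ln2_score / air_score - 1

print(f"Clock increase: {clock_gain:.1%}")   # roughly +24% frequency
print(f"Score increase: {score_gain:.1%}")   # roughly +7% points
# A score gain far below the clock gain is consistent with something other
# than core frequency (memory / Infinity Fabric tuning) holding the run back.
```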
Either way, looking good.
Edit: pretty sure it was the same Richie... -
The great news about all of this AMD Ryzen / Navi attention is that reviewers are going to climb up out of their Intel / Nvidia holes and start investing time and hardware in upgrading their work systems to AMD Ryzen / Navi. Rather than just reviewing AMD hardware "cold", they'll be invested in getting it optimized for their own use, and hopefully they'll apply what they've learned to reviews and share it with their viewers / readers.
Also, with all of this AMD Ryzen / Navi success and the growing number of owners, maybe Adobe will finally update their applications to be optimized for AMD.
Otherwise, Adobe's competitors' ranks will be swelling at the same time Adobe stops getting those juicy license renewals and monthly / quarterly payments. It really is ridiculous how poorly Adobe has treated AMD owners by refusing to fix problems and improve performance. But, as per HardwareCanucks, this may be true of all Adobe customers, not just AMD owners.
Switching BACK To AMD Ryzen - A NEW Threadripper PC Build!
HardwareCanucks
Published on Jun 12, 2019
A lot of the news focus is on Zen 2 and the upcoming Ryzen 3000-series CPUs, but we decided to move our primary editing workstations away from Intel and towards AMD Ryzen Threadripper 2. What a change from our current choice of Intel! That decision was prompted by the removal of Adobe Premiere from our daily workflows. With that in mind, let's take a look at this epic editing / gaming PC build and how it performs.
Why didn't they build a nice 12-core Ryzen 3000 system? Or wait for a Threadripper 3 announcement? Maybe they had the CPU and parts left over from reviews and were getting tired of all the Adobe Premiere crashes, and the Ryzen 3000 announcements got them thinking about what they could do right now to stop the pain: switch back to Threadripper 2 + DaVinci Resolve! -
Now, the reason for doing this now is to pick up a cheap CPU and create content, while also getting a production machine. They will review the new Zen 2 series chips doing the same work, but this shows that picking up a TR, then later upgrading, is viable.
What I'd like AMD to do is go on HWBot and tick off where the global points are for their hardware. (Also, as community outreach, working with HWBot would be nice; finances are a bit tight for them, and it really is squeezing them hard as an organization.) I'm also hoping the RTC bug is gone on these new CPUs. Considering their close work with MS, hopefully that is resolved in the new hardware, like the scheduler fix is. -
https://www.pcgamesn.com/nvidia/nvidia-anti-lag-sharpening-amd-radeon-alternative
Nvidia says it has offered anti-lag settings like AMD’s for “more than a decade” -
AMD’s Secure Processor Firmware Is Now Explorable Thanks to New Tool
Joel Hruska on June 7, 2019 at 9:20 am
https://www.extremetech.com/computi...firmware-is-now-explorable-thanks-to-new-tool
https://github.com/cwerling/psptool
I answered here as it was OT for that thread. There are no reported vulnerabilities, yet.
Edit: Updated Security Vulnerabilities Zen vs the new Zen 2 / Ryzen 3:
https://www.reddit.com/r/Amd/comments/c09tz7/security_is_very_dank_these_days/ -
AMD Ryzen 3000 "Matisse" I/O Controller Die 12nm, Not 14nm
Techpowerup.com | June 13, 2019
AMD Ryzen 3000 "Matisse" processors are multi-chip modules of two kinds of dies - one or two 7 nm 8-core "Zen 2" CPU chiplets, and an I/O controller die that packs the processor's dual-channel DDR4 memory controller, PCI-Express gen 4.0 root-complex, and an integrated southbridge that puts out some SoC I/O, such as two SATA 6 Gbps ports, four USB 3.1 Gen 2 ports, LPCIO (ISA), and SPI (for the UEFI BIOS ROM chip). It was earlier reported that while the Zen 2 CPU core chiplets are built on 7 nm process, the I/O controller is 14 nm. We have confirmation now that the I/O controller die is built on the more advanced 12 nm process, likely GlobalFoundries 12LP. This is the same process on which AMD builds its "Pinnacle Ridge" and "Polaris 30" chips. The 7 nm "Zen 2" CPU chiplets are made at TSMC. -
I have very little interest. Now, if there were a 16-core that could score 5,000 in CBR15, I might have dumped TR, but that is not what this is.
-
And supposedly those will make the mainstream 16 core look meh!
Yay for us TR owners!!! -
It's funny, I still get people saying, "Hey, what happened to my FPS? Is there something wrong with my GPU? It's at 45FPS!" What did you do differently? "Got Metro Exodus and turned on RTX." Well....
Many people don't really understand the ramifications of what they want - they've had RTX blurted at them for months, so they think they need it. They don't know what RTX entails: losing performance in the form of an FPS drop of roughly 50%, more or less, and that unless you have a 2080 / 2080 Ti the frame rate drop may bring the average down below 60 FPS.
As they notice the other negatives of RTX / DLSS and find those features undesirable in normal gameplay - that gameplay is better with RTX / DLSS turned off - it adds to the owners' disenchantment with Nvidia, whether they admit it or not. But it's too late: they've already blown their money based on Nvidia's RTX BS-driven manipulations, and that's "How Customers are Meant to be Played".
AMD has decided to skip real-time ray-tracing for now, as it is too expensive in its current custom hardware implementations, and there is no real benefit to delivering real-time ray-tracing to every consumer - there aren't enough games.
Delivering acceptable real-time ray-tracing performance makes the GPU so expensive that it limits the market. AMD might as well seed developers with the development GPUs they need to build the feature support directly where it counts, which is what AMD / Microsoft were doing for ray-tracing (DXR) even before Nvidia tried to take ray-tracing for themselves by branding it as "RTX".
Jayz rambles around for a while, but eventually gets there, and while I think it could be stated better, people seem to respond to his delivery, and he asked AMD about this topic before he made this video:
Why did AMD skip Ray Tracing on Navi cards?
JayzTwoCents
Published on Jun 14, 2019
When NVIDIA launched the RTX series GPUs, most people complained that it was an expensive feature that nobody wanted... but when AMD launched Navi, people complained that it DIDN'T have real-time ray tracing capabilities... so why didn't AMD include it?
-
There are X570 vs X470/X370 questions to be answered. IDK if it's possible to answer them all before X570 ships (unannounced surprises), but GN gives it a shot, which can help us plan for what we need to get the best performance from the new Ryzen 3000 CPUs and Navi GPUs:
AMD X570 vs. X470, X370 Chipset Differences, Lanes, Specs, & Comparison
Gamers Nexus
Published on Jun 14, 2019
Explaining the AMD X570 chipset differences versus X470 and X370, like PCIe lane count, USB3.2 devices, 10Gbps USB, and more. AMD's X570 chipset will accompany the Ryzen 3000 series motherboards at launch, but the persistence of the AM4 socket means the new CPUs are compatible with the old chipsets. To get people up to speed, aside from the PCIe Gen4 differences, we wanted to walk through the chipset differences and comparison against the previous chipsets.
-
I was also hoping for a 7nm I/O die by Zen 3; surely they will have worked out how to get it to 7nm without other penalties. I'm after the 7nm power efficiency on the I/O die and 7nm+/6nm on the cores, along with a 7nm chipset.