The 1950X is the only CPU that hasn't paid for itself, as it was a personal expense, versus my HEDT Intel procs, which were business expenditures. That's why I still have my 6950X, 6850K, and 7980XE. My hope was to use the 1950X in my main editing rig for my (startup) YT/media company. We'll see. Anyway, I'll shut up about that now and go back into hiding. Hahaha.
-
-
Well, I wish you luck in your endeavor! And let me know when things get going so I can check out the channel.
And I definitely understand, as the 7980XE beats the 1950X by a good ways. Luckily, the new TRs really kick up performance. Are you a Premiere or Resolve man? And do you prefer GPU acceleration? What card do you have? Did you see the cost-effective $2200 render rig build with the 3950X that GN put in a video this evening?
-
Well, the 7980XE will be making its way to someone as soon as I get my new build up and running. I am a Resolve man, even though I was trained on Premiere for years. Resolve actually uses our hardware, including GPUs (still a reason to use SLI, ha!), and most importantly, doesn't require a subscription. I haven't watched the video yet, but I'm going with TRX40 due to the crazy number of lanes that I will happily put to use. I'll certainly let everyone know. Just making sure I have a firm foundation. I already have two cameras ready for action, a Panasonic GH5 and an S1H.
-
Since you are using resolve, have you considered a 1P Epyc platform? Don't know how many GPUs or lanes you plan on, but if the bulk is done on GPUs, there may be some advantages, depending on your situation.
-
I considered LGA3647 and Epyc 2nd Gen. What pushed me to TRX40 is the ability to overclock (even though I won't be able to push it as far as I'd like), though the 128 lanes do appeal to me. I am going back to one workstation and one gaming rig. So that means much of what I have is going up for sale, including the KPx cards.
-
-
I really wanna try Resolve so I can move away from relying only on macOS.
I am so used to Final Cut Pro X, and it works so well for me. But it limits me to my MacBook only (yes, I know about Hackintosh).
I have my Area-51m + my desktop with way more processing power, but Resolve will take a while for me to learn.
Didn't like Adobe Premiere when I tried it a long time ago.
-
https://hwbot.org/submission/429451...oom_geforce_gtx_980_(notebook_mxm)_8639_marks
https://hwbot.org/submission/429451...oom_geforce_gtx_980_(notebook_mxm)_4427_marks
https://hwbot.org/submission/429451...oom_geforce_gtx_980_(notebook_mxm)_1586_marks
-
4.0 GHz stable this time, also 4.3 GHz single-core.
Almost beating my 9750H in the AW m15 R2 I tested.
Next goal: 4.3 GHz all-core, then 4.5 GHz.
Single skinny 120mm radiator in push-pull. Temps ain't too bad.
I thought Haswell was much further behind due to IPC; guess not.
-
Resolve isn't as easy to learn as Final Cut Pro X or Adobe Premiere, especially its node system. However, there are numerous tutorials out there, and Blackmagic has their very own (FREE) training/guides. Download the free edition and test it out. The free edition is as feature-packed as Final Cut Pro X and Adobe Premiere. When you're ready, you can get the Studio edition... if you ever need the advanced functionality (most people don't).
EDIT: I will say that Resolve is one of the few editors that will take full advantage of your hardware, so much so that they have a benchmark app. At the very least, test that out.
-
Okay guys, need some help... so I bought a 16GB (2x8GB) 3200 MHz DDR4 kit during Black Friday, and my desktop already had 2x8GB of 2666 MHz DDR4 RAM. I bought the exact same Corsair LPX memory but at a higher frequency (it was cheaper buying 3200 MHz over any of the others).
I was running XMP Profile 2 - 2666 MHz @ 1.35V before. It also had a 2400 MHz @ 1.20V XMP profile.
My new RAM only has a 3200 MHz @ 1.35V XMP profile.
When all four sticks are installed together, I run them at 2133 MHz (no XMP) and they are detected fine as quad-channel, etc.
When I look up the XMP profile, I can only use 2400 MHz @ 1.35V, which is not a profile on either kit. Windows only sees 16GB using this, but CPU-Z & HWiNFO show 32GB.
What would be the easiest way to run 2666 MHz @ 1.35V again like before? I'm sure the 3200 MHz sticks should have no issue downclocking and running those timings.
Or is there an app that shows me all the required timings for the BIOS?
Thanks
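For what it's worth, the downclocking intuition above is easy to sanity-check: convert the timings to nanoseconds, which is what the DRAM actually cares about. The same CAS latency at a lower transfer rate is looser in absolute time, so running 3200 MHz sticks at 2666 MHz is the easy direction. A quick sketch in Python (the CL16 value is illustrative, not read from these sticks' SPD):

# DDR4-3200 means 3200 MT/s, i.e. a 1600 MHz clock (two transfers per cycle).
def cas_ns(cl, mt_per_s):
    cycle_ns = 1000.0 / (mt_per_s / 2.0)  # one clock cycle in ns
    return cl * cycle_ns

print(cas_ns(16, 3200))  # ~10.0 ns at the rated speed
print(cas_ns(16, 2666))  # ~12.0 ns at 2666 MT/s, an easier target in absolute time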
Yeah, I played with it once already; need to play with it more. Cross-platform is way more useful.
-
I've had that issue before on X99 with my ASRock board, and I'm fairly certain with the same LPX memory. I also have 4x8GB of those 3200 MHz sticks, but I run them at 2666 MHz CL12, as anything above this causes the system to shut off quad channel and run dual, which obviously defeats the purpose. With my board I am able to select XMP, then set the speed to 2666 MHz, and then I boot with all timings on AUTO. This resolves the issue for me, and then I simply go back and slowly tune the timings down and it works fine. I have found that if I go too tight on the timings it will again drop back to dual channel. So it's either the ****ty IMC of Haswell-E, or possibly some of the sticks can work at said speed and the other pair can't.
I would try XMP, then manually set 2666 MHz and Auto timings, and try to boot.
Try no XMP, 2666 MHz manually set, and Auto timings.
If either of those works, manually tune the timings.
https://www.asus.com/us/Motherboards/X99-E/ -- Also make sure you're setting your DIMMs up correctly per the manual or the picture shown here.
-
I tried XMP only - 16GB dual channel in the OS, 32GB dual channel in CPU-Z/HWiNFO.
I tried Auto 2666 MHz only; it doesn't boot, orange LED on the MB.
I tried XMP + 2666 MHz and everything else on AUTO; orange LED on the MB.
Maybe my 5820K IMC is even worse than yours lol.
The first thing I did was check the MB manual to see which slot configuration I'm supposed to use.
Maybe this will change when I get the Xeon E5-1660 v3.
Edit: Should I try switching the places of the RAM?
Right now the 2666 MHz sticks are in the first-priority slots and the 3200 MHz sticks are in the second-priority slots. Should I try swapping them around?
-
-
I'm in the same boat. I was hoping to have a drop-in 3960X to bolster my scores, but the need for a new MB ended those plans.
I'll be getting new GPUs in 2020, but without a CPU that can push them, whether a Zen 2 3000 series chip or a 9900K/S or 18-core chip from Intel, the GPU won't really change much.
At least in 2021 we'll get DDR5 with double the memory bandwidth, plus 7nm Intel chips (if they get their process working) or 5nm Zen 4 (if they can actually make enough for the market, and depending on whether they continue with the 15-month cadence, which would push those chips to 2022 and upset me). Those should be considerable jumps over our current hardware. I'm pretty sure Intel's GPUs won't be competitive at the high end next year, leaving AMD big Navi and Nvidia Ampere (a 7nm 3080 Ti, which should be cheaper due to the die shrink and the maturity of the 7nm node at that time). If that happens, picking up the GPU next year makes sense, then a full platform rebuild the year after.
Too bad my analysis on boost still stands... -
Yeah, I am really sorry that did not happen for guys like you and @Rage Set, seeing that you had an investment that was anticipated to provide an upgrade path for at least one more generation. I know it was a big let down and I would have been equally disappointed. I can understand if they truly needed to make architectural changes that eliminated compatibility to allow the product to function properly, but some communication earlier in the program may have lessened the disappointment for those affected. After all, TR owners were the biggest spenders, having the most at stake, and probably their most devoted customers. Finding out that you're going to be left high and dry in the 11th hour wasn't a cool move, even if the circumstances leading to that were unavoidable. A little proactive communication goes a long way in terms of maintaining good will.
-
In my opinion, it was all about the VRM and power delivery. When you move to supporting chips from 24 cores up to 64, an 8x60A VRM looks shaky, and with mild OCs it could blow up without sufficient cooling. They tried saying it was the I/O die and memory channels, but that wasn't an issue with Epyc or mainstream.
I had $1400 set aside for a new chip and maybe a MB. So they lost that sale.
I just hope they listen to my suggestion and announce that HEDT is their new playground, and that they will likely only support two generations at a time, except in extreme cases where compatibility is limited to one year OR where they can support three+ generations.
It's not that they did it, it's that they didn't properly prime the market to be ready for it. People understand that with Intel you likely get one gen on mainstream, two for HEDT and servers, except when they have manufacturing problems like Broadwell and all the Skylake reduxes.
But, moving forward, I've learned to only recommend AMD on the basis of their current platform. Right now, Zen 2 is good and I still have to recommend them. For the price point, the only two Intel chips I recommend are the 9900K/S for overclocking and gaming, and the 18-core variants at $1K if you need the memory bandwidth or the PCIe lanes. I'm happy the 3950X can compete with and beat the 18-core chips in some cases, but the memory and I/O are a non-starter for me on that platform.
So I'm just going to skip this round and continue to enjoy the slow software optimizations, which have helped a bit over time. 16 cores is a lot of power, as you know. I also posted my grievances in the AMD forum and had a good conversation about the I/O, etc. I also posted what I'd like to see from their designs moving forward; AMD is taking input on that for Zen 4 and 5 right now.
Also, did you see that TSMC's yields on 5nm are already reaching about the same as 7nm? That doesn't go into volume until next year and doesn't get used by AMD until 2021 (the first three companies for that process have already been announced as Apple, AMD, and HiSilicon, a Chinese company). Nvidia historically waits for a node to mature before jumping on it, while AMD has always shrunk first. But Nvidia is great at optimizing architecture, so they don't always need the cutting-edge node to compete, as seen throughout their history with AMD/ATI.
-
I should hopefully have a few more benches to add before the end of the year. I'm excited to see what you can do with the 7980XE with a single card, and hopefully I can do some things with the 7960X with dual cards (no way I'll catch you with a single card). I do have the 3960X incoming, and possibly the 3970X shortly after.
-
-
Very nice, I'll have to give that a try... will keep you posted.
I showed a buddy a screenshot of the power draws during Quake II and he thought my GPU was broken lol... then I remembered that in the normal realm, seeing upper 200s is what most people are used to lol.
Very clean bro... thumbs up.
Neato... yea I love what they did with this classic game... it brings so much life to it once again.
Very nice bro... At the moment I'm building a 3960X beast for an editing and workstation rig... 128GB RAM, water cooled, and a 2080 Ti, all on a custom loop. It's coming along pretty nicely... let me know how things go on your end...
I've run Plex for ages... it was great, but they started getting too intrusive with user privacy, etc... so I'm looking at FreeNAS... do you have much experience with it?
... It has been crazy busy...life, family and this time of the year... I totally understand.
Good stuff!
Looking to finally get around to some more benching... everything is almost set up... the only delay is life lol. @Mr. Fox btw, have you pushed the 2080 Ti further in Fire Strike? What has been your highest graphics score to date? 43K?
-
Plex on FreeNAS is why I went to Linux and then to Windows. Now, FreeNAS is pretty good for storage, but it has some quirks that drove me up the friggin wall! Not only that, miniDLNA is not really supported anymore, so I had to hack a working solution together, so to speak. Tons of explanation out there on how to get it to work.
You could also do an Emby jail, but between Emby and Plex, both have their issues AND Emby still has some ways to grow before I'd switch to it. (I did buy a lifetime Plex Pass, which is fairly cheap, and Emby has about equivalent pricing for their lifetime pass as well.)
Currently using MusicBrainz Picard to organize my music collection. Ugh. Great program, just time-intensive.
So whether I can recommend FreeNAS really depends on your ecosystem. I don't know if you have Amazon Fire Sticks, Google Chromecast, Apple TV, Roku, or Android set-top boxes. Then it depends, if you're using a DLNA server like the Windows built-in one, miniDLNA, or some other variant, on whether there is an app that can see your DLNA server. Etc. Plex's recent change came with the deal with Warner Bros., I believe it was, for streaming. They have tried to clarify the changes, saying they will not be looking through your files other than to offer things like trailers, previews, and extras, blah blah blah.
Because of that, it is hard to make a suggestion without understanding your internal network and what you plan to use. I primarily used FreeNAS to run jails for miniDLNA and Plex. I wasn't too happy with the results after a couple months because it would drop a stream mid-movie. Tons of troubleshooting and headaches. For Plex, the problem was that I was still using the built-in Plex app on my TV, and its version was a mismatch for the Plex Media Server, so tons of bugs. (Now I have a separate device and it streams fine with Plex, although I have had issues with some of the 1.18.x versions of the media server crashing; the OS doesn't crash, just the PMS app.)
So I need more info...
-
Nope, I can't get past about 41.3K. https://hwbot.org/submission/4163632_mr._fox_3dmark___fire_strike_geforce_rtx_2080_ti_36252_marks
-
This link might be interesting for some of you folks. I remember some discussions in the past that related to this, so I'll just post it here if you wanna read it:
https://www.tomshardware.com/uk/new...ores-coming-in-the-era-of-a-slowed-moores-law
-
It is interesting, but software really needs to catch up, along with the anticompetitive crap from all parties ending and everyone heading toward open source. Nvidia GameWorks and Intel's Math Kernel Library (MKL) come to mind. GROMACS and MATLAB use Intel's MKL. With Intel, the software vendor has to apply a patch made years ago so that when MKL sees an AMD CPU, it doesn't default to the SSE1 and SSE2 instruction sets. But Intel doesn't make it easy to find documentation on needing the patch, nor does it make it known to vendors that they should apply it. This is holdover behavior from the Intel compiler days. GROMACS patched their software in October, but with MATLAB, you have to go in and hard-set the program to use AVX2 for AMD Zen CPUs. Once you do, the Zen 2 CPUs often beat Intel in the very tasks chosen to show off Intel's high points, meaning they kneecap their competitor and then say look at how good we are.
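For anyone who wants to check this on their own box: the widely shared workaround for MKL builds of this era is the undocumented MKL_DEBUG_CPU_TYPE environment variable (Intel later removed it, so treat this purely as an illustration). A minimal Python sketch, assuming a NumPy build that links against MKL:

import os

# Must be set before any MKL-linked library loads, because MKL reads this
# undocumented variable once at startup. "5" forces the AVX2 code paths
# regardless of the CPU vendor string.
os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

import numpy as np  # assumes this NumPy build is linked against MKL

# Any BLAS-heavy call now runs the AVX2 kernels on Zen as well.
a = np.random.rand(2000, 2000)
c = a @ a.T

Setting the same variable system-wide before launching MATLAB was the commonly posted recipe for the hard-set AVX2 behavior described above.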
But those games will be changing quickly. More interesting are Intel's CXL technology and Gen-Z, I think it is. CXL allows for memory, storage, and cache coherency between all of your add-in cards, system memory, GPU memory, etc. Everyone is signing on for this. Gen-Z is similar, allowing coherency between nodes using a PCIe PHY. So they are not necessarily competing tech; rather, they are complementary.
So with PCIe 5.0, we get CXL and Gen-Z. AMD joined the CXL coalition, so this is not an Intel exclusive; rather, it's an industry-changing shift, one Intel should be applauded for. AMD had a coherency standard over the PCIe 4.0 PHY (forgot the name) and claims they will continue supporting it and not abandon it while adopting Intel's standard, but Intel's, on paper, is more robust.
This is leading to something incredible: being able to scale and access other components' memory or storage in fewer steps, thereby speeding up processing. This allows for the creation of ASICs for specific tasks, better offloading, etc. That is what I'm excited about and where it may go, depending on good APIs and software to really use it.
In the next 10 years, it may well be a bigger shift than the one from the introduction of x64 to Sandy Bridge. Just think of how much more power we have now versus a decade ago. Granted, there was some stagnation in that decade, but it's still huge. What I can't see beyond is the 2025 mark, so to speak.
-
The Intel Xeon E5-1660 v3 came yesterday... tempting me to install it.
I was supposed to get a Noctua NH-D15 cooler today (yeah... I'm kinda going crazy with my desktop... this is why I can't have a desktop LOL), but it looks like it will be delayed.
Should I just throw in the Xeon with the current AIO cooler?
-
Yes, some changes are on the horizon; changes are afoot... soon to be gone are the days of 4-core CPUs, some RAM, and a hard drive or SSD... it seems some new standards are in the process of arriving, changing what we currently know as a PC. Yes, I'm being vague because I don't know all the plans, but there are some big changes afoot, I think. I would bet that if I build a PC this coming year, it might be the last one of this 'era' before some of these big changes come about (if I was to keep it 4 or 5 yrs).
-
If you can hold off until the PCIe 5.0 PHY and processors able to use CXL, do so. If nothing else, cool things may be possible with GPUs and add-in cards. But some of it won't shake out to consumers until 2022-23, about the time PCIe 6.0 is adopted, which Gen-Z and CXL will be compatible with. This is what they were trying to do for true "scalability", heading toward true heterogeneous platform support.
-
Nice, did you get a J-batch chip? The batch should start with a J instead of an L. I would install it just to make sure it works, and then you have a comparison of your AIO to the air setup.
-
3DMark Updated with New VRS Benchmark for Tier 2 (Only Available on NVIDIA Turing GPUs)
"Do note that you'll need to be running on Windows 10 version 1903 or newer for VRS to work."
--Booooo! Boooooo! Nazi bastards.
-
I forgot to look. But I went ahead and installed it. Running nice already. Can't wait to OC it.
Goal is to beat you and get #1 hopefully
https://hwbot.org/submission/4278839_talonnbr_cinebench___r20_xeon_e5_1660_v3_4145_marks/
-
-
This tool from Galax works with the Galax HOF Xtreme 2080 Ti 2000W vBIOS. You get a real old-school voltage slider and not the panty-waist wussboy voltage offset crap like MSI Afterburner and Precision XOC.
-
Nice! Happy Overclocking! Looks like someone decided to get involved and knocked my old score out. No problem, since that score wasn't me maxed out yet; I will go back and beat it if I can get over to their house lol.
-
-
And the desktop purchases continue...
So I figured, I have a 5820K chilling now, a GTX 980, a 600W PSU (a friend gave it to me), mismatched RAM in my current desktop, and an extra CPU cooler (I'm replacing my AIO with the Noctua NH-D15)... why not build another desktop with the parts?
LOL
Spent ~$140 for a motherboard and RAM. I found the same 3200 MHz RAM on sale for the same price as before.
I also ordered this Chinese motherboard; hope it's decent.
New PC Specs:
i7-5820K
16GB DDR4 2666 MHz
120GB SSD
GTX 980
600W Corsair PSU
120mm AIO
Generic Mid Tower Case
Chinese X99 motherboard
$78 MB + $65 RAM
Not bad lol
https://www.ebay.com/itm/X99-LGA201...sh=item44493148b6:g:uHYAAOSwTpldqoWm&LH_BIN=1
-
Dang, finally managed to beat one of my high scores.
https://www.3dmark.com/spy/9572482 | https://hwbot.org/submission/4297383_
For some reason Fire Strike runs like total crap with the latest release of 3DMark. It has some weirdness, like the VRMark trash benchmark does, and scores seem lower than normal.
-
https://hwbot.org/submission/4298344_papusan_cinebench___r15_core_i9_9900k_2242_cb?recalculate=true
The latest gaming flagship from Dell will need to be OC'd to come up to par with stock clocks on the old one. Newer doesn't always mean better.
Edit: I totally forgot I had this one in my bookmarks, intended for the bot. Maybe it's time I cleaned up.
https://hwbot.org/submission/4298606_papusan_cpu_frequency_core_i9_9900k_5205.1_mhz?recalculate=true
-
Good morning fellas,
It looks like the overclocking itch is in the air.
... must be that time of the year again...
Here is a casual bench with the 2080 Super w/ my 9900KF @ 5.3GHz on the Z390 Dark. I'm pretty impressed with both the CPU and GPU. All I have is an AIO on the CPU, and the GPU is completely stock with the stock blower. Nothing exotic at all yet...
Fire Strike: 2080 Super / 9900KF
I think this run has the fastest graphics score (32,241) in the cooling class... just looked on hwbot and it's even higher than the Russian fellow's 31,554.
Here's the 28,056 physics score... 9900KF @ 5.3GHz
@Mr. Fox I'm pretty surprised how well this super card holds its own... not bad.
https://hwbot.org/benchmark/3dmark_...Id=videocard_3175&cores=1#start=0#interval=20
#1 in the 9900KF class... second overall so far.
-
Great stuff!
That looks awesome. I'll have to check that out. Thanks for the heads up!
Yes indeed...Happy Overclocking! Looking forward to seeing your results.
I'll have a 5820K on the test bench soon, hopefully after I get through some of these projects. It'd be fun to bounce data back and forth...
Congrats Brother...reaching new heights...
Very true, and great scores. I forgot what I was able to do with the A51m... I'll have to dig those up.
-
When you consider that this machine is a laptop with Z170 and a CPU thermal solution that never contemplated anything more than a 6700K, the results you are getting are off-the-rails crazy amazing. You're doing great with that old beast.
-
It's fairly likely I'll buy a 3000-series NVidia GPU next year, and I'll whack that in my current rig if I want to hold out for a better platform. I think I would hold out until at least the next generation of AMD CPUs comes out, because there are rumours of them being Intel Gaming Killers: https://www.guru3d.com/news-story/ryzen-4000-and-x670-scheduled-for-late-2020.html. That article also supports what you say about PCIe 5.0 coming late 2021 or early 2022, as well as DDR5. Looks like I'm gonna have to make some tough choices come this time next year: do I wait even longer till 2022, or buy the best gaming platform in late 2020 when I see what AMD has to offer with the next gen?
-
I think I am going to wait till 2030 for a Ryzen gen 8 8950X with 128 cores/256 threads, an X1270 chipset, and a Radeon XV. Not to mention DDR12 and PCIe gen 13.
Overthinking the "what if I wait" scenario a bit? Pick a time when you are ready to purchase, and bite the bullet. No matter what you do, there will always be something better on the horizon.
-
Ha, with the multi-core power of your 2700X you might be able to wait till 2030! ;-) No, I don't overthink buying scenarios... I enjoy thinking about buying scenarios! But I also like to buy intelligently at the right time, which is the plan anyway. I believe in trying to ride the crest of the wave as early as possible rather than buying something at the end of a cycle; that way you enjoy better performance for longer... but if my PC wasn't performing like I needed it to and I was having to make compromises, then for sure I would upgrade as soon as that arose, though I hope that won't happen until the end of next year for my platform choice. I would then give myself the option between an AMD 4000 series CPU vs Intel, and buy the platform with the greater gaming performance. In the interim I'm almost definitely buying a 3000-series NVidia GPU next year, because I have a G-Sync monitor and also because I see the next gen as actually relevant when it comes to ray tracing performance/implementation. So, yeah, I'm thinking it through, I enjoy the process and I enjoy trying to be smart... it remains to be seen!
-
I agree to a degree. If it is for business or you have an immediate need, bite the bullet. Otherwise, it is important to plan around known quantities. PCIe changes are one, RAM changes are another.
For example, PCIe 5.0 will be available around 2021-22. PCIe 6.0 is being published in 2021 and will likely hit servers 18-24 months later, although it's unknown when consumers may get it. DDR5, though, will double bandwidth and will be available 2020-21. That will likely last longer than PCIe and will likely use a new platform.
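To put "double bandwidth" in rough numbers, here is a quick sketch of theoretical peak throughput (the DDR5-6400 figure is illustrative; early DDR5 parts may clock lower):

# Peak bandwidth = transfer rate (MT/s) x 8 bytes per transfer x channels.
def peak_gb_s(mt_per_s, channels=2):
    return mt_per_s * 8 * channels / 1000.0  # GB/s

print(peak_gb_s(3200))  # DDR4-3200, dual channel: ~51.2 GB/s
print(peak_gb_s(6400))  # DDR5-6400, dual channel: ~102.4 GB/s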
So, there is a convergence of new standards being implemented in 2021-22 time frame, making it a good time to buy. Also, if Intel can deliver on 7nm, then you will have another potential inversion of competition between Intel and AMD, meaning pricing could be very good at that time.
As to graphics cards, their cadence is roughly a gen every two years, with some exceptions. So with big Navi, and Ampere as a die shrink from Nvidia on a mature 7nm node, it will be a very competitive time as well. Since Nvidia's monster Turing die was designed for 10nm or smaller, the die on 7nm will still be large for 7nm, but much smaller than today's, meaning yields will be up and defects less likely to create unusable dies. That means they can be sold at lower prices (unless both Nvidia and AMD decide not to bring down prices and just set price to relative performance, which happens a LOT). Intel won't make a splash, so they don't matter next year.
So that is why next year is a good time to buy, although you may want to wait until both companies products are released.
But all of that assumes you can wait. If you can't, or if a great deal lands in your lap, take it! Otherwise, patience is fine.
I mean, Intel isn't innovating on desktop, and the soonest I would want to grab something would be Rocket Lake, which is a back-port of the Willow Cove core to 14nm+++++. AMD may also be releasing Zen 3 earlier than mid-summer, like they did this year with Zen 2 (that is a bit speculative). Most people already have some version of covfefe refresh or Zen. As such, Comet Lake doesn't add anything, Cascade Lake-X is a rehash, and Zen 2 is roughly equal in performance to Intel. So unless you have a need, why buy now?
-
Idk what I am doing wrong. Either I got a really crappy CPU for overclocking or I'm doing something wrong.
I can't get this thing stable at 4.4 GHz+. I even tried a +0.300 offset on CPU & Cache in the BIOS, which brings it to 1.37V, and it still BSODs under load.
Auto everything with multipliers at 43x brings it to 1.27V, and it seems to work at this frequency.
My goal was at least 4.5 GHz all-core and maybe 4.6-4.7 GHz single-core 24/7.
-
-
So you would be happier if they downclocked their CPUs at stock so you can push them further via manual OC?
-
Some people ask a similar question about the CPU, and I call BS on that as a lame excuse for it being unsuitable for overclocking enthusiasts. People who think in that vein do not really understand or appreciate overclocking; they're just making excuses for a product they like. If it cannot handle meaningful overclocking after elevating the voltage and running it colder, it's a product that sucks at overclocking, because it doesn't scale and respond properly under more favorable conditions.
There is a "reference" clock speed and voltage specification that chip manufacturers use to establish a normal baseline for their chips to function well for average users with average motherboards and average thermal solutions. They have to do that to avoid stability and functionality variances in motherboard manufacturing (or GPU PCB design) and variances in operating conditions (thermals) that are outside of their control or influence. If they didn't, people running things stock would have a hit-or-miss experience based on their hardware purchasing decisions and the variety of operating conditions. If they pushed all of their chips to the edge of their functionality, lots of people would have instability issues and end up labeling those chips as buggy and unstable, even though their bad experiences might be their own fault for not making good choices in cooling components, buying low-quality supporting hardware, or using the chips in an environment that is too warm.
Having a CPU or GPU not respond well to increases in clocks and voltage beyond the stock reference values, no matter how well designed the motherboard (or GPU PCB) and thermal solution might be, doesn't mean it is "optimized" or "already overclocked" from the manufacturer. It only means it sucks at overclocking and is not well suited for it. The reasons for that can vary as well. The manufacturer may engineer limitations into the product, or may under-engineer it in such a way that it is not properly equipped to handle stress levels beyond the conditions under which they intended it to be run. Referring to those products as "unlocked" is just a marketing scam to lure overclocking enthusiasts. They might as well be locked, because having the clock multipliers unlocked doesn't serve any useful or beneficial purpose and being able to adjust them has no practical application.
-
You evidently are so stuck in thinking about how products used to be made that you don't understand how boost works.
First, AMD does scale with voltage and cold. GN showed that just making it colder gives scaling; they showed about 25 MHz per 7°C. You see similar scaling on Nvidia with temperature. You also get scaling with voltage; otherwise the CPUs could not get to 5.3+ GHz. Period. So you mischaracterize a product because it doesn't give you a higher frequency. That's ignorant.
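To make that figure concrete, here's a naive linear extrapolation (real silicon won't scale this cleanly all the way down, so treat it as a ballpark):

# Rough extrapolation of the ~25 MHz per 7 C cold-scaling figure cited above.
def freq_gain_mhz(temp_drop_c, mhz_per_step=25.0, step_c=7.0):
    return temp_drop_c / step_c * mhz_per_step

print(freq_gain_mhz(30))   # e.g. a 30 C drop from better cooling: ~107 MHz
print(freq_gain_mhz(240))  # e.g. ~60 C ambient load temps down to LN2: ~857 MHz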
Intel left so much on the table because they produced so many chips and had a much higher variance in silicon, which is why Silicon Lottery would show a 300-500 MHz swing from worst to best silicon, while AMD's binning reduces that to a 200 MHz swing. You may not like it, but those are facts.
Moreover, since AMD's binning is so good, Intel has been forced to clock their chips closer to the max. See the 9900KS: you only get a 100-300 MHz OC, depending on silicon and cooling.
Further, even looking at performance after OCing vs AMD's boost, depending on workload, AMD is slightly behind or slightly ahead.
Next, the boost algorithm already adjusts for the factors you describe in different deployments, such as inadequate cooling, meaning there is no reason to leave that performance on the table anymore from a commercial perspective.
All you don't like is that you don't get a high frequency number, due to process and architecture differences, even while AMD is literally taking more and more overclocking records. Why ignore that?
Edit:
Edit 2: cold-scale analysis starts at 11:26.
-
No, I do get all of that. I see it and understand it. I am a victim of the same stupid BS as an NVIDIA owner. Boost sucks. Manual overclocking is best. I get that the control freaks prefer boost and that it takes less skill, so noobs can feel special. Low frequency gains mean crappy overclocking as far as I am concerned. Having to use LN2 for a meager 25-30% frequency gain is stupid. I don't buy the notion that overclockers need to change. Hardware manufacturers need to do what their customers want if they want to keep them as customers. I don't like it when the tail wags the dog... I'd say chop off the tail. It's all about the dog, not the tail.