The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    *Official* NBR Desktop Overclocker's Lounge [laptop owners welcome, too]

    Discussion in 'Desktop Hardware' started by Mr. Fox, Nov 5, 2017.

  1. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yep, I tried locking at a lower voltage and frequency, one which I knew was stable when using my normal overclocking procedure (core offset slider), yet it still showed reduced performance - something inherent about locking the voltage causing the card to perform about 2-5% worse for any given clockspeed.

    Yeah, the firmware & drivers seem very complicated for Pascal onwards, there seems to be a lot going on behind the scenes. I wonder what the 3000 series cards will be like from NVidia, I wonder how good the ray tracing will be and how good the extra performance bump will be in conventional rendering (not ray tracing) - I wonder how much traditional rendering power they will sacrifice for raytracing capability space & power, and I also wonder how they will overclock. On that last point, I imagine they won't overclock any better than Turing, if anything I would expect them to be worse as they try to eke out as much performance from the silicon in a stock condition - which seems to be the pattern lately in both new CPUs & GPUs, not such a bad thing for consumers but unexciting for overclockers.
     
    Mr. Fox likes this.
  2. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,651
    Trophy Points:
    931
    Yup, that about sums it up. I think AMD is only going to make it worse since neither their CPUs nor GPUs seem to be worth a damn at overclocking. Hopefully, that will change. Being powerful just isn't good enough. Makes me not want to buy anything at all. Just keep what I already have and find something else to do. Without the fun of overclocking, I have very little remaining interest in computers as a hobby. They become a mundane utility, like a screwdriver, electric drill or air compressor. Useful and necessary, but nothing to get excited about. It only needs to work when you need it to. The rest of the time it is out of sight, out of mind.
     
    Papusan and Robbo99999 like this.
  3. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    Find (or start) a FOSS project that interests you and hack on that. That's a hobby you can keep having fun with as you add new features, fix bugs, etc.

    If performance is your thing (it's one of mine), the sky's the limit even if overclocking no longer does much. There are plenty of things out there whose performance is downright awful; if it's free/open source, you have the opportunity to bum the performance, and you'll get a lot more improvement than anything overclocking will accomplish. Way back in the first (well, first-and-a-half) Pentium days -- the 90 MHz P54C, IIRC -- I found a way to use the FPU to copy memory, yielding something like 80% increase in throughput (20 MB/sec => 36 MB/sec) and 10% overall system performance improvement compiling the Linux kernel. Had quite a bit of fun tweaking the exact load/store sequence to get the very best performance.
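
    The trick described above can be sketched in portable C. This is not the original code (that used x87 FLD/FSTP, which moved 64 bits per instruction while the P54C's integer registers were only 32-bit); it just shows the same wide-word structure of the idea, using 64-bit integers for portability:

    ```c
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Copy 8 bytes per loop iteration instead of 4.  On a P54C the
       FPU's 64-bit loads/stores doubled the data moved per memory op
       versus 32-bit integer moves; a modern compiler will vectorize
       this loop anyway.  Assumes dst and src are 8-byte aligned. */
    static void copy_wide(void *dst, const void *src, size_t n) {
        uint64_t *d = dst;
        const uint64_t *s = src;
        size_t words = n / sizeof(uint64_t);
        for (size_t i = 0; i < words; i++)
            d[i] = s[i];                          /* 8 bytes at a time */
        memcpy((char *)dst + words * sizeof(uint64_t),   /* tail bytes */
               (const char *)src + words * sizeof(uint64_t),
               n % sizeof(uint64_t));
    }
    ```

    The fun rlk mentions, tweaking the exact load/store sequence, corresponds to reordering and unrolling that inner loop to keep the memory bus saturated.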
     
    Mr. Fox likes this.
  4. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,651
    Trophy Points:
    931
    Yes, I like performance and would pay more to get more. Always have done that. However, when everyone who buys the same thing gets the same thing, and you can't overclock it to get more, I'm done with it at that point. I'll save my money and buy some cheap piece of trash that I can toss into a dumpster when it craps out and go buy another one. No joy for me in belly button hardware or having a level playing field with no way to do what others can't do, won't do or don't know how to do in terms of overclocking. Meh.

    I do like tweaking at a software level, but that's generally something I do to eke out a little more mojo (only takes 1 or 2 points to win or move up a notch on a leaderboard) only after I have overclocked the hardware to the edge of its functional limits. If the only thing left is software and firmware tweaking because the hardware doesn't cooperate, I don't think that I would bother with it. Definitely not an acceptable substitute. To me that is sort of like eating crumbs off the floor when there is a freshly baked loaf of bread in the pantry.
     
    Last edited: Aug 13, 2019
    Papusan and Rage Set like this.
  5. makina69

    makina69 Notebook Consultant

    Reputations:
    141
    Messages:
    227
    Likes Received:
    361
    Trophy Points:
    76
    OCTT
    small packages

    4.9GHz, LLC4, 1.370V BIOS

    [​IMG]
     
  6. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,647
    Trophy Points:
    931
    All depends on AMD next year. Nvidia won't change as long as AMD doesn't want to play the same game.
     
    Vasudev likes this.
  7. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    How do you mean, what do you see as the possible scenarios?
     
  8. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,682
    Likes Received:
    5,068
    Trophy Points:
    531
    Easy. Both Intel and AMD fight over the mainstream GPU market while Nvidia releases another refresh with a smaller die, which likely keeps them on top.

    AMD will have a 2080 TI killer next year, and so will Nvidia.
     
  9. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    If you see my original post, I posted some specific musings/wonderings over NVidia's next release, not generalisations on NVidia/AMD rivalries, so what you're saying is not the substance of my musings (mostly).......but yes.

    I don't think anyone knows the answer to my wonderings other than NVidia, so I guess I wasn't really looking for answers, but those elements I mentioned are the ones I'm interested in, I'll be interested to read those product reviews when they come out and to see which decisions they have made (whenever the hell that is) - and then maybe I'll buy one.
     
    Last edited: Aug 14, 2019
    Rage Set likes this.
  10. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,651
    Trophy Points:
    931
    I doubt AMD will release a GPU that can compete with an overclocked GeForce GPU. I hope I am wrong because the decade-long trend of NVIDIA annihilating them needs to be broken, but overclocking doesn't seem to be something they understand or care about. They seem primarily focused on capturing mainstream gamers that want decent gaming performance at the lowest price possible and don't do any overclocking. If they do release a GPU that beats a stock 2080 Ti, but still doesn't overclock well, (like their current GPUs,) then they will have accomplished nothing that I view as valuable and it wouldn't be something I would waste any of my money on. But, we can and should hope they do. I know I hope so despite my skepticism that they will.
     
    Rage Set likes this.
  11. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    In terms of % performance gained by overclocking through the generations, where do we stand?

    8600M GT - I got a 75% overclock on it (if I remember)
    Fermi GTX 560M - I got a 22% overclock
    GTX 670MX - with a modified vBIOS I got an 87% overclock
    GTX 1070 - I got a 16% overclock over Founders Edition.

    I can't really use those for comparison though because the first 3 are artificially gimped laptop cards, so there's more headroom available for overclocking if you had the cooling, or vBIOS mods to release the extra potential. I suppose you'd have to compare overclock potential of desktop cards through the generations. Either way my latest card is the one that's overclocked the least. Does your 2080 Ti overclock less than your GTX 1080 Ti? I don't think you've had other desktop cards, right? I'm kind of expecting a downwards trend in overclock headroom through the generations. It's a pity that monster cooling is not enough to guarantee a good overclock.
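
    For reference, the headroom figures above are just the percentage gain of the overclocked clock over the stock clock. A one-line helper, with purely hypothetical clocks in the test below:

    ```c
    /* Overclock headroom as a percentage over stock,
       e.g. stock 1750 MHz reaching 2030 MHz -> 16%. */
    static double oc_percent(double stock_mhz, double oc_mhz) {
        return 100.0 * (oc_mhz - stock_mhz) / stock_mhz;
    }
    ```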
     
    Mr. Fox likes this.
  12. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,651
    Trophy Points:
    931
    The 2080 Ti and 1080 Ti overclock are about the same percentage over stock on core. The memory overclock seems a little more robust on the 2080 Ti. This is primarily a cancer firmware problem, not a hardware problem for NVIDIA. ( @Prema can confirm that.) But, that's nothing new either. Not sure what the reason/excuse is for AMD GPUs sucking at overclocking. Firmware has always been the issue with NVIDIA GPUs, especially their artificially castrated mobile cards. NVIDIA has been crippling their stuff and manipulating performance for as long as AMD has been lurking in the shadows as the second fiddle option for GPUs. That's because they can get away with it, so they take advantage of it. We need AMD to get with the program on GPU overclocking and that will (hopefully) motivate NVIDIA to knock off their manipulation and performance metering nonsense.

    On my 7960X I bench at an ~89% overclock, which is truly amazing. It was also a very high overclock percentage over stock with my 7700K and 8700K CPUs. AMD and NVIDIA need to take some cues from Intel on this, as Intel have always seemed to have a soft spot in their big blue corporate heart for overclockers. I hope that never changes. As Ryzen improves and Intel has to work extra hard at holding their performance crown, this is going to be what distinguishes them from AMD, so I suspect they will keep up the good work on that.
     
    Last edited: Aug 14, 2019
    ajc9988 and Robbo99999 like this.
  13. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    hm, difficult to predict with regards to Intel cpus staying overclocking friendly. if AMD keeps pushing them it could also be likely that Intel will leave less and less overclocking headroom on the table in order to still beat AMD. otherwise they would have to work twice as hard to provide both great stock AND overclocking performance. same thing could be possible for nvidia if amd started pushing the gpu envelope (IF ;) )

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
    ajc9988, Convel and Robbo99999 like this.
  14. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    So I guess there's something I'm still not seeing about this whole overclocking issue. What's the difference between overclocking and getting the same performance at stock clocks? Is it a feeling of getting something for nothing? The sense of somehow being better than everyone else who can't do it? Because if you really look at it, there's not a lot of difference between a vendor designing a chip to always run as fast as possible and one deliberately designing a chip to run slower and a customer then figuring out how to restore full performance. If a chip can run at 5 GHz, and the vendor marks one SKU to run at 4.5 GHz and the other to run at 5, and they're the same silicon (not actually binned differently), what's the difference?

    In the bad old days, Intel sometimes left a lot of performance on the table for market segmentation reasons. The old Celeron 300A ran just fine at 450 MHz -- a 50% clock boost, albeit with a smaller cache than the Pentium II (?) equivalent that ran at a similar clock speed and cost a lot more. But it's a lot harder to get away with that now that there is competition. If Intel deliberately downclocked something they'd simply be losing out unnecessarily to AMD, and likewise the other way around. The more competition there is, the less you'll be able to overclock for the very simple reason that the market will demand the best possible performance out of the box.

    The whole idea that Intel is "overclocking-friendly" and AMD isn't somehow strikes me as backward given their history. AMD has generally -- and now does across their line -- refrained from locking frequency, voltage, or multiplier, while Intel generally has either locked or greatly restricted multipliers and often frequencies except for a very few SKUs. AMD's not unfriendly toward overclocking; they just do a very good job in the hardware and firmware of squeezing everything out of the chip. They don't leave as much performance on the table as Intel does in some of their 9900K's.

    If you really want to try to overclock AMD parts, I suspect the ones to go for are the 3700X and the 3600. Those are probably just rejected 3800X's and 3600X's, but maybe if you get clever enough you'll be able to squeeze out a bit more. And the 3700X is quite a lot cheaper than the 3800X.

    Now, given that AMD still hasn't quite caught up in single thread performance, if you can squeeze more of that out of a 9900K, you'll gain a little something. But that's pretty much a special case, and I really doubt that the Ice Lake/Tiger Lake equivalent will be as amenable to overclocking, because Intel simply won't be able to afford giving any performance up, since Ryzen 2 is not the end of the road for AMD. And as long as Lisa Su is in charge, I rather doubt AMD will gratuitously forfeit performance.

    To the extent that nVidia GPUs can be overclocked, I see that mainly as their believing that AMD can't compete (yet) with their high end devices. But given all of the "factory overclocks" around, I suspect they deliberately hold back on the rated clock speed to allow their downstream IHV partners room to sell something. The Turing "Founders Edition" SKUs run at the rated clock speed, which I think is really an underclock from what those parts are capable of. If they ran them flat out, their IHVs wouldn't have nearly as much room to differentiate their offerings. But if AMD really ramps up the pressure, I expect a lot of that will evaporate.
     
    ajc9988 and Robbo99999 like this.
  15. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    BTW, I'm not a stranger to overclocking myself. Aside from the aforementioned Celeron 300/450A, my previous system was an i7-5820K (which is reputed to be very amenable to overclocking). In the course of time, the 5820K at stock was no longer quite fast enough to run my increased workload, but at 4.2 GHz it could handle it (at 4.4, it simply wasn't stable). The other meaningful options -- 5960X, 6950X -- were simply too expensive for me to want to drop money on a dead end, and while there are quite a few Xeon E5's on eBay, they aren't all that cheap and have awful single core performance -- so I tried OC'ing the 5820K. Eventually I think it became just too unstable to be usable, and I had to replace it sooner than I wanted to (I hoped to replace it this fall, with a Ryzen 2). The 2700X is quite fast enough for now, although I wouldn't mind some more performance.
     
    ajc9988 likes this.
  16. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,682
    Likes Received:
    5,068
    Trophy Points:
    531
    I'll get this out of the way first. While Intel has been abusing their 14nm process for the past several years, it does show how ahead of the curve Intel's 14nm process node was, that they are still able to push the tech. Yes, the IPC for Intel's 14nm process node has not improved greatly over the years and most of their processor tech improvement comes from the addition of more cores. Yet a lot of people seem to forget that it took AMD developing a new core architecture and producing it on a new process node in order to match and exceed Intel's current offerings. Intel's struggle with 10nm is well documented. I doubt that Intel will be competitive with 10nm on the DT next year. That is why they are concurrently developing their 7nm tech with 10nm. When that comes to market, I hope AMD has an answer. For now, AMD has the overall crown and they do deserve the accolades.
     
    ajc9988, Convel and Robbo99999 like this.
  17. ssj92

    ssj92 Neutron Star

    Reputations:
    2,446
    Messages:
    4,446
    Likes Received:
    5,690
    Trophy Points:
    581
    Good quality. What mic are you using?
     
    Rage Set likes this.
  18. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    About the cancer firmware problem for overclocking & NVidia with the 2080ti - how do you mean, are you not getting the full performance that you should be getting from your percentage overclock you have on the 2080ti, or is the firmware preventing higher clocks (maybe not enough voltage allowed, although I think you're already at 1.2V aren't you?)?

    89% overclock on your CPU (7960X), that's very cool! Actually, I think this is where the direction lies for overclocking enthusiasts and the companies that produce the hardware - larger chips that require more cooling to allow them to run at the max frequency that the silicon & architecture can support. That's the problem with small chips that are relatively easy to cool - you bump into the frequency voltage limits of the silicon, but with larger chips you hit the limits of cooling (TDP) way before you get to a max supportable frequency. Both for NVidia and Intel, I think they need to be using some bigger silicon (or multiple normal sized chips/chiplets) if overclocking is going to continue to be successful - allowing people with exotic cooling (good water and upwards) to take advantage of the potential that is there. Unfortunately that's an expensive way to manufacture products, which is linked to what @rlk is referring to in his post below:

    Yeah, I agree with that, I can see overclock headroom diminishing as market competition increases as all companies try to extract the max performance from the least production costs (max stock performance per mm squared of silicon). And same as you @jaybee83 below.

    AMD done good! Yeah, I'd say AMD has the overall crown right now, but they're still not the gaming king, and for folks who for whatever reason place value on single core performance, Intel is still in the lead, especially when you take CPU overclocking into account.
     
  19. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,682
    Likes Received:
    5,068
    Trophy Points:
    531
    Thanks! I'm using a Shure SM7B mic.
     
    Convel likes this.
  20. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,647
    Trophy Points:
    931
    More milking? Bro @Mr. Fox you want a 2080Ti Super card?

    [​IMG]

    Nvidia Could Be Working on Another GeForce RTX GPU Tomshardware.com | Aug 16, 2019
    The GeForce RTX moniker implies that the unknown graphics card is most likely aimed towards the gaming market. That's practically the only clue we have at the moment. So, it could be something like a GeForce RTX 2080 Ti Super or GeForce Titan RTX Black. However, we're more inclined to the first since the GeForce Titan RTX already employs a maxed-out TU102 die.
     
    Robbo99999 and Rage Set like this.
  21. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,682
    Likes Received:
    5,068
    Trophy Points:
    531
    Could it be a Titan RTX without the 24GB of memory? The question will then be, why buy it? Unless the Titan RTX wasn't using the full TU102 die. That would make a lot of Titan RTX owners angry. On the other hand, I welcome a new version of the 2080 TI and that is coming from an owner of several RTX 2080 TI's. Why? Because I doubt they will be an actual improvement over my KPx cards, and if you were to buy this TI Super (or whatever they are going to call it) now, it will be superseded shortly next year.
     
  22. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,755
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    I doubt that distinction will be good enough. OCing is a much smaller community than gamers, which is itself a small slice of the desktop market. Intel is already doing the KS series, which clocks all cores to 5GHz stock. Do you really think that chip, which is basically the same silicon as the K/KF series, is leaving headroom for some wondrous OCing? Doubtful. Instead, I have to agree with @jaybee83 on this one.

    Later this year, we get Cascade-X on the same socket as Skylake-X, but still 14nm. Not sure there is much to be gained here.

    Then Comet and Rocket are said to be 14nm, and Intel has said they are using 14nm through 2021. If true, they will not leave anything on the table vs AMD at 7nm+ node, then 5nm in 2021. TSMC also showed off a massive silicon interposer.

    [​IMG]
    https://www.tomshardware.co.uk/tsmc-interposer-processor-hbm-moores-law-not-dead,news-61442.html

    If AMD uses that with the N5P from TSMC, they can ditch IF and decrease latencies, which would put more pressure on Intel. So I'm not hopeful of Intel leaving room on the table for OCers moving forward.
     
    jaybee83 likes this.
  23. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,755
    Messages:
    6,121
    Likes Received:
    8,849
    Trophy Points:
    681
    Yeah, there is no 10nm desktop planned next year. It is 14nm Rocket Lake, after Comet lake drops in Q4 or Q1 2020. As for 7nm, Intel needs to deliver IN 2021. If they do not, they are in a really really bad situation, including possibly dumping their fabs.

    Intel 7nm will ALSO only be used for mobile, most likely, in 2021, not desktop. The slides from Intel show 14nm, 10nm, AND 7nm being used in 2021. That is when people suspect Intel moves its desktop chips to 10nm, which will NOT be enough. Intel moves to Ice Lake - SP chips allegedly in Q2 2020. We will see if they deliver that on time before bets can be placed on desktop 10nm chips at some point or if 7nm will be delivered.

    As to AMD and what they are doing, take a look at the size of that silicon interposer TSMC is making (2400mm^2). If AMD moves to that at 5nm, they could put 16x8-core chiplets on it, move the I/O die to the interposer, along with logic routers (IF wouldn't be needed as the interposer replaces it for core chiplet comms, while over PHY, PCIe 5.0, CXL, and Gen Z will be ready for coherent connects), while being able to put 64GB of HBM2, 96GB of HBM2E, or some amount of HBM3 on the interposer as well, while using DDR5 (set for 2021) to keep the HBM fed.

    The interposer alone, according to TSMC, can give a 7-10% performance boost (although they didn't give the comparison point).

    You then have the performance uplift of the 5nm node shrink, plus the improvements in IPC for Zen 3 and Zen 4 to add to it.

    Taken all together, I'm not sure AMD is sweating Intel's 7nm until Intel proves they can deliver, and even then, it won't be in Server until 2022 and possibly later than that for desktop.
     
    jaybee83 and Rage Set like this.
  24. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Weird, I saw something recently where an NVidia representative (can't remember who), said that there was categorically not going to be a 2080ti Super. Don't know what to think, I don't think it's happening, I think that's it for NVidia cards until 7nm.
     
    Last edited: Aug 16, 2019
  25. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,647
    Trophy Points:
    931
    With the 2080 Super nearer the 2080 Ti, I expect Nvidia wants a new Super card on top, to be sure AMD will still be behind. A nice way to extend the time until 7nm is out. Milk on old is Gold :)
     
    Vasudev likes this.
  26. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,045
    Messages:
    11,278
    Likes Received:
    8,815
    Trophy Points:
    931
    2080 super Ti and Nvidia RTX Titan King/Wizard should amp up NGreedia sales until Intel or AMD comes with a superior GPU that blows RTX out of the water.
     
  27. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    Well, they need something. They can't claim "end of crypto sales boost, return to normality" the next time they have heavy YoY drops in their financials, but super-iterative gains in niche high end SKUs that have no direct competition from AMD ain't it.
     
  28. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,647
    Trophy Points:
    931
    Most likely to stop AMD's coming Nvidia Killer, and to buy themselves enough time to push out 7nm. No need for brand new chips before they need them. And Intel won't have a high end gaming card ready for 2020.

    AMD Readies Navi 23 High-End ‘NVIDIA Killer’ GPU For Radeon RX Flagship, Arrives Next Year With Ray Tracing Support
    "It will be interesting if these options compete against the current king of the hill, the GeForce RTX 2080 Ti (SUPER :D) or whatever NVIDIA has to offer in 2020 (GeForce 30 Series?)"
     
  29. Vasudev

    Vasudev Notebook Nobel Laureate

    Reputations:
    12,045
    Messages:
    11,278
    Likes Received:
    8,815
    Trophy Points:
    931
    At the max it'll be a 2080 Super competitor, but not a Ti or Quadro killer.
     
  30. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Ah, well, I don't think NVidia would want to let AMD spend too long as king of the hill, so they've gotta release something if 7nm isn't ready any time soon after that - so maybe a 2080ti Super after all, even though they said they wouldn't do it. We'll have to see.
     
    Vasudev and Papusan like this.
  31. makina69

    makina69 Notebook Consultant

    Reputations:
    141
    Messages:
    227
    Likes Received:
    361
    Trophy Points:
    76
    i9 9900k 5ghz uncore 47

    [​IMG]
     
    ajc9988, Rage Set and Papusan like this.
  32. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    New driver being released by NVidia in 1.5hrs time from now - up to 23% faster in some existing games (e.g. Battlefield V)!
    https://www.guru3d.com/news-story/n...clusive-geforce-experience-download-only.html
    I wonder if this means that BF1 will get some improvements too, it's the same engine as BFV I think. I wonder if this will mean performance improvements for Pascal and older architectures too, or just Turing (the slides at the link above only show Turing performance).

    Also, this driver has New Ultra-Low Latency Options For Faster Input Response. And here's some stuff copy & pasted on that (the "Ultra Mode" is the new option to try):
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    "The NVIDIA Control Panel has -- for over 10 years -- enabled GeForce gamers to adjust the "Maximum Pre-Rendered Frames", the number of frames buffered in the render queue. By reducing the number of frames in the render queue, new frames are sent to your GPU sooner, reducing latency and improving responsiveness. With the release of our Gamescom Game Ready Driver, we're introducing a new Ultra-Low Latency Mode that enables 'just in time' frame scheduling, submitting frames to be rendered just before the GPU needs them. This further reduces latency by up to 33%. Low Latency modes have the most impact when your game is GPU bound, and framerates are between 60 and 100 FPS, enabling you to get the responsiveness of high-framerate gaming without having to decrease graphical fidelity.

    To select a Low Latency mode, open the NVIDIA Control Panel, head to "Manage 3D Settings", and scroll down to "Low Latency Mode". Three options are available:

    • Off: The game's engine will automatically queue 1-3 frames for maximum render throughput
    • On: Limits the number of queued frames to 1. This is the same setting as "Max_Prerendered_Frames = 1" from prior drivers
    • Ultra: Submits the frame just in time for the GPU to pick it up and start rendering
    Our new Low Latency Mode is being released in beta with support for all GPUs in DX9 and DX11 games (in DX12 and Vulkan titles, the game decides when to queue the frame)
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     
    Last edited: Aug 20, 2019
  33. makina69

    makina69 Notebook Consultant

    Reputations:
    141
    Messages:
    227
    Likes Received:
    361
    Trophy Points:
    76
    what do you mean .. will i have more fps playing bf5?
     
  34. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Yeah, the slides at that link above are showing a 5-7% performance improvement for Turing cards in BFV - in other games it can be greater or lower. I'm hoping this will extend to Pascal cards too.
     
    makina69 likes this.
  35. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Tried out that new NVidia driver that I linked above. The same or slightly lower 3DMark & Timespy results. Tried in BF1 with Ultra Low Latency selected in NVidia driver - didn't actively notice any reduction in latency, but think I saw some slightly lower framerates in some places and perhaps a bit more stutter, I didn't play any better with it selected (but hard to tell because each lobby is different with different players). I think I'm gonna put the Latency settings back to default in the NVidia driver and wait for some official sites to do a latency and framerate analysis of this new driver/feature, because it seems that we might lose smoothness and framerate by activating the Ultra Low Latency option.
     
  36. makina69

    makina69 Notebook Consultant

    Reputations:
    141
    Messages:
    227
    Likes Received:
    361
    Trophy Points:
    76
    I can't download the new drivers

    404-not found
     
  37. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,647
    Trophy Points:
    931
    Arrrrbol and makina69 like this.
  38. ssj92

    ssj92 Neutron Star

    Reputations:
    2,446
    Messages:
    4,446
    Likes Received:
    5,690
    Trophy Points:
    581
    I just scored three desktops for cheap. Will be selling one to a friend, but the other two are:

    AW X51 R2 4790K + GTX 960
    CyberpowerPC: i7 5820K + GTX 980

    will benchmark hopefully soon
     
  39. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    @Mr. Fox , this is what you want in your next PC!
    https://www.notebookcheck.net/Cereb...t-with-chip-larger-than-an-iPad.431193.0.html

    But joking aside, NVidia and AMD need to start making bigger chips and cramming in more cores; some single-card GPUs in history show that 500W can be cooled on one card - like the AMD 295X2 and the NVidia Titan Z. For flagship cards I think they should be trying to fill a 500W budget for absolute maximum performance, and you can't fill 500W with the size of the current chips because you reach the max 'safe' voltage & max frequency first. Maybe they need to look at efficient chiplet-style GPU designs, so that they can combine a number of 'cheap' smaller chips into a massively performing yet economically viable (not ridiculously expensive) GPU that can reach 500W.
    (Here's a table showing the power consumption of a number of GPUs across recent generations: https://www.guru3d.com/articles_pages/powercolor_5700_red_devil_review,8.html)
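The "you reach max safe voltage & frequency before you fill 500W" argument can be sketched with the standard dynamic-power relation P ≈ k·V²·f, where k stands in for switched capacitance and grows with die size / transistor count. All the numbers below are illustrative assumptions, not measured values for any real card:

```python
# Rough dynamic-power scaling sketch (P ~ k * V^2 * f). The constant k stands
# in for switched capacitance, which scales with die size / transistor count.
# All figures are illustrative assumptions, not measurements of a real GPU.

def dynamic_power(k: float, volts: float, freq_ghz: float) -> float:
    """Approximate dynamic power draw in watts."""
    return k * volts**2 * freq_ghz

# Calibrate k for a hypothetical card drawing 250 W at 1.05 V / 1.9 GHz:
k = 250 / (1.05**2 * 1.9)

# Pushing to an assumed "max safe" 1.10 V and 2.1 GHz only gets you so far:
oc_power = dynamic_power(k, 1.10, 2.1)
print(f"overclocked small die: ~{oc_power:.0f} W")  # still well short of 500 W

# Doubling the silicon (k -> 2k) at stock volts/clocks fills the budget instead:
big_die = dynamic_power(2 * k, 1.05, 1.9)
print(f"doubled die at stock:  ~{big_die:.0f} W")
```

Under these assumptions, overclocking the small die within safe voltage tops out around 300 W, while doubling the silicon at stock settings lands on the 500 W budget directly - which is the thrust of the argument above for bigger dies or chiplets rather than more voltage.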
     
    Mr. Fox likes this.
  40. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,611
    Messages:
    1,682
    Likes Received:
    5,068
    Trophy Points:
    531
    The KPx cards routinely hit 500W or higher with the XOC vbios. I can only imagine what KP would do with a chip that starts at 500W.
     
    jaybee83, Papusan and Robbo99999 like this.
  41. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    That's right, 'normal safe' voltages for a 500W flagship card, and then Kingpin cards at further extended voltages for higher than 500W. Yeah, there just plain aren't enough transistors on these high-end cards; they're holding back on what can be fitted on a single card. Sure, they could still keep the 250W cards and below, but just expand the lineup to fully make use of what is possible on a single card. It would probably have to be a multi-GPU card or a new architecture built from a number of chiplets, because massive single-die silicon would probably be too expensive. I just don't think they're currently maxing out the cooling that could be available to a single card - I think they should try to use that whole 500W budget (based on the fact that 500W cards have existed in the past).
     
    TBoneSan, Rage Set and Papusan like this.
  42. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,149
    Trophy Points:
    931
    unfortunately, that would just be a perfect excuse for nvidia to charge 5k for the 500W flagship gpu, chiplets or not!

    Sent from my Xiaomi Mi Max 2 (Oxygen) using Tapatalk
     
    Robbo99999, Rage Set and Mr. Fox like this.
  43. rlk

    rlk Notebook Evangelist

    Reputations:
    146
    Messages:
    607
    Likes Received:
    316
    Trophy Points:
    76
    The i7-5820K was a pretty nice chip in its day, but the Ryzen 2700X is much faster, particularly on parallelizable workloads - even the single-thread performance is better. However, I didn't have much luck overclocking my 5820K, perhaps because it was already quite old by the time I tried. I expect that a Ryzen 3600X or 3700X would completely blow it out of the water.
     
  44. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,651
    Trophy Points:
    931
    I've never been crazy excited about die shrinks like some folks are. Smaller has its place, but I generally find more value, performance and delight in bigger. I don't want it to use less power, less voltage or be smaller. I just want it to run like a banshee on acid and crush anything that gets in its way. Small stuff also seems to be more fragile and usually runs hotter. Before we even knew anything about them I was really excited about Threadripper CPUs because they were so stinking massive. I love that.
     
    TBoneSan, Robbo99999 and Rage Set like this.
  45. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,647
    Trophy Points:
    931
  46. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,235
    Messages:
    39,339
    Likes Received:
    70,651
    Trophy Points:
    931
    Man, I totally forgot about these old benchmarks. Haven't run them in a very long time. They both still look really good. This is with the GPU running vBIOS defaults.

    Tropics.jpg Sanctuary.jpg
     
    Last edited: Aug 21, 2019
    jaybee83, Rage Set and Papusan like this.
  47. ssj92

    ssj92 Neutron Star

    Reputations:
    2,446
    Messages:
    4,446
    Likes Received:
    5,690
    Trophy Points:
    581
    Ah I miss dual GPU cards.

    I had 295 and 690.

    The Titan Z and 295X2 are great examples that we could have a 2080 Ti X2 even today.

    I got mine to 4.5Ghz before on a different desktop. I'm expecting the same if not better results this time since I have a better motherboard.
     
  48. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,346
    Messages:
    6,824
    Likes Received:
    6,112
    Trophy Points:
    681
    Ha, I know, that wouldn't be on! That's NVidia's and the market's fault though, they're just not maxing out what is possible, which is...annoying.
     
  49. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,712
    Messages:
    29,847
    Likes Received:
    59,647
    Trophy Points:
    931
  50. makina69

    makina69 Notebook Consultant

    Reputations:
    141
    Messages:
    227
    Likes Received:
    361
    Trophy Points:
    76
    I wanted to ask if there is anything in the ASUS BIOS that is necessary to tweak in order for the OC to be more stable... or which settings are the most important to play with in the BIOS?

    asus gene xi z390 i9 9900k
     
← Previous pageNext page →