The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.
← Previous page | Next page →

    Nvidia RTX 20 Turing GPU expectations

    Discussion in 'Sager and Clevo' started by Fastidious Reader, Aug 21, 2018.

  1. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    Technically it was DirectX and Vulkan that provided ray tracing. Both Nvidia and AMD then released their own toolkits at the same time to accelerate development on workstation cards (as it was better accelerated on Tensor Cores, even if not in real time).

    The 1st implementation often isn't the best, but it always pays to be first in implementing technology.

    Even if Nvidia don't get RT perfect right now, they're now in the driver's seat and can direct the technology from here. As the only real-time RT hardware supplier right now, they can call the shots on how it develops and gets optimized, from game developers and engine developers right down to the lower levels like DirectX and Vulkan.
    This means they will know the direction RT will take ahead of time and can design future generations accordingly. The same thing happened with the GeForce3, which wasn't really perfected until the GeForce4, but for anything that implemented the new DX8 shader model at the time, the GF3 stomped all over older cards.

    The other thing to consider is that ray tracing will always be more costly than raster rendering. There's literally no "right" time to implement it because it will always be slower than the equivalent raster-rendered game.
    e.g. maybe the next-generation "3080 Ti" in 2020 can run ray tracing at 4K@60fps in current 2018 games. But then people will be saying "why can't it do 4K@120fps" like raster games. Then two years after that the argument will just turn to "why can't it do 8K@120fps", etc.

    Either rip that band-aid off now and get it done with, or we'll be stuck with raster rendering for the rest of time.

    And gosh darn it! I want working mirrors in my racing games already! :D
     
    Vistar Shook, Papusan and hmscott like this.
  2. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yes, both Nvidia and AMD, and Microsoft and others, were working on this in the background for quite a while before releasing the developer tools and GPUs to start creators on the path to making games.
    That is just not so in real life. Where do you get that idea?

    I've been on the cutting edge of a lot of technology, and things move incredibly slowly. I have worked with pioneers in new technology; they are all no longer where they were, and the technology they worked on is still evolving.

    VR and AI have been around forever. I've got stuff in boxes that was released in the '80s and '90s, and those companies are no longer around.
    Again, where do you get these ideas? So not true.

    Nvidia isn't in control of anything. They have a partial "RT" implementation in hardware that isn't going to be successful at 100% RT; they said it themselves, this is fake / hybrid RT sprinkled through games so as to not completely kill performance.

    Nvidia has some new eye candy and has bamboozled people again; what's new about that?
    Nvidia has put out a 1st attempt, and it's easy to knock down the 1st attempt, especially when it's rushed out and not ready for prime time.
    So true, and "no right time" applies to "right now" as well. :D
    Stick with the $500 used ($650 new) 1080 Ti's in desktops and GTX in laptops; they'll do you fine while Nvidia, AMD, Microsoft, etc. get things working well enough to ship in a couple of years.
     
    Last edited: Aug 29, 2018
    Stooj likes this.
  3. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
  4. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    My bad. Not for everything in life, but I meant for graphics technology.
    Both sides have been first to many things which have become dominant. From Nvidia's side, G-Sync comes to mind. They also drove AA features heavily which allowed them to stay somewhat competitive even when ATI had them beat many years ago.

    That being said, I must admit that it helps enormously when you're the dominant player in said market. e.g. AMD implemented 64-bit x86 instructions first, but it didn't really pay off until later.

    To a degree, they've already done it. As per your video link, Nvidia had BF5 developers working with slow Volta cores from the beginning, then once Turing GPUs were available, optimisations were made based on performance of Turing cards. Assuming this has happened with other RTX enabled games, they're basically driving optimisations already.

    It doesn't really sound like "hacking away" at RTX, as if you're losing anything. Basically, where it is dialed back is ray distance and secondary/tertiary ray projections, both of which are directly implemented by DirectX/Vulkan purposely to stop infinite reflections. If BF5 does indeed implement options for varying ray count and bounces, then at least it should scale into future generations of GPUs.

    One thing mentioned in that video (around 13:40), which is rather significant from a performance perspective, is that BF5 is not running RT and rasterization in parallel. They are ray-tracing after the raster render is already complete, which blows out frame times significantly, especially since you can't denoise that output until it's finished. If Nvidia's "Turing Frame" time graph is to be believed as the "ideal" render pipeline, you're looking at something like 30% of the frame time being occupied by the RT process if you were to process linearly.
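
    To put rough numbers on that (a back-of-the-envelope Python sketch; the ~30% RT share comes from the paragraph above, while the 16.7 ms raster budget is just an assumed 60 fps target, not anything measured):

        # Frame-time arithmetic for serial vs. overlapped ray tracing.
        RASTER_MS = 16.7   # assumed raster render time per frame (60 fps target)
        RT_SHARE = 0.30    # RT portion of total frame time if processed linearly

        # Serial pipeline: RT (plus denoise) only starts after the raster pass.
        rt_ms = RASTER_MS * RT_SHARE / (1 - RT_SHARE)  # RT time so it is 30% of the total
        frame_serial = RASTER_MS + rt_ms

        # Idealized overlapped pipeline: RT work hides behind raster work,
        # so the frame costs roughly max(raster, RT) instead of their sum.
        frame_overlapped = max(RASTER_MS, rt_ms)

        print(f"serial:     {frame_serial:.1f} ms (~{1000 / frame_serial:.0f} fps)")
        print(f"overlapped: {frame_overlapped:.1f} ms (~{1000 / frame_overlapped:.0f} fps)")

    With those assumed numbers the serial path lands around 24 ms (~42 fps) while the overlapped path stays near the 16.7 ms (60 fps) raster budget, which is roughly the kind of frame-time blowout being described.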

    It may be the case that games which implemented RT part-way through development find it much more difficult to do parallel RT processing due to the way their engines are built. Hopefully engines designed from the ground up for that approach will run a good chunk faster (i.e. Unreal 4, Unity, etc.).
     
    Vistar Shook, Falkentyne and hmscott like this.
  5. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Nvidia is driving their own hacked hybrid version of RT, and optimization is further hacking it away from ideal. That's what happens when a technology isn't ready for prime time: there is no headroom to play with, and Nvidia has already lopped off the resolution to "make it happen" @ 1080p.

    We'll see how much people "love it". If they've bought it they usually talk it up - who wants to admit to buying a pig in a poke - but if reviewers review it independently, which looks extra hard to find this time what with Nvidia picking the reviewers, maybe we will get a perspective of the truth.
    http://forum.notebookreview.com/threads/nvidia-thread.806608/page-62#post-10787971

    It's still too early to tell just how bamboozled people are, it'll take time to come out. More than anything, RT effects seem to be more of a distraction than an enhancement.

    Either way it could work out well for Nvidia: they decimate their huge stockpile of unsold 10-series GPUs, and everyone else gets stuck trying to sell the old cards no one wanted or the new cards no one wants.
    http://forum.notebookreview.com/threads/nvidia-thread.806608/page-61#post-10787604

    If the mountain of unsold 10-series GPUs drops in price and pulls away too many RTX buyers, that would be bad.

    If disappointment in RTX cost / performance causes people to switch to 10-series GPUs, that could cause the 10-series price to go up again - and then neither generation sells enough to whittle away at the two mountains of GPUs.

    That could end with prices slashed for both generations to keep Nvidia from getting stuck with *2* mountains of GPUs.

    It should be interesting. Maybe for a while Nvidia 10 series GPU's will actually be affordable again. :D
     
    Last edited: Aug 30, 2018
    Stooj likes this.
  6. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    7nm Vega would likely struggle against the 1080 Ti, to be fair. When Nvidia shrinks this to 7nm it will fly forward.
     
    Vistar Shook likes this.
  7. RampantGorilla

    RampantGorilla Notebook Deity

    Reputations:
    72
    Messages:
    780
    Likes Received:
    313
    Trophy Points:
    76
    Read about power gating. It's very probable that for laptop GPUs Nvidia will use dies with defective RT and Tensor cores and will disable those portions of the chip, leaving behind the CUDA cores.
     
    hmscott likes this.
  8. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,691
    Messages:
    29,824
    Likes Received:
    59,553
    Trophy Points:
    931
    NVIDIA : RTX 2080 Ti, 2080 & 2070 Are ~40% Faster vs Pascal in Gaming
    Expect a Larger Performance Gap Between RTX 2080 Ti & GTX 1080 Ti vs. RTX 2080 & GTX 1080...

    With a stripped-down, aka (castrated), GTX 1180 graphics card for notebooks vs. the RTX 2080 for desktops... maybe the performance hit won't be as hard, because Nvidia most likely won't go for RTX branding on mobile graphics. Time will tell.

    Edit.
    My guess... The successor to the 1080 Max-Q will perform like a regular GTX 1080, and the mobile 1180(N) will land around 8-10% below the 1080 Ti, or best case equal.
     
    Last edited: Aug 30, 2018
    Vistar Shook likes this.
  9. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    IDK if they would use defective dies, create dies without those areas, or simply disconnect the RT / Tensor core sections on otherwise good dies.

    However it's done, that would still be a high power load like the current 1080, plus the added CUDA cores, though perhaps with better power efficiency from the newer process. Since the 1080 Ti didn't have a mobile equivalent in the 10-series models, maybe there will be a similar limit that falls somewhere between the 10-series 1080 and the 20-series 2080? Not a full 1:1 2080 CUDA count in mobile is what I am thinking.

    No official news yet, so it's all down to speculation and hope. :)
     
  10. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    So, "not now" then? "Later" for the buy recommendation on RTX?
     
  11. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Wait and see. Upgrading to the very next generation normally doesn't make sense from a purely economic standpoint anyway; it's still likely a great upgrade for those on the 7xx and 9xx generations.
     
    TheDantee, Papusan, hmscott and 2 others like this.
  12. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    So, first is the 7nm Vega performance. That is expected to give around a 35% performance increase. Depending on the task, that does put it even with a 1080 Ti. But it also won't ever be a consumer chip, may not be produced in volume, and is only going to commercial customers to recoup some of the 7nm R&D. So it's still nothing in terms of competition, but I wanted to clarify that the claimed performance does put it equal with the 1080 Ti, not struggling, allegedly.

    On the Nvidia shrink for this, we are looking at roughly 35% more performance from the TSMC process node. This is because 12nm at TSMC is just a refined 16nm process. So, if a straight die shrink occurs without increasing the number of transistors, we could see the performance boost come mostly from increased frequency. If they keep the die size, thereby packing more transistors onto the package, we may see more performance from transistor count rather than a frequency increase. Even with that, we are looking at potentially under a 50% performance increase, and 135% of 31 frames per second only amounts to moving it to around 41 frames. If they sprinkle RT in lightly enough to get 60fps, then you move to 80fps, or you can sprinkle harder. Either way, I do not have faith in it even at 7nm, just that it starts being viable by then due to finally having something a bit more powerful to work with.
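
    Quick sanity check on that arithmetic (a throwaway Python sketch; the 35% node gain and the 31/60 fps starting points are the figures above, not measurements):

        # Scaling the example frame rates by the claimed ~35% node gain.
        node_gain = 0.35  # rough gain attributed to the 12nm -> 7nm TSMC shrink
        for base_fps in (31, 60):
            scaled = base_fps * (1 + node_gain)
            print(f"{base_fps} fps -> ~{scaled:.1f} fps")
        # 31 fps -> ~41.9 fps and 60 fps -> ~81 fps: faster, but still nowhere
        # near the 4K / high-refresh raster targets people are used to.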

    Also, keeping the die the same size as now doesn't make sense with current yields, which suggests that to do so, it would be better to wait for EUV, which can reduce the quad and sextuple patterning back down to dual patterning, potentially greatly increasing yields, and also adding a possible 15% from process efficiencies over DUV. This puts the earliest Nvidia 7nm in 2020, most likely. And that coincides with the rumor of a big-die Navi 20 coming out that same year, likely on 7nm EUV, considering the time frame and TSMC's adoption of EUV volume manufacturing next spring, if they hit their mark. This is with AMD abandoning GCN for super-SIMD. If the performance boost from design changes, plus the performance boost from the die shrink and process refinements, happen, and AMD can execute (that caveat always needs to be thrown in), then the question is whether AMD can catch up on mainstream performance, not on RT, because RT will still be niche at those frame rates and usage levels, although more ready than it is currently to get out the door.

    It is a complex question on the GPU side where this will wind up, and whether this is a distraction, a power grab, or a genuine pivot. But, for the reasons above, I disagree with your assessment.

    Edit: And to explain what I believe Navi 10 performance will be, using similar metrics: you are looking at 35% from going to 7nm DUV next year, plus any boost from not being on GF 14/12nm, and some performance from architectural differences, like super-SIMD and abandoning GCN, which left large parts often unused and was sub-optimal for current software implementations at times. Even though old leaks said it targeted 1080 performance but was a small/midrange chip to replace Polaris (both can be true), the 1070/Vega 56 is about 35% more performance, and the 1070 Ti/Vega 64 is about 50% (the 1080 is about 56% more). So, it will likely fall in between the 2060 (which is like a 1070 in performance, most likely, per the deck slide) and the 2070 (1080). That, if the price is right, like around its replacement product the RX 580, could still earn it a decent place among general buyers. Time will tell on that, though.
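
    To make that positioning concrete, here's a small illustrative Python sketch; the percentages are the estimates quoted above, the RX 580 baseline is my assumption, and none of this is benchmark data:

        # Hypothetical Navi 10 positioning relative to an RX 580 baseline (1.00x).
        targets = {
            "GTX 1070 / Vega 56":    1.35,  # ~35% above the baseline (per the post)
            "GTX 1070 Ti / Vega 64": 1.50,  # ~50% above
            "GTX 1080":              1.56,  # ~56% above
        }
        navi10_node_only = 1.00 * 1.35  # ~35% from the 7nm DUV shrink alone
        print(f"Navi 10 from the node alone: ~{navi10_node_only:.2f}x an RX 580")
        for name, mult in targets.items():
            gap = (mult / navi10_node_only - 1) * 100
            print(f"{name}: {gap:+.0f}% away")
        # Architectural gains (super-SIMD, leaving GCN) would have to cover the
        # remaining gap to reach 1070 Ti / GTX 1080 (i.e. roughly 2070) territory.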
     
    Last edited: Aug 31, 2018
    hmscott likes this.
  13. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    That's a stock 1080 Ti, which then typically has more overclocking headroom in it, and you're thinking best-case performance. That's with a process advantage, 2.5 years late. If that's not struggling I don't know what is.
     
  14. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    You're confusing the company struggling on GPU design with the design itself struggling. Yes, I was comparing stock performance, which includes stock boost, on the 1080 Ti to stock performance on that card, but many who overclock their cards to bench don't run them full tilt all the time. So there is nuance missed in your statement.

    Further, my explanation of Navi performance shows AMD gave up competing at the high end. That isn't struggle, that is defeat, at minimum, until 2020 before trying to compete at the high end, which is 4-5 years of no competition at the high end. That is already built into my statement.

    Finally, and most importantly, it shows you do not disagree with my assessment on Nvidia and performance of 7nm, meaning you ceded the primary point I addressed.

    Sent from my SM-G900P using Tapatalk
     
  15. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Factory overclocked cards do exist and they have had the same arch for how long now?

    So my main point is that even if AMD wanted to capitalise, they can't in a short time period. Even if they eventually tried, ray tracing will likely be 2nd gen on 7nm and likely usable at higher refresh rates at 1440p, and by then it's too late.

    Vega will struggle vs a 1080 Ti at the same power, and it's likely still more expensive, something people certainly care about.
     
  16. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    For factory overclocks, that doesn't mean much in the Pascal arch and you know it. Boost set by core temperature matters. Binning of chips matters. But the reported stock and boost clocks aren't what the card actually boosts to, making that point weaker than if it were made about any generation before Pascal.

    Further, Vega 20 at 7nm is NOT a consumer card, will likely never be sold to consumers, and is likely going to ship in small numbers to corporations that can utilize its strengths, of which game processing is not one (they say the card is AI focused, but no information on competition or benchmarks against Tensor cores or Google's AI chips has been provided). So it isn't even proper to compare 7nm Vega to any of Nvidia's offerings except to say that, potentially and on paper, the performance would match a stock 1080 Ti. Beyond that, AMD has no product competing against a 1080 Ti, instead only having the Vega 64 against a 1070 Ti to 1080. Because of this, there is no competition for the 1080 Ti, the Titan Xp, the 2080, the 2080 Ti, and, if including the Titan V as a consumer card, that as well.

    As to aspersions on 7nm, on what basis are you making your assumption of 1440p at 60 frames? Also, what they are doing for ray tracing isn't full, real ray tracing. What you are talking about is the adoption and integration of GameWorks ray tracing and the use of DLSS to fake the data better. Am I correct that this is what you are saying, the adoption of Nvidia's implementation? If so, how do you get past the fact that you are looking at only 35-50% gains from known process enhancements, and that the extra will go toward using those elements more widely instead of resolution scaling? That is a disconnect in where we think this is heading, but I don't believe we disagree on process enhancements. So where is the extra horsepower coming from to accomplish this feat?

    As to Vega, I already explained that Vega 20 isn't a consumer chip and I've seen nothing saying it is coming to gaming. I explained Navi 10, which is, and it will use DUV 7nm from TSMC. I explained Navi 20, which is rumored to be a big-die variant for consumers, will be EUV, and rumours have, at the earliest, a 2020 release, which means no competition from AMD against the cards mentioned above until 2020 at the earliest. There is a rumored Navi 14, but nothing is known about it, including release dates or expected time frames.

    Judging from your comments, I think you are misjudging my analysis of performance as trying to do an AMD vs Nvidia thing, suggesting AMD is in the fight. I'm not. I'm doing a pure analysis of what I believe the performance of these products is or will be based on public information. I hope that clears up any confusion on that matter, if confusion existed.

    With that said, could you please explain the basis of your projected performance, because I'm not seeing how you get to your projection.

    Sent from my SM-G900P using Tapatalk
     
  17. Rahego

    Rahego Notebook Consultant

    Reputations:
    77
    Messages:
    267
    Likes Received:
    136
    Trophy Points:
    56
    [IMG]

    more than 100 words - about RTX :)
     
    ajc9988 and hmscott like this.
  18. ole!!!

    ole!!! Notebook Prophet

    Reputations:
    2,879
    Messages:
    5,952
    Likes Received:
    3,982
    Trophy Points:
    431
  19. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,691
    Messages:
    29,824
    Likes Received:
    59,553
    Trophy Points:
    931
    I posted here... I'm sick of this Max-Q branding - moniker (Nvidia Thread) [IMG]

    Of course there will come N versions. But everyone knows Max-Q is Nvidia's and the OEM manufacturers' (ugly) baby. Too many thin and flimsy notebooks out there... And Nvidia knows it. Many types of thin notebook models mean higher sales if they go for Max-Q in the beginning. A kick start for Nvidia's latest scam.
     
  20. XMG

    XMG Company Representative

    Reputations:
    749
    Messages:
    1,754
    Likes Received:
    2,197
    Trophy Points:
    181
    Answer - nope.

    Plus, the article states this "This is great news for gamers who want something more portable than a tower because NVIDIA’s Max-Q designs are the closest you can ever get to desktop-grade performance in a laptop." which is utter nonsense.
     
    hmscott, bennyg, ole!!! and 1 other person like this.
  21. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Very true, it's a stupid claim since many good light laptops can fit a 1060 non-Max-Q version and have pretty good cooling. Some even handle the 1070 in a light chassis. Well, lighter than a desktop replacement anyway.
     
    hmscott likes this.
  22. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Well, the full-fat ones are the closest, but it's all about the form factor you are after; the statement just shows a lack of understanding of terms/tech.
     
    Vistar Shook likes this.
  23. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    In other news for mobile RTX GPUs, one thing brought to light about the desktop card TDPs being increased is the inclusion of Type-C VirtualLink.
    VirtualLink needs to be able to provide a minimum of 15W, with an optional 27W, which can almost solely explain the increased card TDP in desktop use. In laptops that power requirement would be supplied by the notebook's power delivery directly, so the TDP of the GPU core itself would be similar to Pascal.

    It'll be interesting to see how the TDP change shakes out. The shrink from 16/14nm -> 12nm isn't really enough to account for the efficiency improvement without other things going on, especially if the TDP includes taxing all 3 core types (FP32 + RT + Tensor).
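
    Putting rough numbers on that power split (a simple Python sketch; the 215 W board figure and the 27-30 W VirtualLink budget are the numbers floating around this thread, not a spec breakdown):

        # Board-power headroom if the USB-C VirtualLink port is fed from the card.
        BOARD_POWER_W = 215   # advertised desktop RTX 2080 board power (approx.)
        VIRTUALLINK_W = 30    # worst-case VirtualLink budget (15 W min, 27 W optional)

        core_budget_w = BOARD_POWER_W - VIRTUALLINK_W
        print(f"Left for GPU core, memory and VRMs: ~{core_budget_w} W")
        # In a laptop, the headset power would come from the system's own power
        # delivery instead, so the GPU-side budget would look closer to Pascal's.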
     
    Vistar Shook likes this.
  24. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    That "minus" from the total TDP, which I heard as a full 30w, from 215w leaves 185w. Given the increase in CUDA cores and other support hardware - not including the RT / Tensor cores?? - it makes me wonder what duty cycle is being used to define the TDP. Is that a maximum for only the 50% of the die we are used to measuring?

    What if the Tensor, RT, and CUDA cores are all running at full tilt? It's gotta be more than 185w, or the 215w as advertised.

    IDK how Max-Q'ized the Tensor / RT 50% of the die can be made... without being useless.

    RTX 2080 / RTX 2070 100% die power draw will not fit in 90% of laptops - and none of the popular thin laptops. Max-Q'ized they may fit in more laptops but be useless for ray tracing.
     
    bennyg likes this.
  25. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    You'll have to see the desktop full breakdowns before you can even guess.
     
  26. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    Duty cycle is a good way to put it. What the card is capable of at operating voltage and 100% core load (aka a power-virus workload), and what the card is limited to by its power limit, are two different things. With GPU Boost 3.0 the card can basically throttle itself back to wherever they want, to achieve whatever performance or marketing target Nvidia desires...
     
  27. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Not really, it's still faster than it would be before boost, so it's all a gain. In terms of normal gaming the extra units would go dormant.
     
  28. RampantGorilla

    RampantGorilla Notebook Deity

    Reputations:
    72
    Messages:
    780
    Likes Received:
    313
    Trophy Points:
    76
    That's not how TDP works. The TDP of a chip is how much thermal energy per second the chip produces. The power that VirtualLink draws has nothing to do with the GPU's TDP and doesn't affect it anyway. It's handled by a separate buck converter connected to the power rails from the PSU in a desktop or the charger in a laptop.
     
  29. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    Nvidia uses TBP, so it accounts for everything on the board--GPU core, memory, VRMs, LEDs, fans, Virtual Link, etc.
     
    Vistar Shook likes this.
  30. Papusan

    Papusan Jokebook's Sucks! Dont waste your $$$ on Filthy

    Reputations:
    42,691
    Messages:
    29,824
    Likes Received:
    59,553
    Trophy Points:
    931
    Don't mix up the names or whatever you're trying to do... TGP is for MXM (the entire board incl. vRAM etc.). TDP, aka for BGA, covers only the die.

    Edit. And if we talk about laptops with MXM designs... Max TGP will most likely be vendor dependent, same as before. For Pascal it was around 200w. How high it will be this time, we will have to see when it comes.

    On top of that, GDDR6 is more power efficient. Aka the cores will be more powerful vs last gen (Pascal) <if we compare them at the same design power - MXM graphics>.
     
    Last edited: Sep 5, 2018
    Vistar Shook likes this.
  31. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    Yes, TDP is supposed to be about the heat a chip generates, but most of the time it is used to describe power usage.

    If you look at most of Nvidia's own white-papers, they almost universally use TDP as a power consumption metric rather than thermal energy. For example, when talking about Tesla cards their whitepapers refer to 50-60% TDP being the optimal performance point and specifically that they "consume 300W TDP", as opposed to "generate" it.
     
    Vistar Shook, bennyg and hmscott like this.
  32. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    GPUs go by whole-board TDP vs CPUs that go by CPU / socket-level TDP. So anything added to the GPU "board" that takes power will get counted in the GPU board-level TDP.
     
    Last edited: Sep 5, 2018
  33. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Nvidia is more aware of this because their products generally ship as cards which must come in under a certain power limit, so they control it at a higher level as a matter of course.
     
    Vistar Shook likes this.
  34. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    So I'm hearing the Intel 9000-series revision of Coffee Lake will not have Hyper-Threading on any processor outside of the high-end i9 series. So we're looking at i7s with 8 physical cores, while the i9 will have 8/16 due to continued Hyper-Threading.

    Would the physical-core-only models work better with the GTX setup, or would it need those 8/16 HT cores to have enough power to keep up all that AI RT performance?
     
  35. GrandesBollas

    GrandesBollas Notebook Evangelist

    Reputations:
    370
    Messages:
    417
    Likes Received:
    563
    Trophy Points:
    106
    You really are asking whether the 9000 series will bottleneck the GPU. Multiple sources on the internet show that CPU physical core count and turbo clock matter more for performance than virtual cores if your primary task is gaming. If your primary task is parallel processing, Hyper-Threading may be more appropriate.

    Here's one such source:

    https://forums.anandtech.com/threads/do-games-use-hyperthreading.2525322/
     
  36. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Well, a bit more than that. The higher clocks, even with the removal of HT, could end up meaning higher power consumption and temps.

    So while not a processing bottleneck, we could be getting near a thermal and power-consumption limitation that will need better solutions for the continued progress of thinner laptop designs.
     
  37. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    In gaming HT has less of an impact. Generally 8 cores will be better than 6 cores / 12 threads.
     
    Vistar Shook likes this.
  38. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    That is what I'm thinking they are getting at: the i7 being their gaming enthusiast level and the i9 being for production-type things.

    Maybe such changes will be beneficial going forward, since the 9000 series is set to be closely compatible with the 8000 series, needing just a BIOS update. An 8-core 9700K could show off the RTX 2080 quite well.
     
  39. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    The 8700K won't hold back the RTX series either.
     
  40. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Intel Core i9-9900k 8c/16t, i7-9700K 8c/8t, i7-9600k 6c/6t 2nd Gen Coffee Lake CPU's + Z390
    http://forum.notebookreview.com/thr...0k-6c-6t-2nd-gen-coffee-lake-cpus-z390.811225
     
  41. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    The whole i-number thing means even less now.
     
  42. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,201
    Messages:
    39,332
    Likes Received:
    70,613
    Trophy Points:
    931
    I think you are spot-on here and I am not even considering the new GPU line-up unless or until I can see evidence of it truly destroying 1080 Ti in benchmarks and games that are not tweaked for the sole purpose of favoring ray tracing measurements. The RTX 2080 Ti (or whatever they end up calling it) is severely overpriced, so it needs to bring performance gains across the board in equal or greater proportion to be worth buying. You totally nailed it on the GameWorks CrapWorks gimmick. That has always been a real joke as far as I am concerned.

    The jury is still out on this, but I think calling it a gimmick at this point seems like a pretty fair assessment. Until we have more evidence to the contrary, I think that is the only conclusion I can draw from everything I can see.

    Yeah, this is 100% true. In fact, they are an army of one. Any number times zero is still zero, and that is how much clout AMD has had in gaming and benching circles for a very long time. They haven't released anything on the graphics side of the house that was remarkable since first generation GCN technology, and that was both buggy and a short-lived run in the sun for them as a graphics performance leader. To that extent, NVIDIA's clout is gained by default.
     
    Last edited: Sep 12, 2018
    hmscott, Vistar Shook and Papusan like this.
  43. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    37,201
    Messages:
    39,332
    Likes Received:
    70,613
    Trophy Points:
    931
    Kind of like this...
    Nikita Khrushchev.jpg
     

    Attached Files:

    Vistar Shook and Papusan like this.
  44. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    The 7970M was great, and then Enduro shot it down.
     
    Mr. Fox and Papusan like this.
  45. Support.3@XOTIC PC

    Support.3@XOTIC PC Company Representative

    Reputations:
    1,268
    Messages:
    7,186
    Likes Received:
    1,002
    Trophy Points:
    331

    I keep holding out hope that AMD will pocket enough money from dominating the console market to really push hard and come out ahead. But it keeps not happening.
     
    Mr. Fox likes this.
  46. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    You need to change that to the server market. You also need NUMA support from independent software vendors (game developers) so that multi-die GPUs can happen, whether MCM or interposer based, and a minimum of 3+ years in the current environment.

    In other words, you need to wait until Nvidia puts out their own MCM multi-die card they are working on and forces NUMA adoption for graphics cards so that game developers adopt the standard.

    Even then, Navi is the first departure from GCN in generations of cards, and we have to see how well super-SIMD works. It has the potential to increase parallel workloads and async compute over any current implementation. But that still means nothing if it cannot scale!

    Read my posts in this thread from weeks ago. I think I gave more depth on that. Read the comments from the AMD CTO saying programmers wouldn't adopt NUMA, so no go for multi-die GPUs for gaming, but those workhorses are coming for commercial use (gamersN article, IIRC). This, and other articles, put Navi 10 at 1080 speeds, position it as a Polaris replacement (so a mid-range card), and make a monolith unlikely until 2020 with Navi 20, which coincides with volume EUV production at TSMC. That makes sense because EUV helps increase yields by reducing patterning, which can reduce the defect density essential for monolithic dies.

    So, if DLSS and GameWorks RT are not really picked up by then, it is a matter of how well super-SIMD can do traditional gaming workloads, which we will find out in 2H 2019. That doesn't say whether the monolith will be competitive with Nvidia or not; I'm just giving info and timelines. But after seeing the 2000 series, it is a matter of waiting for a true successor card on the Nvidia side, which will also likely come in 2020 with 7nm EUV at TSMC (so neither company will really have a process advantage).

    This is why I say there is no GPU competition until 2020, but I have revised my earlier stance that Nvidia has driven away to where AMD is no longer in the picture; rather, they are just not relevant at the high end of graphics for another couple of years, minimum, if at all.

    Then you have Intel with a dGPU, at the earliest, in 2020, which is a gutted Intel 10nm process going against 7nm EUV. Intel said they will not introduce their EUV until 2021, likely with 7nm, which is the equivalent of Samsung's and TSMC's 5nm or 3nm, which is why TSMC and GF, before GF got out of the race, were looking at skipping 5nm altogether since there is so little benefit in that node. So, Intel will bring out 7nm about the time Samsung and TSMC reach 3nm with GAA, likely using horizontal nanosheets. That means Intel will not reclaim their supposed process lead now that it is lost. But Intel has amazing engineers and can stand on their uarch, so they should be fine, and could, if releasing on their 7nm EUV in 2021, disrupt the GPU market. We just have to wait and see.

    But, the reason I say the server part above: AMD's old estimate from Q2 earnings was 4.5-5% of the server market. That was before the known Xeon shortage, HPE recommending AMD, testbeds finishing up in Q3 and Q4, and Intel's 14nm capacity shortage, which isn't due to higher demand but instead because Intel thought they would have more being produced on 10nm by this point. They traditionally move the high-margin, high-performance parts to the new node, then pull in the older products, like what they were using for chipsets, to the last node that has capacity freed up by moving those other components to the newest node, so they can decommission the oldest manufacturing processes and start getting ready for the next big process change. Here, because 10nm wasn't ready, there is a traffic jam on 14nm and their capacity is very constrained, causing hardware shortages. Intel did it to itself by not planning properly, while the underlying issue of EUV not being ready is not Intel's fault and is beyond Intel's control.

    But Intel said that its mission is holding AMD to 15-20% of the server market, which is the estimate for next year. Considering the server market is so large, the windfall from going from less than a percent to 15-20% will be huge. That money can pay for a lot of R&D. Let's hope they use it correctly.

    Sent from my SM-G900P using Tapatalk
     
    Chastity and Support.3@XOTIC PC like this.
  47. Support.3@XOTIC PC

    Support.3@XOTIC PC Company Representative

    Reputations:
    1,268
    Messages:
    7,186
    Likes Received:
    1,002
    Trophy Points:
    331

    You've breathed life into my hope. AMD, don't drop the ball.
     
  48. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    And sorry to be devil's advocate and gut-shoot that hope, but AMD has enough debt that there's still a motherlode of money to be paid back before there's the freedom to spend R&D on speculative feet in doors which are currently shut to them... which is pretty much what graphics cards and laptops are right now.
     
  49. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    Stop speaking without a basis. The debt due 2018 that would have sunk them was refi'd. They have been paying down the most expensive debt and have increased R&D spending every quarter since sometime early in 2017, with significant jumps in the second and third quarter of 2017, and have continued that throughout 2018, something you would know if you actually watched their financial figures instead of repeating that tripe.

    So, yes, they have the freedom to spend all they want on R&D so long as they can pay their liabilities as they come due. Yes, 2022 or 2023 is around the next time we have to worry they might fold. That means we have 4-5 years of not worrying. Now, going to 5% of the server market by Q4, up from less than 0.5% at the start of the year, is impressive, roughly a 10x increase in income from the server market. That 10x could triple to quadruple next year according to Intel. That means AMD is looking at 30-40x what they previously got from the server market by some time next year. That, in conjunction with your lack of understanding of their finances, means they will have a windfall. Most suspect the bulk of R&D was being spent on 7nm and making sure that went fine. Considering AMD is sampling Zen 2 at least one quarter early (Su announced sampling to early partners last quarter in the Q2 earnings call), it suggests that the early samples powered on and worked (which rarely happens) and were mostly, if not fully, functional. This was something management was very excited about at AMD, which you should be too, considering that right after that, Intel said 10nm is delayed to holiday 2019 for mainstream and 2020 for server. That means not only is AMD on track for 7nm CPU designs, they are ahead and just haven't moved their schedule up to compensate. That is good for all of us.
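
    To put the market-share arithmetic in one place (an illustrative Python sketch; the share figures are the estimates above, and revenue is assumed to scale linearly with share, which is a simplification):

        # Rough multiples of AMD's server revenue relative to the start of the year.
        start_share = 0.5                   # % of the server market at the start of the year
        current_share = 5.0                 # % claimed by the end of Q4
        intel_cap_estimate = (15.0, 20.0)   # % Intel reportedly aims to hold AMD to

        print(f"Now vs. start of year: ~{current_share / start_share:.0f}x")
        for share in intel_cap_estimate:
            print(f"At {share:.0f}% share: ~{share / start_share:.0f}x")
        # ~10x already, and roughly 30-40x if the 15-20% estimate plays out.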

    Then you have to add in the revenues from market share gains in mainstream and HEDT; before last August they had NO CPUs in the HEDT space, meaning the market share there is all gravy. Financially, the chance of AMD going under now is slim to non-existent, unlike pre-Ryzen launch. Further, take an economics course to understand both how debt works AND the tax benefits of paying down debt out of profits, so long as the debt and interest rates are reasonable, in order to maximize profits. The debt/equity ratio can shift due to external factors, but has been used for a couple of decades to game the system.

    So, would you like to return to why hopes shouldn't be high on AMD on the graphics side? Because I have more information on that. For example, although super-SIMD was patented in the 2016 time frame, that would put it at 3 years from patent to use for Navi in 2019. Most outlets said Navi would still be a new GCN, with super-SIMD following in 2020-21. We know Navi 20 is the monolith in 2020, according to rumors. That means, potentially, one more gen of GCN, which would be horrible for consumers, IMO. If that is the case, then super-SIMD doesn't come until 2021 or 2022, which would likely push it to the 3nm node, where Nvidia will have the same process node to put out their new card, and that being two gens on from the 2000 cards for Nvidia. If Navi gives zero competition due to little to no change (and remember, AMD cannot do NUMA-based multi-die cards until Nvidia does, which will not happen until likely 2021 or 2022), then mindshare on the graphics side goes down even further, and you have to pray that Intel puts out anything good on the GPU side. Meanwhile, if instead Sony did require the departure from GCN, and Navi was made for Sony, then we get something fairly competitive in 2019, and 2020 was correct for super-SIMD, just the rumors that Navi wasn't super-SIMD are wrong. At this point, when AMD will incorporate super-SIMD is what is up in the air, not if. So putting that in context, that is why I said 3+ years, not next year, not in 2020, 3+. Why? Because that is the soonest Nvidia forces ISVs to develop their games for NUMA. Sure, if AMD invested in making the software see the dies as a single unit, used interposers with a Butter Donut topology, and built a scheduler into the driver to help adapt to NUMA without software vendors dealing with it, they might be able to get it to work earlier, but that assumes a lot, and I don't think they will put significant amounts into R&D to solve that issue, if I'm being honest. https://www.pcgamesn.com/amd-navi-monolithic-gpu-design?tw=PCGN1

    Please try again and make sense when you do.
     
  50. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Once they have a base they could look to get back in, but that's coming from a fair way behind.
     
    ajc9988 likes this.
← Previous page | Next page →