The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Nvidia RTX 20 Turing GPU expectations

    Discussion in 'Sager and Clevo' started by Fastidious Reader, Aug 21, 2018.

  1. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    So when in 2019 might the RTX 20 Clevo laptop GPUs be coming out? Will the work done on the Pascal series expedite the process?

    What kind of increase will we be looking at price-wise if they are seen more as another tier above the Pascal 10 series?

    Will it actually be another leap in performance like last time or will it be more in the capability department due to the Ray Tracing?

    Thoughts?

    Will laptops even benefit from such performance increases?
     
  2. bennyg

    bennyg Notebook Virtuoso

    Reputations:
    1,567
    Messages:
    2,370
    Likes Received:
    2,375
    Trophy Points:
    181
    Looks to me like modest performance gains will be had across the board from memory bandwidth and core count. Nvidia will not be stupid enough to release parts that are not definitively faster than last gen and able to justify the across-the-board price increase (and at a time when used crypto parts will only get cheaper).

    DX12 async compute apps will benefit heavily, as it seems Nvidia have made an effort this time.

    Ray tracing is an added tech, and how worthwhile it is in real implementations (as well as its performance impact) is yet to be seen, but it won't be universal as it'll for sure be a GameWorks feature.

    But Nvidia's history in getting technological innovations deployed is patchy, so as always the risk is borne by the early adopters.
     
  3. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    A fair chunk of die area has gone to the ray tracing and AI components.
     
    hmscott likes this.
  4. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    From this photo of an Nvidia presentation slide, it appears as though 50% of the new die real estate has been split between AI (Tensor Core) and Ray-tracing (RT Core), leaving the Shader / Compute traditional functionality with about the same area as the previous generation.
    [Attached image: bigg.JPG, Nvidia presentation slide of the Turing die] Source

    It's hard to measure, but it looks to me like the RTX shader and compute section has less area in comparison to the previous Pascal die shown.

    Those Tensor Core and RT Core areas are wasted space for current games, and for me RTX is something I would disable in a game to get better performance, like I disable GameWorks hair effects / etc. now.
     
    Last edited: Aug 21, 2018
    Mr. Fox likes this.
  5. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    So I wonder what the performance will be. Hearing those power needs, I'm wondering if they'll even be able to put these in laptops. Clevo desktop replacements maybe, but possibly not the slimmer gamer models.

    Interesting. They've been saying that ray tracing is a lot more efficient than other methods, so maybe they'll have a good boost. Once games have ray tracing implemented, that is. Even that will take a while before benchmarks include it.
     
    hmscott likes this.
  6. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Ray tracing isn't replacing the current shader/compute model, otherwise there would be no shader/compute section in the new die...

    There seems to be additional performance in the traditional area too - Nvidia failed to demonstrate the improvements compared to previous generations.

    Look at all that die real estate taken up that could have been dedicated for real overall gaming performance...

    The RT features are add-ons, eye-candy, like the other GameWorks crap that slows down games. Except this time Nvidia added a hardware assist that is proprietary to their products.

    Nvidia is trying to lock in their lead by redefining the game, given AMD is always nipping at their heels and Intel is once again trying to get it together to put out their own discrete GPU.

    50%+ is a lot of die space to dedicate for eye candy that most of us will end up turning off to reduce power / heat generated by those areas of the die to improve gaming performance. :)

    Edit: I hope there is a way to completely disable / power off the Tensor Cores and RT Cores when they aren't useful... that would be most of the time.
     
    Last edited: Aug 21, 2018
  7. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    A few of my thoughts:
    1. I suspect we'll be seeing models/announcements late this year. Maybe December to get in with the Christmas timeline or January for "back-to-school/work" type stuff.
    2. The perf/watt change (most important to laptops) isn't terribly large. Maybe 15-20% if we're lucky. The fab change isn't as drastic as Maxwell -> Pascal was so don't expect any miracles. The jump to GDDR6 will also account for a significant portion of that boost.
    3. The largest unknown is the RT cores. We don't have benchmarks yet so it's hard to know how impactful they'll be. RT really only makes sense at the high end anyway.
    4. The TDPs are increased on the desktop cards, but the safe assumption is that that's based on all cores being taxed (FP32 + Tensor + RT). For most games only the FP32 portion should get hit hard, so I suspect mobile cards will be able to squeeze into their current TDP brackets.
    5. Nobody knows quite yet if there will even be RT/Tensor cores in the mid-to-low range models (X60 and X50/Ti), which make up the bulk of the market. Chances are they'll be standard FP32 setups and as such a straight upgrade over their Pascal predecessors.

    Most people forget that Ray-Tracing is actually implemented at the API level.

    DirectX and Vulkan will both support native ray-tracing calls. All Nvidia is doing is offloading those particular jobs to specialised cores to speed them up significantly. AMD will likely do the same thing. So it's not a lockout like previous GameWorks features, which are implemented at the engine level.
    To be honest, assuming AMD isn't too far down the road for their "next-gen" design, Ray-Tracing is actually a good thing for them. AMD arch has always excelled at parallelisation and they can do exceptionally well if they can integrate ray-tracing operations into their existing compute units, which would allow much better allocation of resources and less wasted die space.

    The trick is, Nvidia is also pushing very hard for this to be the future of rendering. This is both a smart business move (if they push it before AMD then they have the next "killer" feature ahead of time which buys mind-share) and a good technological move (ray-tracing is the future and you can now scale 2 processor types instead of 1). That being said, if Ray-Tracing takes off too well, it also cuts off all older GPUs.

    Personally I'll probably end up with a 2080 Ti in my desktop rig, steep as the price is. Currently on a 980 Ti, so RT or not, I'll probably be doubling my GPU performance. Even so, it's primarily a VR rig, and ray-tracing can be hugely beneficial to VR performance if used correctly. There's a reason why most VR games have piss-poor lighting: most of the tricky lighting we do now either does not translate to simultaneous projection setups or is straight up broken.
     
    hmscott likes this.
  8. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,125
    Messages:
    11,571
    Likes Received:
    9,148
    Trophy Points:
    931
    ok so we are talking about REGULAR games here, which is gonna be the absolute majority by FAR for the foreseeable future. thus, no AI, no Tensor cores, no raytracing gimmicks supported.

    based on that, the specs indicate a 25-30% performance increase for each of the three new cards. that's it. the regular, run-of-the-mill 25% gen-over-gen increase we've seen for like...forever? :D

    soooo GAIZ! NOW is the time to go and get urselves 1080s and 1080 Ti cards for CHEAP! perfect example: 1080 Ti Asus Strix went from 870€ to 670€ in ONE FRIGGIN DAY on august 21st. and it's still gonna be the second fastest card on the market directly beneath the 2080 Ti; the regular 2080 ain't gonna beat it until games support raytracing, tensor cores and AI on a broad basis. not gonna happen until the next or even second-after-next gpu gen is out.

    mark my words ;)
     
    ajc9988, KY_BULLET and hmscott like this.
  9. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    25-30% sounds overly optimistic. The 2080 only has 15% more CUDA cores than the 1080 and if anything looks like it clocks lower on the core.

    The 2070 is even worse, only 12.5% more CUDA cores than the 1070 notebook, at lower core clocks.
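
    For reference, here's the core-count math behind those percentages: a minimal Python sketch assuming the announced CUDA core counts (2944 vs 2560, and 2304 vs the 2048-core 1070 notebook) and ignoring clock speeds entirely.

        # Back-of-the-envelope CUDA core comparison (announced counts; clocks ignored)
        pairs = {
            "RTX 2080 vs GTX 1080": (2944, 2560),
            "RTX 2070 vs GTX 1070 notebook": (2304, 2048),
        }

        for name, (new, old) in pairs.items():
            print(f"{name}: {(new / old - 1) * 100:+.1f}% CUDA cores")

        # RTX 2080 vs GTX 1080: +15.0% CUDA cores
        # RTX 2070 vs GTX 1070 notebook: +12.5% CUDA cores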
     
    KY_BULLET likes this.
  10. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Gotta tuck in all that RT and AI stuff somewhere.

    Honestly that should have stayed with the business graphic arts cards IMO, at least for the first generation.

    Putting all of that into these when it'll be another generation or two before full implementation by game design engines able to handle it is just gonna result in a bunch of half-baked products.
     
    hmscott and KY_BULLET like this.
  11. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Chicken and egg there I suppose, the artists are not going to bother if it's not out there to be used.
     
    Kigen and jaybee83 like this.
  12. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    True guess we'll see where things go
     
  13. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Indeed :) Nvidia do have more clout than ever in the gaming space.
     
  14. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    If the new leak about shader performance is anything to go on, things could be very much different:
    https://videocardz.com/77696/exclusive-nvidia-geforce-rtx-2080-ti-editors-day-leaks

    Either way, September 14th is the day we'll get benchmarks and we'll all know for sure.
     
    hmscott likes this.
  15. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Hearing some talk that all of this is akin to HairWorks. Fancy stuff that's usually the first thing you turn off to get better FPS. And with 120Hz and 144Hz screens becoming widespread, that is even more important nowadays.

    Will people go for that enhanced visual smoothness? Maybe just topping out at 1440p screens, at least for laptops that is.
     
    hmscott likes this.
  16. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Right now the demos are under 60 fps @ 1080p with ray-tracing enabled on the RTX 2080 Ti... soooo, I don't think 1440p with ray-tracing is possible on this generation at usable FPS...?
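
    As a rough sketch of why: if ray-traced frame cost scales roughly with pixel count (an assumption, and the ~50 fps 1080p baseline below is purely hypothetical), the extrapolation looks like this in Python:

        # Rough RT fps extrapolation, assuming frame cost scales ~linearly with pixel count.
        # The 50 fps @ 1080p baseline is a hypothetical illustration, not a measured figure.
        base_fps = 50
        base_pixels = 1920 * 1080

        for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
            print(f"{name}: ~{base_fps * base_pixels / (w * h):.0f} fps")

        # 1440p: ~28 fps
        # 4K: ~12 fps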
     
  17. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    That is what I'm saying. RT will be the first feature turned off to hit those 60fps-plus numbers for gamers.

    Not to say it isn't innovative. Just on gaming laptops the market might not be there when speed is key.

    But as you note give it a couple more generations and then maybe. Like whatever they release in 2020.
     
    hmscott likes this.
  18. JasperLee93

    JasperLee93 Notebook Consultant

    Reputations:
    83
    Messages:
    134
    Likes Received:
    105
    Trophy Points:
    56
    After watching the Nvidia stream, I was kind of disappointed about what to expect from the RTX cards. But the good news is, for anyone who wishes to buy Pascal, the prices will most likely drop.

    The fact that Nvidia was focusing so much on the RTX component throughout the unveil kind of goes to show how they want to market this new series of GPUs. They focused so much on RTX that they didn't even show how much better Turing is in normal gaming performance, only RTX performance.

    It wasn't like 2016, when Nvidia unveiled Pascal and showed off that a GTX 1080 could perform the same as two GTX 980s in SLI. The fact that they focused so much on how amazing RTX is and why every gamer should have it made me really skeptical.

    Spec-wise, while there seems to be maybe a 20-30% improvement judging by the number of CUDA cores as compared to Pascal, I don't think it will be worth the extra dollars; we all just need to wait for benchmarks. If Pascal still sells when Turing is out, chances are it will still be the best GPU architecture to buy in overall performance per dollar.

    And it's a bad way to say this, but because of the lack of competition, Nvidia is kind of becoming the Intel of GPUs; there isn't much competition even from AMD, so I don't think they will release anything soon that has significant improvements. Chances are, the next significant improvement is when AMD competes with Nvidia with the Navi architecture (if Navi is stronger than Turing).

    The price of the RTX 2080 is the price of a 1080 Ti. Kind of ridiculous. I would suggest staying away from RTX since not many games support it, so you're pretty much paying the early adopter tax.

    P.S. Nvidia saying Turing is 6X the performance of Pascal: I think it's only the RTX portion. It sounds too good to be true for the 6X to be overall.
     
  19. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    There is still some "confusion" as to what we are getting on laptops this GPU generation:

    http://forum.notebookreview.com/threads/nvidia-thread.806608/page-54#post-10784625

    "Perhaps the most intriguing tidbit from the announcement is that the aforementioned Sky models will support future GPUs such as the "GTX 1180, GTX 1170, RTX 1080, RTX 1070" due to their versatile MXM 3 slots. Had Eurocom mentioned these GPUs before Gamescom, then we would have been quick to label them as placeholder names. However, the reseller is explicitly mentioning these GPU names almost two full days after the public reveal of the desktop RTX series in Cologne.

    It's possible that Nvidia will introduce a different naming convention yet again for its next generation of laptop GPUs. At best, the diverging names could simply be an attempt by the chipmaker to better distinguish between its laptop and desktop GPUs since current mobile Pascal GPUs have the exact same names as their desktop counterparts. At worse, however, we could be seeing a relatively minor refresh for mobile gamers."
    https://www.notebookcheck.net/Euroc...with-Core-i9-9900K-and-i7-9700K.324038.0.html
     
    Last edited: Aug 22, 2018
    jaybee83 likes this.
  20. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Well I've also heard of a possible GTX 2060 that will not have the ray tracing processing parts, hence the continued GTX moniker.

    Wouldn't surprise me for laptops and desktops to diverge once again after Pascal with more functionality toward Desktop systems with the Ray Tracing abilities either cut down or eliminated from laptop designs.
     
    hmscott likes this.
  21. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    Yeah, but now they say there is an RTX1080 and RTX1070 for laptops...

    Check out the thread here for more info:

    http://forum.notebookreview.com/threads/nvidia-thread.806608/page-54#post-10784625
     
  22. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Well this whole card market is just growing more confusing.

    Will these 11-series cards be a new mid-range? Will some of those RTX cards be geared towards small hobbyists or freelance CGI customers?

    And it's making me wary about getting a laptop with a 1060 at the same time.
     
    hmscott likes this.
  23. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    That's a very short-sighted way to look at it. Nvidia (and AMD) know they can't just scale FP32 performance to infinity, so they're diversifying now. Same as AMD designed Ryzen to be built from CCXs because they know they have to build outward and not upward to beat Intel.

    Nvidia and AMD both need to find new ways to do gaming rendering and Nvidia is backing the Ray-Tracing horse heavily, which I suspect will be the right choice in the long run. Better to be the one creating the change than following it.

    I really don't understand why everyone gets so hung up on this though. The 1080 Ti is very nearly a 4K/60Hz capable card in current titles; assuming the 2080 Ti is conservatively 20-30% faster in standard raster performance, that'll pretty much seal it as a 4K/60Hz card or 1440p/120Hz card, etc.
    That seems to me like a very good point to begin branching performance out in other ways. As more things get done with AI (Tensor cores) and RT (RT cores), they can scale those out much larger. If (big IF) RT becomes the lighting model of choice in the next 2-3 years, increasing FP32 performance will do very little for performance in new games.
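
    As a rough sketch of that raster headroom claim (the ~55 fps 4K baseline for a 1080 Ti is a ballpark assumption, and fps is assumed to scale inversely with pixel count):

        # Rough raster scaling estimate: start from a ballpark 1080 Ti figure at 4K,
        # apply a 20-30% uplift, and rescale to 1440p by pixel count.
        baseline_fps_4k = 55  # hypothetical 1080 Ti number, not a benchmark
        pixels = {"4K": 3840 * 2160, "1440p": 2560 * 1440}

        for uplift in (0.20, 0.30):
            fps_4k = baseline_fps_4k * (1 + uplift)
            fps_1440p = fps_4k * pixels["4K"] / pixels["1440p"]
            print(f"+{uplift:.0%}: ~{fps_4k:.0f} fps @ 4K, ~{fps_1440p:.0f} fps @ 1440p")

        # +20%: ~66 fps @ 4K, ~148 fps @ 1440p
        # +30%: ~72 fps @ 4K, ~161 fps @ 1440p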

    We're starting to hit the limitations of standard raster rendering. We've piled so many hacks onto it that we're literally running out of ideas. Let's say everybody's dream came true and Nvidia could release a monster GPU that's literally twice as fast as the 1080 Ti: what would be the point?
    Turn up progressively more expensive ambient occlusion effects? No need with RT. Increase shadow resolution to 16384? No need with RT. More dynamic light sources? No need with RT. We still can't even render mirrors in games properly! It's silly.

    It solves an amazing amount of problems. In the case of lighting/shadowing, it would actually take load off the FP32 cores which could then be spent on things like Tessellation and higher poly-counts. VR performance with RT could jump through the roof (since you don't have to flat-render and warp to the lens any more, wasting pixels).

    Not to mention, if we can get high ray-tracing performance and Tessellating models for everything, that takes a massive amount of load off the artists involved. No need to generate LODs anymore, no more pre-computing light maps, no more restrictions on what lights you can put where, no more weird hacks to have day-night cycles etc.
     
    Vistar Shook and hmscott like this.
  24. XMG

    XMG Company Representative

    Reputations:
    749
    Messages:
    1,754
    Likes Received:
    2,197
    Trophy Points:
    181
    Can anyone give me the source for this notebookcheck article? It doesn't seem clear who is referencing what, and the information being reported simply doesn't exist physically at the moment.

    I was at the launch at Gamescom on Monday and the response, particularly when the pricing was announced, was extremely positive from the approximately 2,000 people at the event. The focus was on ray tracing performance, as everyone knows; there were several real-time examples of this in BF V, Shadow of the Tomb Raider and a couple more. But no specific performance figures in terms of FPS were announced, and this won't happen for a couple of weeks. Even using the RTX GPUs in the multiplayer setups they had in the evening, it wasn't really possible to directly compare how they performed in comparison to the 10 series, but the expectation is very positive.

    Any reports of Turing GPUs available in laptop form factors, MXM or BGA, are purely speculation at the moment, especially for the so-called GTX 11 series, because the only GPUs that actually exist at the moment are the RTX 2070, 2080 and 2080 Ti.

    Definitely confusing for the public. I'm definitely not suggesting that the discussions have no basis, but I would strongly recommend that any information being thrown around by companies other than Nvidia is treated with more than a pinch of salt, as the actual information simply doesn't exist at the moment for mobile solutions, nor does the hardware physically ;-)
     
    hmscott likes this.
  25. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Well at the moment 4K 120Hz is the target for a single card.
     
  26. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Would it even make sense to invest in one of the current gen mobile GPUs at this point, with so much speculation going on?
     
  27. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    Depends if you need a machine or not. If you need one now then yes.
     
    hmscott likes this.
  28. Fastidious Reader

    Fastidious Reader Notebook Evangelist

    Reputations:
    3
    Messages:
    365
    Likes Received:
    41
    Trophy Points:
    41
    Overall it's that I need something to fit within my budget of 2k. Waiting too long will just leave me facing laptops 6 to 7 hundred above my budget.
     
    hmscott likes this.
  29. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    I'm not even sure the TU104 or TU102 would physically fit on an MXM card. If you look at MXM cards now, they are already packed to the brim and the new Turing chips are huge. The only way I see a next gen MXM card is if they release an "11 series" with no RT cores or Tensor cores (which should keep the chip sizes very close to Pascal), or make even more non-standard size boards to compensate.

    For mobile, skipping RT/Tensor makes the most sense. From the looks of the new increased TDPs on the RTX cards (which basically shift everything "up" a model, i.e. RTX 2070 = GTX 1080 power usage), it would not be practical to put them in laptops. Especially given the RTX 2070 is the "entry" ray-tracing model and that is probably the biggest you could fit (from a power budget perspective).
     
    bennyg and hmscott like this.
  30. hmscott

    hmscott Notebook Nobel Laureate

    Reputations:
    7,110
    Messages:
    20,384
    Likes Received:
    25,139
    Trophy Points:
    931
    There are no certainties with these things, as release info for new products happens close to their release, and it's possible to purchase something and find out the next day it's been outdated by a new release; that's just how it goes.

    I always console those "stuck" like this that their new laptop is just as functional as it was when they bought it, when they started thinking about buying it, and will remain so for years.

    There was always going to be something better arriving on the market at some point, and that device is going to have new product issues that your new last gen laptop has already been through.

    Often the best time to buy a reliable laptop is near the end of the line for that model, when all the bugs are known and hopefully fixed, stability and performance are known, and there is a wealth of user / owner info posted that will help you get started quickly and without lingering issues - as long as there are no lingering issues for that model :)

    Buying the latest cutting edge, especially with something as new as RTX to the market, has so many unknowns that even with assurances about the knowns being beneficial and positive, we don't know enough about what you don't know about to speculate on potential problems.

    We should wait for real user / owner reviews, tuning and usage tips and kinks reports.

    That's why there is so much rampant speculation with RTX: everyone is trying to work out the new realities of the product and its features by comparing it to what we do know. It's a process, one that is healthy and good for gaining the perspective we need to use RTX, or to choose not to use RTX and stay with "last gen" hardware. But this will only get "real" when owners have hands-on experiences to report.

    I also always say to buy what you need now when you need it, and don't put it off until some unknown time in the future "when things might be better".

    If you are playing the "waiting game", you aren't gaming. :)
     
    Last edited: Aug 23, 2018
    Stooj likes this.
  31. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    Indeed. The number of people (especially on Reddit) who come out with lines like "skipping this gen and waiting for RTX3000!" or "wait for 7nm in 2019" is mind-numbing. Weirdly, it tends to be people who already have a 1080ti o_O
    Everybody has known for 20 years that performance only increases by 20-30% or so and the top models are not priced for perf/dollar. Yet, every single time people are disappointed that performance hasn't literally doubled as that's the unrealistic bar they always set.

    I bought a 980 Ti on release and I knew then that I'd be skipping the next generation (10 series) unless some massive new tech came out that I really needed. It just so happens that the 20 series is both the culmination of two generations of 20-30% increases in perf/watt AND introduces game-changing (pun intended) new tech at the same time.

    Indeed, you end up risking "missing the boat" entirely in many cases, especially if you buy into the mid-range. Game fatigue is a real thing and you run the risk of simply missing out on good games if you wait.
    As an example: I honestly feel bad for people when it comes to Witcher 3 because it's one of the greatest games of our time. Either you had an awesome graphics card and got a great experience out of it, or you were struggling along on a mid-range card at 35fps. It's not a game you can replay all that often (if only due to the length and amount of content), so you really want to make sure you do it "properly" the first time. If you wait too long you'll end up with the Deus Ex 1 problem. Incredible game, but if you play it now (or even 10 years ago when it was still only 7 years old) it's just janky and ugly, which really detracts from the experience.

    From my perspective, if you keep waiting for the next gen, you run the risk of either:
    1. Always playing new games as they come out on medium settings all the time, thus not really getting the most out of your gaming experiences.
    2. Having to wait 1-2 years to fully appreciate the games at high details. Hard to do with big titles and especially single-player ones where you also have to dodge spoilers and things like that.
    I've already got a massive back catalogue of games to play, I sure as hell don't want to have to wait 1-2 years every time just to play them at their best because "ain't nobody got time for that!". The great irony is I can afford to buy the top end cards now, but I don't have huge amounts of time to actually play games. So when I do get to gaming, I'll make damn sure the time is well spent with details up high.
     
    bennyg and hmscott like this.
  32. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    This is just flat-out wrong.
     
    D2 Ultima, ajc9988 and hmscott like this.
  33. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    I probably should have specified as "increases by 20-30%, generation to generation, performance per watt".

    You get the odd outlier (usually due to big die shrink jumps like Maxwell->Pascal) but generally speaking that's the way it goes between generations. Keeping in mind that previously we would also have generation revisions which did very little in the way of perf/watt.
    e.g. The 680 -> 780 had no perf/watt gain. They just released a bigger chip as the #80 card, which used more power.
     
    hmscott likes this.
  34. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    There is EVERY reason to skip this gen. Here are a couple distillations from the talks had recently:

    "Absolutely agree with what you said, except I do not expect the performance to be that large for the 2080 over the 1080 Ti, or the 2070 over the 1080. Since the name shuffle happened, it is appropriate to compare based on what the names would have been otherwise, in my opinion. Price isn't the issue, the question of timing of trying to force raytracing now through proprietary means as a way to further kick AMD when down (they have nothing coming and even Vega was "eh" when it came out, usually between 1070 and 1080, depending on game optimizations, when released while having way more power draw). Don't get me wrong, I am impressed with the tech Nvidia developed, without a doubt, but now, when it cannot yet hit 60 frames in 1080p with 7nm just around the corner which should pack 35% performance on the new process node and allowing packing more transistors into a single package, I think they really should have waited until the next generation or the one after to release the tech to consumers. I just think it was bad timing on their part.

    For tensor cores, my reaction is exactly the opposite. They should have given those to consumers sooner (a Volta card). The reason I say this is that they are doing DLSS super sampling with the tensor cores. Since raytracing frame rates are still so low, early adopters will just turn raytracing off when gaming, other than maybe playing through the campaign. But the tensor cores doing super sampling seem to extend the performance of the cards significantly over the generational 50% (so 30-50% claimed from the 1080 to the 2080). ( http://www.legitreviews.com/wp-content/uploads/2018/08/rtx2080-performance.jpg ). The problem is that this still only applies when games implement DLSS support, which means this use is limited to newer titles and to titles whose developers keep offering that support.

    That brings me to another point: their implementation of limited asynchronous compute, with floating point and integer processing in parallel. This is awesome and something they really failed to bring to the table in the past (one of the reasons for not wanting to support DX12, which Nvidia wasn't great at and which MS, in part, based on Mantle/Vulkan/open-source APIs). But to utilize it, it needs to be done when programming the games, which means that even though it is now present, we are not going to see it used in any game that has not already implemented this type of async compute. So even this, like DLSS, although it should be applauded, will most likely not help with current title performance.

    And that brings me back to the post that I did last video over comparing shader counts, memory bandwidths, etc., to the prior generation based on price. When stacking up the 1080 Ti to the 2080, the 1080 to the 2070, and the Titan Xp to the 2080 Ti, the only one with a clear performance boost seems to be the 2080 Ti. Now, as friends have pointed out to me, and other videos, Nvidia did rework their shader units. This rework could increase performance and so having a lower number of cuda cores does not necessarily mean that it will have worse performance. For that, we have to wait for reviews. That is a fair point and so, insofar as that goes, it is a wait and see game.

    We also talked about the implementation of the NVLink. Nvidia touted the fact it was 50x faster than SLI. That is great, but it is also 1/3 the speed of the full speed cables in enterprise (50up/50down instead of 150up/150down). Now this was likely done on pricing to make the cables more affordable and more mass producible for the gaming community. That, overall, is fine. And the speed increase will help in games where there are bandwidth limitations with current technologies. That being said, this is something that will need to be tested in at least two scenarios: 1) where on a mainstream build in 8x/8x config, the games are tested, and 2) on an HEDT rig with 16x/16x for the cards is tested (and possibly other configs on an HEDT rig). A good comparison might include the standard SLI, the LED and/or HB bridges, and then NVLink.

    Because of the raw stats comparing the 1080 Ti to the 2080 and 1080 to the 2070, my friends and I have thought that going from those to the price equivalent RTX cards are likely a side grade, unless using NVLink in a dual card setup. Any additional performance is not likely worth it to current gen Pascal owners. Whether the price is justified for people with similar setups, or just a 1080 Ti to 2080 Ti, it would be good to remind them what they are really doing is going from the prior gen to the current gen and up one step on the product stack. Whether they would have previously bought the Titan class is the real question (in other words, ignore the naming scheme and focus on performance per price).

    In summation, Nvidia should have instead left off the raytracing cores this gen, increased die size to a lesser degree, and given just more shaders and tensor cores while waiting for 7nm for raytracing introduction. Maybe introduce raytracing units on a special sku professional product this generation, much like Volta did with tensor cores, then bring it to consumers the following generation (maybe keeping a tensor core line and a raytracing line for professionals). I agree with changing the naming scheme, but doing it this way, you would get no complaints, would have given consumers exactly what they expected, and push off hearing the grumblings. Instead, they wanted to take the downtime for the sidegrade in performance to likely get their proprietary gameworks raytracing adopted to make it the standard so that if AMD came back with a card supporting raytracing at 7nm or with large increases in compute, they would have less competition with competing raytracing standards. This arguably is also why they didn't use this moment and their clout to force NUMA memory support for GPUs with independent software vendors, which would allow AMD to use multi-die GPUs to get back in the game, but that is a different discussion entirely. Hope this helps move the conversation along."

    Here is a proper comparison of what these cards are really replacing:
    "They just took out the Titan Xp line and made that the Ti. Not just on pricing, but on specs. You have an increase of 13% in CUDA cores (FPU or shader units; 4352 vs 3840), and a 12.5% increase in memory bandwidth versus the Titan Xp (616 vs ~548 GB/s). The pricing is also in line with the Titan Xp. This means the Titan Turing should be around $3,000 like the Titan V was. That means that the performance should be compared between the Titan Xp and the 2080 Ti, the 1080 Ti and the 2080, and the 1080 and the 2070.

    If that comparison is adopted, due to pricing, etc., and likely having a $3K Titan Turing, then there is the comparison of the 2080 with 2944 shader units versus the 1080 Ti with 3584 shader units (18% fewer shader units), and the 2080 having 448 GB/s memory bandwidth versus the 1080 Ti's 484 GB/s (7.5% less memory bandwidth). For the 2070, there are 2304 shader units and 448 GB/s memory bandwidth, compared to the 1080's 2560 shader units and 320 GB/s memory bandwidth (a decrease in shader units of 10%, but an increase in memory bandwidth of 40%)."
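
    To spell out that price-point math, here is a minimal Python sketch using the figures quoted above (with the Titan Xp's bandwidth taken as roughly 547.7 GB/s); it only compares raw counts and deliberately ignores the Turing shader rework:

        # Price-point spec deltas (shader units, memory bandwidth in GB/s); architecture changes ignored.
        comparisons = [
            ("RTX 2080 Ti vs Titan Xp", (4352, 616.0), (3840, 547.7)),
            ("RTX 2080 vs GTX 1080 Ti", (2944, 448.0), (3584, 484.0)),
            ("RTX 2070 vs GTX 1080", (2304, 448.0), (2560, 320.0)),
        ]

        for name, (new_sh, new_bw), (old_sh, old_bw) in comparisons:
            print(f"{name}: shaders {(new_sh / old_sh - 1) * 100:+.1f}%, "
                  f"bandwidth {(new_bw / old_bw - 1) * 100:+.1f}%")

        # RTX 2080 Ti vs Titan Xp: shaders +13.3%, bandwidth +12.5%
        # RTX 2080 vs GTX 1080 Ti: shaders -17.9%, bandwidth -7.4%
        # RTX 2070 vs GTX 1080: shaders -10.0%, bandwidth +40.0%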

    So, I really am not seeing your point in pushing that Nvidia is doing something good this gen. IT MAKES NO SENSE! This is the comparison chart, but as UFD pointed out, they do not give a reference.

    [Image: Nvidia spec comparison chart]
     
    Vistar Shook, yrekabakery and hmscott like this.
  35. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    I'm not saying that people should definitely jump on this gen. People on a high end Pascal (1080/1080ti) could probably get away with it if they're happy with the performance now. Generally speaking, skipping a generation every upgrade is probably a good idea, unless there's a big new feature (which RT may or may not be, time will tell). That being said, new games that release with RT will simply be unplayable (with RT on) on older hardware. If you want the shiny new feature, you literally need the new cards to do it.

    I don't know where you're getting that, but it's completely wrong.

    Real-time ray tracing is implemented at the API level by DirectX and Vulkan (or the OptiX renderer). In March when it was announced, you needed 4x GV100 GPUs to run DX Ray-Tracing in real time (24fps). You can run DXR on Pascal if you want, it's just really slow at it.
    This was announced back in March, and AMD announced their own way to accelerate it at the same time. You can see it right here: https://github.com/GPUOpen-LibrariesAndSDKs/RadeonRays_SDK
    The difference is that Nvidia have gone and built specific hardware to accelerate ray-tracing a LOT, which has sped up the entire timeline.

    Who knows how it affects AMD at this point, it'll depend on how far down the line they are committed to their developing architecture. That being said, generally speaking AMD has been very good at concurrent workloads and compute tasks. So AMD would probably benefit greatly from this when they release their own new cards with some form of RT acceleration. Both DirectX and Vulkan have specifically mentioned implementing Ray-Tracing as a compute shader function as that's exactly what it is, a compute workload.

    These are all just assumptions. Performance of RT has been demoed on A) games that aren't released yet, B) graphics cards that aren't released yet, C) drivers which aren't released yet. The Trifecta of unreleased alpha quality!

    Think about it, the Quadro RTX or GeForce RTX cards have only been in dev hands for maybe a few weeks now. Even devs who were approached in March to implement RT code wouldn't have even been able to test their code in a real scenario until a week ago. The fact the game demos didn't crash spectacularly is basically a miracle. This applies at both the engine level AND the game design level.

    As far as 7nm, I expect it'll make it into Tesla and Quadro cards first and have a significant wait time until consumer cards are built on it. The same pattern was followed for the last few generations.
    I'd wager, even if we get a 7nm core by Q4 next year, it'll be for a new V100 successor and not a Turing successor. A 7nm Geforce card may not arrive until 2020 and I daresay in 6 months people will get sick of waiting.

    It's a chicken/egg scenario. If they don't do it now, then developers will not implement it. Either way, it's always better to be first with the big new tech, even if that means shooting a bit too early, or missing the mark entirely (looking at VXAO :p). Most people also forget that games that don't want to implement RT for lighting/shadows can use it for other non-visual purposes like sound and AI.

    Nvidia certainly have the clout to force the issue, and if that's what it takes to get new tech into games then so be it. Devs will probably comply gratefully, because a full RT engine would mean significantly reduced workloads for artists. No more specular maps, no light-map compile times, no fake lights in scenes, what a dream!

    To be fair, the reason for the bandwidth difference is that there are vastly different requirements in the frame-buffer sizes to be transferred. Obviously a P100 card with 16GB will conceivably need at least double the bandwidth.

    As mentioned before, Nvidia cannot afford to leave RT for a later generation as it was already in the pipe-line at DirectX and Vulkan (admittedly with great pushing from Nvidia themselves).
     
    hmscott likes this.
  36. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    The 980 had a lower TDP than the 680 despite a bigger die (because it was still 28nm) and performed almost twice as fast. 20-30% is not and has not been the holding pattern, nor has it been accompanied by a proportional (or greater) price increase, which is why people are up in arms over Turing and have every right to be.

    680 to 780 had no power efficiency gain because they're the same Kepler architecture. 700 series was a Kepler refresh aside from the 750/750 Ti.
     
    Last edited: Aug 24, 2018
    ajc9988 likes this.
  37. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    Maxwell and Pascal were both huge architectural breakthroughs though. Maxwell pulled a theoretical 2x perf/watt out of the same 28nm process and added more SMs at the same time. Pascal then followed up with a massive die shrink, nearly halving the process node.

    That's where you lose me though. NV don't really owe us anything. They had to make the ray-tracing divergence somewhere and may as well do it now, or the same thing will happen later. Assuming that going smaller than 7nm will be hard, you just run the risk of FP32 performance literally going backwards if you're stuck on 7nm for more than one generation. By implementing RT/Tensor and trying to offload gaming performance to those, you buy yourself some extra time by being able to scale those out.
     
  38. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    Drink the Kool-Aid if you like. I hear arsenic gives a nice sweet or almond flavor, according to Google.

    You keep promoting these cards and using phrases like "probably get away with it" and "unless there's a big new feature" and "new games that release with RT will simply be unplayable (with RT on) on older hardware. If you want the shiny new feature, you literally need the new cards to do it." That is all persuasive language meant to entice or make another feel inferior. The way it's written, "new games that release with RT will simply be unplayable" forgets that to many gamers 30 frames per second IS unplayable. I'll address that point shortly.

    But the point is, you are literally trying to sell the new implementation and cards without hard performance numbers, then trying to say anyone using logical analysis of performance or publicly available information is wrong to do so "because it might be different." Give me a break. No games will have RT at launch, and DLSS requires specific packages from Nvidia reliant on supercomputer AI algos per game, which will likely balloon driver sizes and relies on Nvidia being willing to devote supercomputer time to support it. That means results will not only vary with how well the AI can optimize its algo for the fluff filling-in of half-rendered frames, but performance could also decrease in the future if Nvidia doesn't give a **** (or it can be a way to sandbag 2000 series cards to make you want to buy the new shiny 3000 cards, that way your comments apply again at 7nm).

    You are promoting it while trying to give yourself weasel room by saying you hedged. Don't play those games; say what you mean, while directly saying it might not happen the way described, and be ready to eat your words if it doesn't. That is what I try to do.

    It isn't completely wrong. You are just giving half-truths, which are lies through omission. First, you are correct that DirectX and Vulkan are open and not proprietary to Nvidia. But you ignore that what Nvidia is pushing to game devs is GameWorks! Do we remember how GameWorks was used to sandbag BOTH Nvidia and AMD, but hit AMD harder? I guess you don't remember the Witcher 3 scandal. Or HairWorks being used way off in nowhere land in Final Fantasy XV, giving Nvidia a larger lead and hitting AMD harder. Now, why MUST Nvidia do it now instead of waiting two years? This point will be addressed, in part, in the next section.

    But there are other aspects of RT that do not smell right as well. Yes, what it can do in its version of hybrid RT is impressive, but this isn't full raytracing. It is part raytraced, part rasterized overlay, and part deep-learning fill-in of the scene through the same tech that does denoising. It gives probably the closest possible representation of raytracing without actually being raytracing. But it IS NOT RAYTRACING. With so many moving parts, you have to rely on the driver and the AI supercomputer code per game, trust the implementation of GameWorks, etc.

    Then there is the part about Nvidia not being good at DX12 and async compute, but all of a sudden pushing "DX12.5" with DXR in it. They literally used their clout for the past generation or two (sometime around Maxwell and all of Pascal) to keep the support on DX11 and not move things forward. Hence, they do not act to drive things forward, only to drive their profits. What they have done with these new cards is increase their ability to do async, as seen through floating point calculations performed at the same time as integer operations, and add a new feature to the library, raytracing, while promoting the use of their proprietary GameWorks library over the use of Vulkan, etc. Maybe you haven't seen the history of Nvidia, so here are a couple of in-depth videos to fill you in:









    Now, we can see what he got right and wrong in his analyses of the (at the time) future. But the points on how Nvidia operates are absolutes. Take that together with the information on pushing GameWorks raytracing and using hype to push sales without performance data, and do with it what you will.

    WTF are you talking about here? If they had full-fat Quadro versions to develop the games on, then they at least had hardware that could implement it. You think those with games releasing in a month wouldn't review the AI implementation of DLSS and RT, and would go to a trade show without having some clue? That is why they hid the frame counter. Did you not read all the articles on the frame rates or watch the vids?
    http://www.pcgameshardware.de/Grafi...ormance-in-Shadow-of-the-Tomb-Raider-1263244/
    https://www.pcgamesn.com/nvidia-rtx-2080-ti-hands-on

    It is called sampling. You can sample products under NDA. Then you change the conversation from these cards, drivers, and implementation to professional commercial products on 7nm, trying to make it sound like the wait is sooooo looooong, so you should just buy these cards, which do not show any real significant performance gains, especially after my explanation of the shifting of names in the stack. That explanation shows that the real comparison is at each price point, and that you don't compare a 1080 to a 2080, rather you compare the 1080 Ti to the 2080, and when you do that, you get mighty underwhelmed really ****ing quick. But you didn't address my analysis of that AT ALL, instead pointing over here and over there. You ignore the meat of my argument, instead picking around the edges. Well, maybe you would prefer listening to someone with a bit more clout, like Jay, tell you RT performance isn't getting any better.

    Watch this video.

    You took part of his argument (drivers, games, and hardware), but that also doesn't mean that it will be better later. See, these games are not designed from the ground up to support it. That can affect performance, but there is just as high a likelihood that future implementations built from the ground up cause a heavier load, because they were designed to fully use the tech instead of going halfway. That is easily as likely a scenario. Overall, unless you are doing SLI, like I mentioned, on a 2080 Ti, I'm betting the game is unplayable with raytracing. It will not give the frames needed, which makes it a gimmick. Even Jay said that 30FPS at 1080p is impressive, and that regular compute is probably phenomenal, but that does not mean you are going to get some huge jump later on. That is called pipe dreams. Respect it for what it is, what it does, and **** the hype! That means looking at how gamers will use it, and gamers won't be using raytracing.

    ********. You could have given ALL the tools now, and given pro hardware with this implemented now, so that games that come out in two years are built from the ground up supporting and implementing it, with higher frame rates, so that gamers would use it when the 7nm cards come out around 2020. Instead, they are trying to force it now because of competition and GameWorks, as I said earlier, cutting the ground out from under AMD's feet while AMD works on its implementation of DX and Vulkan on its platform. It is a big game of strategy and you just are not seeing it.

    Then you move on to "let's use the card in ways not marketed, to shine that turd." A turd is a turd is a turd. It may be impressive and shiny at what it does, but that doesn't make it any less of what it is.

    They did use their clout; refer back to my point about GameWorks. Also, because developers have to support both standards, since raytracing produces unplayable frame rates at the moment, it doubled developers' workloads rather than lightening them, because they still must program games without it, as the majority of the market won't have products that can utilize it. Talk about PR ******** spewing all over this response.

    I gave the fairness: it was to save money and bring down costs so gamers can purchase it, and so it doesn't interfere with their corporate retailing of full NVLinks, which cost $800. They know and understand market segmentation. Do you?

    Sure they can. It is two more years. By releasing it on the commercial side, they still get the high-margin recoup on the product, push implementation and development of DX and Vulkan, and get fully ground-up game development. The difference is the adoption of GameWorks raytracing and more competition from 7nm cards from AMD, which they fear, to the degree that AMD's windfall in server market share may flow over into the graphics card side again. It isn't like they don't think they will be on top, but with a lack of new products from practically their sole competitor, putting out something with roughly equal performance, giving misleading marketing to consumers, shifting the names in the product stack, etc., and polishing a turd while also still being able to unload the Pascal GPU glut they have from mismanaging the exit from the mining craze seems a lot more plausible than what you are spewing here. One generation, two years, isn't anything to wait on these standards. Ulterior motives for market monopoly are.

    Also, notice how I quoted your entire argument and addressed it piece by piece. This is so that people can see I'm addressing your full argument and not misrepresenting your statements. You should try it sometime.
     
  39. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    Go back another generation then. 680 vs. 560 Ti was another +100% gen-on-gen perf/W increase. Still not 20-30%.
     
    Stooj and ajc9988 like this.
  40. yrekabakery

    yrekabakery Notebook Virtuoso

    Reputations:
    1,470
    Messages:
    3,438
    Likes Received:
    3,688
    Trophy Points:
    331
    Oh and also, way to contradict yourself.

    Somehow I totally missed that because I was so distracted by your "Nvidia doesn't really owe us anything" part, AKA the most inane BS excuse for anti-consumer practices.
     
    ajc9988 likes this.
  41. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    I'm only responding to this second part of the response. Nvidia doesn't owe us anything. But as consumers, we don't owe them anything either. It is called an arm's length transaction. That means if they do not offer us something of worth at a value we agree with, they don't sell ****! Imagine that. They need us to keep their money rolling in. We say no because they polished a turd, and they then have to eat that turd in write-downs and write-offs. That is how capitalism works. Funny you miss that part of the equation, assuming we should be happy with whatever pile of crap they throw at us. Where have I seen that before? Oh, right, APPLE. It is like Nvidia telling us we are holding it wrong. **** that noise.

    Then, if you want to talk about risk while doing multiple gens on the same node, how many gens were on the long-in-the-tooth 28nm node? The only risk is that they develop a ****tier architecture than the last one. But what they did this time, instead of doing one gen on DUV 7nm followed by one on EUV 7nm (which is supposed to be around a 15% increase due to process changes and not using quad patterning, etc.), is give us this turd on 12nm and then go to EUV 7nm, while starting with the highest-margin products to recoup design costs, because that is business. But that doesn't mean we should accept their shuffling of names in the stack, the pricing per name, or that what they are giving us is worth what they claim it to be. So, if they are so scared they can't design a GPU on the same node anymore, maybe they are in the wrong business.
     
    Vistar Shook, yrekabakery and hmscott like this.
  42. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    I'm not trying to sell anything. I've said before that I think RT is the way forward. I've also said that 1080ti users probably don't need to upgrade. I've got no horse in this race other than I have a 980Ti and I'm looking to upgrade. You don't need to take things so personally.

    What I'm suggesting is that people shouldn't go the opposite direction and start getting all worked up without that same data. RT is new territory, better to approach it with the mindset of trying to make it work rather than dismiss it immediately.

    It is wrong. You say that they're forcing ray-tracing through proprietary means. That's literally the opposite of what they're doing.
    What they're doing is pushing people to use DXR and Vulkan ray-tracing because they know AMD cannot compete there. But unlike other GameWorks effects, that doesn't prevent AMD's own implementation from working. Hell, AMD might build an even faster implementation for all we know.

    DXR code has been available since March. The tutorial code is here: https://github.com/NVIDIAGameWorks/DxrTutorials

    However, it didn't run anywhere near real-time on a single card except for very simple scenes. Even the best Quadro of the time (GV100) was not enough on its own. That's what the RT cores are designed to speed up. What that means is that any optimisations you could make to your DXR implementation in relation to the RT cores are all theoretical. It would be like writing code for an ASIC that won't arrive for 6 months, when all you have is a spec sheet and no actual ASIC to test on. You're never going to really know how things run until you get the hardware.
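
    For anyone wanting to poke at this themselves, here is a minimal sketch (assuming a Windows 10 SDK recent enough to carry the DXR declarations, built with MSVC) that just asks the driver whether it exposes DXR at all. That is exactly the gap described above: until RT-capable hardware and drivers arrive, this query comes back empty and everything else stays theoretical.

        #include <windows.h>
        #include <d3d12.h>
        #include <wrl/client.h>
        #include <cstdio>

        #pragma comment(lib, "d3d12.lib")

        int main() {
            // Create a D3D12 device on the default adapter.
            Microsoft::WRL::ComPtr<ID3D12Device5> device;
            if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                         IID_PPV_ARGS(&device)))) {
                std::puts("No D3D12 device (or runtime too old for ID3D12Device5).");
                return 1;
            }

            // OPTIONS5 carries the ray tracing tier. A value of
            // D3D12_RAYTRACING_TIER_NOT_SUPPORTED (0) means DXR state objects
            // and dispatches will simply fail on this device/driver.
            D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
            if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                                      &opts5, sizeof(opts5)))) {
                std::printf("Raytracing tier reported by driver: %d\n",
                            static_cast<int>(opts5.RaytracingTier));
            }
            return 0;
        }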

    So what is the solution then? I said before that FP32 won't scale forever, and eventually we have to address the issues that raster rendering has.

    The common suggestion is to defer RT technology until 7nm so you can build in more performance in 12-24 months. At what point is the performance "acceptable" enough to begin putting in RT?
    Let's assume RT is as horrible as you think it is and we get 1080p@30fps in SoTR. Let's assume they double the RT performance to 1080p@60fps on 7nm in 12-24 months. People will STILL complain about that because it's "only 1080p". So at 4K you're looking at 20-30fps, assuming it scales linearly with resolution (20 raw, 30 with DLSS rendering at 66% of 4K).
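
    Back-of-envelope version of that scaling argument, as a sketch only: the fps input is the hypothetical above, the 66% render scale is taken per axis, and real frame cost never scales purely with pixel count, which is why the round 20/30fps figures above are looser than what this prints.

        #include <cstdio>

        int main() {
            // Hypothetical from the argument above: RT path doubled to 60 fps at 1080p on 7nm.
            double fps_1080p = 60.0;
            double px_1080p  = 1920.0 * 1080.0;
            double px_4k     = 3840.0 * 2160.0;
            // DLSS-style upscaling, rendering at ~66% of 4K per axis.
            double px_dlss   = (3840.0 * 0.66) * (2160.0 * 0.66);

            std::printf("4K native, linear pixel scaling: ~%.0f fps\n",
                        fps_1080p * px_1080p / px_4k);
            std::printf("4K via ~66%% render scale:       ~%.0f fps\n",
                        fps_1080p * px_1080p / px_dlss);
            return 0;
        }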

    That 100% increase in performance only applies to games that will be 12-24 months old by then, not the NEW games of that time. Exactly what kind of magic rabbit do you want Nvidia to pull out of their hat here?


    Look. I get it....
    Everyone wants the 2080ti to go twice as fast as the 1080ti and would be happy if it cost $700. Lots of people don't give a toss about ray tracing; they think it's a waste of time. Some people would just like their games that run at 4K@40fps now to run at 4K@80fps instead.
    But at some point the ride of raw performance increases is going to stop, and sometimes you just have to change the way things are done. Ray tracing does that, and it's an admirable goal.

    The price increases suck and maybe people think it's a dud generation (maybe it will be). But things have to change some time.
     
    Vistar Shook and hmscott like this.
  43. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
     
    hmscott likes this.
  44. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    Stop comparing the 2080 Ti to the 1080 Ti; it is the 2080 Ti versus the Titan Xp, and the 2080 versus the 1080 Ti. That is what the pricing says, so that is what you compare: price points. We aren't doing core-count and price-per-performance comparisons like on CPUs. It has ALWAYS been price points for comparison. When comparing AMD to Nvidia, you compare flagships, then compare price points. That is why Vega 64 was first pitted against the Ti, then compared almost exclusively against the equivalently priced 1080. So, here, you can try to polish this generational turd by comparing on the xx80 or Ti monikers, but at the end of the day, what matters is where each card sits in the stack. Nvidia should have seen this coming from a mile away. Instead, what did they do? They steered into it, because consumers let them in the past. Well, I'm guessing they've met the breaking point on that.

    As one person put it to me:
    "If Nvidia renamed the 2080ti to a Titan people would be more accepting of the price. A Titan XP sells for £1,150 vs £1,100 for the 2080ti.
    The cuda difference from 2080 to 2080ti is 47%.
    GTX 1000 series it was 40%.
    GTX 900 series it was 37%.
    GTX 700 series it was 25%.
    (The difference vs titan was 50%, 50% and 25% respectively)

    This is the biggest performance difference, in raw CUDA, that has ever existed between a Ti model and a non-Ti - pointing to it being a Titan. Do you have any thoughts on why they didn't just name this a Titan? It seems like they've shot themselves in the foot - advertising a gaming card that is actually a prosumer card, at a prosumer price."

    Would you like to try again?
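
    The quoted CUDA gaps are easy to sanity-check from the shader counts themselves. A quick sketch; the counts below are the published specs, with the Turing figures as announced at the time.

        #include <cstdio>

        struct Gen { const char* name; int non_ti; int ti; };

        int main() {
            // Published CUDA core counts (Turing figures as announced).
            const Gen gens[] = {
                {"RTX 2080 -> 2080 Ti", 2944, 4352},
                {"GTX 1080 -> 1080 Ti", 2560, 3584},
                {"GTX 980  -> 980 Ti",  2048, 2816},
                {"GTX 780  -> 780 Ti",  2304, 2880},
            };
            for (const Gen& g : gens) {
                double pct = 100.0 * (g.ti - g.non_ti) / g.non_ti;
                std::printf("%-22s +%.0f%% CUDA cores\n", g.name, pct);
            }
            // Prints roughly +48%, +40%, +38% and +25%, i.e. within rounding of
            // the quote's 47/40/37/25 figures.
            return 0;
        }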
     
  45. LunaP

    LunaP Dame Ningen

    Reputations:
    946
    Messages:
    999
    Likes Received:
    1,102
    Trophy Points:
    156
    Gotta remember though, the problem w/ ignorance is that it picks up confidence as it goes along, and when people start playing Captain 'Murica w/ a shield of ignorance, it just gets messy.

    You can lead a horse to water, but there's no reason to pull their head up if they refuse to come up for air.


    Anyways, BS aside, no one's looking for twice as fast (yes, it'd be great, but given the current legroom even 30% would be welcome). I'm hoping it turns out to be somewhat driver-related, but it's shaping up to be a pretty sad year since tech is all in the middle of swapping gears.
     
  46. XMG

    XMG Company Representative

    Reputations:
    749
    Messages:
    1,754
    Likes Received:
    2,197
    Trophy Points:
    181
    Your question has mostly been answered overnight. The fact is that even those of us who have a decent understanding of what will come in the mobile form factor don't by any means have all the information needed to give solid advice, and of course we also can't due to NDA. The implementation discussion over the last couple of pages for the 2*** series (or whatever codename people want to use for mobile) is hitting all the correct points.

    If you need a laptop now, then now is the time to buy. If you need a laptop in 6 months, then by then you should have a launch and at least a slightly mature product to gauge whether it's right for you or not. It's an impossible question to answer if you just need to know whether you would buy a 1080 now and then regret it later, but you know what's available now, how the current cards run in current games, what their pricing and availability are, and so on.
     
    hmscott likes this.
  47. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,426
    Messages:
    58,171
    Likes Received:
    17,882
    Trophy Points:
    931
    With the desktop cards not even fully benchmarked yet, you can't really guess at the moment without inside information. Perhaps save the final judgement for the benchmarks.
     
  48. Stooj

    Stooj Notebook Deity

    Reputations:
    187
    Messages:
    841
    Likes Received:
    664
    Trophy Points:
    106
    For that specific quoted example, I was positing what would happen if the 2080Ti was priced like previous Ti models, as that would be the "ideal world" scenario. Even if the 2080Ti released for $699 I'm sure people would still find reasons to complain about the (on paper) relatively small increase in performance.

    For price brackets now, you're correct, because everything is shifted "up" a model right from the start. Previously the 1080 released at 980 Ti pricing, then by the time the 1080 Ti released everything was shuffled back.

    Given the Titan tends to be the fully unlocked "big chip", I suspect they may release a Titan T to replace the Titan V at $3000.

    Nvidia's pattern has always been to release "the fastest single GPU". It's possible the immediate release of the 2080ti is due to the Titan V already holding that crown and the regular 2080 being unable to beat it reliably.
    Leaving room to release a fully unlocked chip also means they can keep releasing the "fastest" GPU a couple of times.

    It wouldn't do anything for your average gamer, but it would still reinforce their mind-share by simply having the "fastest" GPU, regardless of cost.
     
  49. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    The Titan Xp was a GP102. The Titan V is a cut-down GV100. You can no longer use those metrics. This is literally shifting everything up and likely getting rid of the super-powered card that used to slide in between the Titan and the xx80 series. That hurts consumers. And people have complained about Nvidia's gouging. But when you properly compare on price points, you see that the performance of a 2080, as the cards will actually be used, is likely not much of an improvement over the 1080 Ti, if not a backslide, with DLSS being the only thing that may save it. Just like the Adobe hardware-acceleration implementation, which is snake oil because the images come out worse than a true render, we have to evaluate DLSS the same way. https://www.pugetsystems.com/labs/a...on-in-Adobe-Media-Encoder---Good-or-Bad-1211/ . There is a chance of the AI making mistakes because the algorithm is wrong, or of the upscaling simply not looking as good as truly rendered 4K content.
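
    If and when people can capture DLSS output, the evaluation being described is just an image-quality comparison against a native render. A minimal sketch of the kind of metric involved (PSNR here, on toy data, purely illustrative; a real evaluation would use actual game captures and perceptual metrics as well):

        #include <cmath>
        #include <cstdio>
        #include <vector>

        // PSNR between a reference frame and a test frame of the same size
        // (8-bit grayscale here for brevity; real comparisons would use full RGB captures).
        double psnr(const std::vector<unsigned char>& ref,
                    const std::vector<unsigned char>& test) {
            double mse = 0.0;
            for (size_t i = 0; i < ref.size(); ++i) {
                double d = double(ref[i]) - double(test[i]);
                mse += d * d;
            }
            mse /= double(ref.size());
            if (mse == 0.0) return INFINITY;        // identical images
            return 10.0 * std::log10(255.0 * 255.0 / mse);
        }

        int main() {
            // Toy stand-ins for a native-4K render and an AI-upscaled frame.
            std::vector<unsigned char> native(3840 * 2160, 128);
            std::vector<unsigned char> upscaled(native);
            upscaled[0] = 140;                       // pretend upscaling artefact
            std::printf("PSNR: %.1f dB\n", psnr(native, upscaled));
            return 0;
        }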

    Same point for the 2070 compared to the 1080. By the time we are done, this gen looks like a turd with a gimmick attached that cannot be used for gaming unless you are satisfied with sub-60 frames at 1080p. Its ability to even reach that IS impressive, but that doesn't make it ready for prime time or something consumers should buy AT ALL. I praise what they accomplished, then I ridicule what they are doing on price and on how the cards will actually be used by consumers, because that is where it matters for consumers. Casual gamers may care less, but it can be argued that if you are casual, you shouldn't necessarily be spending your money on enthusiast products, and that you would have an equally pleasurable experience on something with lower power, thereby freeing up cash for other things you may want.

    Now, if Nvidia gave guidance that they will put out a card in between the 2080 and the Ti, a cut-down Ti with performance sitting between the two and priced only slightly above the 2080 (or shift the 2080 down in cost and slide that card in at $50 or $100 over the 2080's intro price), then consumers would gripe about the price but would chill out. But with the name shifting, I think people doubt that and suspect this is just about getting more margin on the Ti while trying to force that huge premium. That is a legit concern for consumers.

    But either way, if the 1080 Ti costs what a 2080 does, or is within the same ballpark, people will make their decision on the basis of which of those products gives the best bang for the buck. If DLSS has the same issues as the hardware acceleration in Adobe, consumers will reject it, or at least some will. And as one UFD video showed, the 1080 Ti already does 4K@60 in the games selected, with settings at high or max. So the benefit of buying the new over the old is lessened in a significant way.

    One person mentioned that Nvidia timed its exit from the crypto-mining bubble wrong, that an AIB partner returned 300,000 chips to Nvidia, and that Nvidia could be sitting on a stock of Pascal right now. Releasing anything other than a side-grade would cause further depreciation of that inventory, and they have already predicted a large write-down for Q3 and wrote some down in Q2. So, instead of pushing forward with a new card offering just tensor cores and a higher CUDA count (floating point units), doing a side-grade and introducing a new tech, pushing for adoption of their proprietary raytracing library in GameWorks, having zero competition from AMD for the foreseeable future, and pricing it high enough to still clear that old inventory makes a lot of business sense to me.

    Also, you are GREATLY misconstruing the dies used. The Titan Xp being a 102, not a 100, and the 1080 Ti being a 102, not a 100, shows that making the Titan V a cut-down 100 is an extreme departure. They left no room, and a cut-down TU100 Titan for Turing will be a $3K card. What people liked about the fuller GP102 being the Titan Xp is that the slightly cut-down xx80 Ti would sandwich between that fuller 102 chip and the 104-based xx80, while being optimized for gaming performance and delivering roughly 30% over the xx80 series. That is what they got used to.

    Now, I understand that because the Titan is now the cut-down 100 chip, which is a much larger die and thereby costs more, as well as being the full Turing and the later Quadro flagship, you would want to charge $3K, about 1/3 the price. It makes for a higher-cost halo product, with better margins and fewer wasted 100 dies in production. That is fine. But you are screwing with consumer expectations on the 102 line by doing so. With the 2080 Ti now sitting in the stack where the Titan used to sit for 102 dies, and with Nvidia waiting for enough defective dies to accumulate before it can release a further cut-down part, people question whether such a product will even exist. And that has the market nervous that they are cutting that product out, getting rid of the defective 102s that used to make the Ti series (or, if yields were good enough, purposely gimped dies, which is fine), while taking the margins on the cut-down 100 dies instead and pushing the cost of the whole stack up for everyone.

    Also, consumers are not so stupid as to throw money at something just because it is labeled the "fastest." (Some are, but with wages going down, trade wars going on, etc., it doesn't make sense to the mass of consumers to waste money in that manner, especially since most would wait the 4-9 months for the cut-down 102-die Ti products rather than buy the Titan series to make that claim, and when they see, by and large, that the 2080 Ti sits in the Titan Xp spot, you will see lower sales.)

    But continue trying to obfuscate; people can look up what I've said for themselves. Also, as you said, it wouldn't do anything for gamers, so you already see and know what is going on here. You sound like either a dyed-in-the-wool fanboy or a PR rep for Nvidia.
     
  50. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,750
    Messages:
    6,121
    Likes Received:
    8,848
    Trophy Points:
    681
    You can use publicly known information, like the dies used, to draw reasonable conclusions about what should be compared. You can use price points to show what will reasonably be compared. You can take the lack of transparency on settings and performance information, which they used to give a lot more of, as a sign of trying to keep consumers from having all the info necessary for a purchase decision. You can take the mass of reviewers and tech sites telling people not to buy until reviews are posted as an indication to wait and see what is really going on, which is generally best practice. None of that suggests you cannot guess at the moment.

    Further, you can compare the CUDA counts and memory bandwidth. Those should properly be given caveats, like the claimed shader revision. All of that is reasonable. What this does is set the stage for what reviewers will test the hardware against, so that they can meet their consumers' expectations in their reviews, not just Nvidia's expectations of their coverage. They are content creators, and the people who consume their review content are who they must keep coming back. Nvidia might threaten to withhold review units; we've seen that before. Between their actions and AMD's, that is largely why Gamers Nexus no longer does those types of reviews with staged embargoes acting as adverts for products.

    So, obviously, the final judgment is on the numbers, but that does not mean these discussions are not worthwhile, including for shaping and managing consumer expectations.
     
 Next page →