When in 2019 might the RTX 20 series Clevo laptop GPUs come out? Will the work already done on the Pascal series expedite the process?
What kind of price increase will we be looking at if they are positioned as another tier above the Pascal 10 series?
Will it actually be another leap in performance like last time, or will the gains be more in the capability department because of the ray tracing?
Thoughts?
Will laptops even benefit from such performance increases?
Fastidious Reader Notebook Evangelist
Looks to me like modest performance gains will be had across the board from memory bandwidth and core count. Nvidia will not be stupid enough to release parts that are not definitively faster than last gen and able to justify the across-the-board price increase (and at a time when used crypto-mining parts will only get cheaper).
DX12 async compute applications will benefit heavily, as it seems Nvidia has made an effort there this time.
Ray tracing is an added technology; how worthwhile it is in real implementations (as well as its performance impact) is yet to be seen, but it will not be universal, as it will surely be a GameWorks feature.
But Nvidia's history of getting technological innovations deployed is patchy, so as always the risk is borne by the early adopters.
Meaker@Sager Company Representative
A fair chunk of die area has gone to the ray tracing and AI components.
It's hard to measure, but the shader and compute section of the RTX die looks smaller to me in comparison with the previous Pascal die shown.
Those Tensor core and RT core areas are wasted space for current games, and for me RTX is something I would disable in a game to get better performance, just like I disable GameWorks HairWorks and the like now.
Fastidious Reader Notebook Evangelist
There seems to be additional performance in the traditional rendering area too, but Nvidia failed to demonstrate the improvements compared to previous generations.
Look at all that die real estate taken up that could have been dedicated to real overall gaming performance...
The RT features are add-ons, eye candy, like the other GameWorks crap that slows down games. Except this time Nvidia added a hardware assist that is proprietary to their products.
Nvidia is trying to lock in their lead by redefining the game, given that AMD is always nipping at their heels and Intel is once again trying to get it together to put out its own discrete GPU.
50%+ is a lot of die space to dedicate to eye candy that most of us will end up turning off, to reduce the power and heat generated by those areas of the die and improve gaming performance.
Edit: I hope there is a way to completely disable / power off the Tensor cores and RT cores when they aren't useful... which would be most of the time.
A few of my thoughts:
- I suspect we'll be seeing models/announcements late this year. Maybe December to get in with the Christmas timeline or January for "back-to-school/work" type stuff.
- The perf/watt change (most important to laptops) isn't terribly large. Maybe 15-20% if we're lucky. The fab change isn't as drastic as Maxwell -> Pascal was so don't expect any miracles. The jump to GDDR6 will also account for a significant portion of that boost.
- The largest unknown is the RT cores. We don't have benchmarks yet so it's hard to know how impactful they'll be. RT really only makes sense at the high end anyway.
- The TDPs are increased on the desktop cards, but the safe assumption is that those figures are based on all cores being taxed (FP32 + Tensor + RT). For most games only the FP32 portion should get hit hard, so I suspect mobile cards will be able to squeeze into their current TDP brackets.
- Nobody knows quite yet if there will even be RT/Tensor cores in the mid-to-low range models (x60 and x50/Ti), which make up the bulk of the market. Chances are they'll be standard FP32 setups and, as such, a straight upgrade over their Pascal predecessors.
DirectX and Vulkan will both support native ray-tracing calls. All Nvidia is doing is offloading those particular jobs to specialised cores to speed them up significantly. AMD will likely do the same thing. So it's not a lockout like previous GameWorks features, which are implemented at the engine level.
To be honest, assuming AMD isn't too far down the road with their "next-gen" design, ray tracing is actually a good thing for them. AMD's architecture has always excelled at parallelisation, and they could do exceptionally well if they can integrate ray-tracing operations into their existing compute units, which would allow much better allocation of resources and less wasted die space.
The trick is, Nvidia is also pushing very hard for this to be the future of rendering. This is both a smart business move (if they push it before AMD then they have the next "killer" feature ahead of time which buys mind-share) and a good technological move (ray-tracing is the future and you can now scale 2 processor types instead of 1). That being said, if Ray-Tracing takes off too well, it also cuts off all older GPUs.
Personally I'll probably end up with a 2080 Ti in my desktop rig, steep as the price is. I'm currently on a 980 Ti, so RT or not, I'll probably be doubling my GPU performance. Even so, it's primarily a VR rig, and ray tracing can be hugely beneficial to VR performance if used correctly. There's a reason why most VR games have piss-poor lighting: most of the tricky lighting we do now either does not translate to simultaneous projection setups or is straight up broken.
OK, so we are talking about REGULAR games here, which are going to be the absolute majority by FAR for the foreseeable future. Thus: no AI, no Tensor cores, no ray-tracing gimmicks supported.
Based on that, the specs indicate a 25-30% performance increase for each of the three new cards. That's it. The regular, run-of-the-mill 25% gen-over-gen increase we've seen for, like... forever?
Soooo GAIZ! NOW is the time to go and get yourselves 1080s and 1080 Ti cards for CHEAP! Perfect example: the 1080 Ti Asus Strix went from 870€ to 670€ in ONE FRIGGIN DAY on August 21st. And it's still going to be the second-fastest card on the market, directly beneath the 2080 Ti; the regular 2080 isn't going to beat it until games support ray tracing, Tensor cores, and AI on a broad basis, which isn't going to happen until the next GPU generation, or even the one after that, is out.
Mark my words.
yrekabakery Notebook Virtuoso
The 2070 is even worse: only 12.5% more CUDA cores than the notebook 1070, at lower core clocks.
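(For reference, a quick back-of-the-envelope check of that 12.5% figure, using the commonly published CUDA-core counts for the desktop RTX 2070 and the notebook GTX 1070; treat the numbers as assumptions if the final mobile 2070 spec ends up different.)

```python
# Rough sanity check of the "12.5% more CUDA cores" claim above.
cores_rtx_2070     = 2304  # desktop RTX 2070 (published spec)
cores_gtx_1070_mxm = 2048  # notebook GTX 1070 (published spec)

print(f"{cores_rtx_2070 / cores_gtx_1070_mxm - 1:.1%}")  # -> 12.5%
```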
Fastidious Reader Notebook Evangelist
Honestly, that should have stayed with the business graphic-arts cards, IMO, at least for the first generation.
Putting all of that into these cards when it will be another generation or two before game engines fully implement it and are able to handle it is just going to result in a bunch of half-baked products.
Meaker@Sager Company Representative
Chicken and egg there, I suppose; the artists are not going to bother if the hardware is not out there to be used.
Fastidious Reader Notebook Evangelist
Meaker@Sager Company Representative
Indeed
Nvidia does have more clout than ever in the gaming space.
https://videocardz.com/77696/exclusive-nvidia-geforce-rtx-2080-ti-editors-day-leaks
Either way, September 14th is the day we'll get benchmarks, and we'll all know for sure.
Fastidious Reader Notebook Evangelist
I'm hearing some talk that all of this is akin to HairWorks: fancy stuff that's usually the first thing you turn off to get better FPS. And with 120Hz and 144Hz screens becoming widespread, that is even more important nowadays.
Will people go for that enhanced visual smoothness instead? Maybe, while topping out at 1440p screens, at least for laptops.
Fastidious Reader Notebook Evangelist
Not to say it isn't innovative; it's just that on gaming laptops the market might not be there when speed is key.
But, as you note, give it a couple more generations and then maybe, like whatever they release in 2020.
After watching the Nvidia stream, I was pretty disappointed about what to expect from the RTX cards. But the good news is that, for anyone who wishes to buy Pascal, the prices will most likely drop.
The fact that Nvidia focused so much on the RTX component throughout the unveil goes to show how they want to market this new series of GPUs. They focused so much on RTX that they didn't even show how much better Turing is in normal gaming performance, only in RTX performance.
It wasn't like 2016, when Nvidia unveiled Pascal and showed off that a GTX 1080 could perform the same as two GTX 980s in SLI. The fact that they focused so much on how amazing RTX is and why every gamer should have it made me really skeptical.
Spec-wise, while there seems to be maybe a 20-30% improvement judging by the number of CUDA cores compared to Pascal, I don't think it will be worth the extra dollars; we all just need to wait for benchmarks. If Pascal still sells when Turing is out, chances are it will still be the best GPU architecture to buy in overall performance per dollar.
And this is a bad way to say it, but because of the lack of competition, Nvidia is kind of becoming the Intel of GPUs; there isn't much competition even from AMD, so I don't think they will release anything soon that has significant improvements. Chances are the next significant improvement will come when AMD competes with Nvidia with the Navi architecture (if Navi is stronger than Turing).
The price of the RTX 2080 is the price of a 1080 Ti. Kind of ridiculous. I suggest staying away from RTX since not many games support it, so you're pretty much paying an early-adopter tax.
P.S. As for Nvidia saying Turing is 6X the performance of Pascal, I think that's only the RTX portion. It sounds too good to be true for the 6X to be overall.
http://forum.notebookreview.com/threads/nvidia-thread.806608/page-54#post-10784625
"Perhaps the most intriguing tidbit from the announcement is that the aforementioned Sky models will support future GPUs such as the "GTX 1180, GTX 1170, RTX 1080, RTX 1070" due to their versatile MXM 3 slots. Had Eurocom mentioned these GPUs before Gamescom, then we would have been quick to label them as placeholder names. However, the reseller is explicitly mentioning these GPU names almost two full days after the public reveal of the desktop RTX series in Cologne.
It's possible that Nvidia will introduce a different naming convention yet again for its next generation of laptop GPUs. At best, the diverging names could simply be an attempt by the chipmaker to better distinguish between its laptop and desktop GPUs since current mobile Pascal GPUs have the exact same names as their desktop counterparts. At worse, however, we could be seeing a relatively minor refresh for mobile gamers."
https://www.notebookcheck.net/Euroc...with-Core-i9-9900K-and-i7-9700K.324038.0.html
Fastidious Reader Notebook Evangelist
It wouldn't surprise me for laptops and desktops to diverge once again after Pascal, with more functionality going toward desktop systems and the ray-tracing abilities either cut down or eliminated from laptop designs.
Check out the thread here for more info:
http://forum.notebookreview.com/threads/nvidia-thread.806608/page-54#post-10784625
Fastidious Reader Notebook Evangelist
Will this 11 series be a new mid-range? Will some of those RTX cards be geared toward small hobbyists or freelance CGI customers?
And it's making me wary about getting a laptop with a 1060 at the same time.
Nvidia and AMD both need to find new ways to do game rendering, and Nvidia is backing the ray-tracing horse heavily, which I suspect will be the right choice in the long run. Better to be the one creating the change than following it.
That seems to me like a very good point at which to begin branching performance out in other ways. As more things get done with AI (Tensor cores) and RT (RT cores), they can scale those out much larger. If (big IF) RT becomes the lighting model of choice in the next 2-3 years, increasing FP32 performance will do very little for performance in new games.
We're starting to hit the limitations of standard raster rendering. We've piled so many hacks onto it that we're literally running out of ideas. Let's say everybody's dream came true and Nvidia could release a monster GPU that's literally twice as fast as the 1080 Ti: what would be the point?
Turn up progressively more expensive ambient occlusion effects? No need with RT. Increase shadow resolution to 16384? No need with RT. More dynamic light sources? No need with RT. We still can't even render mirrors in games properly! It's silly.
It solves an amazing number of problems. In the case of lighting/shadowing, it would actually take load off the FP32 cores, which could then be spent on things like tessellation and higher poly counts. VR performance with RT could jump through the roof (since you don't have to flat-render and warp to the lens any more, wasting pixels).
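To make the shadow point above concrete, here is a minimal, illustrative Python sketch of a ray-traced shadow test (not any engine's or API's actual code; the scene, function names, and geometry are made up for the example). Instead of sampling a finite-resolution shadow map, you cast a ray from the shaded point toward the light and ask whether anything blocks it.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """True if the ray origin + t*direction hits the sphere for some 0 < t < 1
    (i.e. the hit lies between the shaded point and the light)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c          # discriminant of the quadratic |o + t*d - c|^2 = r^2
    if disc < 0:
        return False            # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)   # nearest intersection (simplified sketch)
    return 1e-4 < t < 1.0       # small bias avoids self-shadowing at t ~ 0

def in_shadow(point, light_pos, occluders):
    """Shadow ray: is any occluder between the point and the light?"""
    direction = [light_pos[i] - point[i] for i in range(3)]  # unnormalized, so t=1 is the light
    return any(ray_hits_sphere(point, direction, c, r) for c, r in occluders)

# A sphere floating between the light and two points on a ground plane.
occluders = [((0.0, 1.0, 0.0), 0.5)]
light = (0.0, 5.0, 0.0)
print(in_shadow((0.0, 0.0, 0.0), light, occluders))   # True  (directly below the sphere)
print(in_shadow((3.0, 0.0, 0.0), light, occluders))   # False (off to the side)
```

The accuracy depends only on the geometry you trace against, not on any shadow-map resolution, which is why the "crank the shadow map to 16384" arms race goes away.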
Not to mention, if we can get high ray-tracing performance and tessellating models for everything, that takes a massive amount of load off the artists involved. No need to generate LODs anymore, no more pre-computing light maps, no more restrictions on what lights you can put where, no more weird hacks to have day-night cycles, etc.
I was at the launch at Gamescom on Monday and the response, particularly when the pricing was announced, was extremely positive from the approximately 2,000 people at the event. The focus was on ray-tracing performance, as everyone knows; there were several real-time examples of this in BF V, Shadow of the Tomb Raider, and a couple more. But no specific performance figures in terms of FPS were announced, and this won't happen for a couple of weeks. Even using the RTX GPUs in the multiplayer setups they had in the evening, it wasn't really possible to directly compare how they performed against the 10 series, but the expectation is very positive.
Any reports of Turing GPUs available in laptop form factors, MXM or BGA, are purely speculation at the moment, especially for the so-called GTX 11 series, because the only GPUs that actually exist at the moment are the RTX 2070, 2080 and 2080 Ti.
It's definitely confusing for the public. I'm not suggesting that the discussions have no basis, but I would strongly recommend that any information being thrown around by companies other than Nvidia be treated with more than a pinch of salt, as the actual information simply doesn't exist for mobile solutions at the moment, nor does the hardware physically ;-)
Meaker@Sager Company Representative
Fastidious Reader Notebook Evangelist
Meaker@Sager Company Representative
Depends if you need a machine or not. If you need one now then yes.
Fastidious Reader Notebook Evangelist
For mobile, skipping RT/Tensor makes the most sense. From the looks of the new increased TDPs on the RTX cards (which basically shift everything "up" a model, i.e. RTX 2070 = GTX 1080 power usage), it would not be practical to put them in laptops, especially given that the RTX 2070 is the "entry" ray-tracing model and is probably the biggest chip you could fit (from a power-budget perspective).
I always console those "stuck" like this by pointing out that their new laptop is just as functional as it was when they bought it, and when they started thinking about buying it, and will remain so for years.
There was always going to be something better arriving on the market at some point, and that device is going to have new product issues that your new last gen laptop has already been through.
Often the best time to buy a reliable laptop is near the end of the line for that model, when all the bugs are known and hopefully fixed, stability and performance are known, and there is a wealth of user/owner info posted that will help you get started quickly and without lingering issues - as long as there are no lingering issues for that model.
Buying the latest cutting edge, especially with something as new to the market as RTX, involves so many unknowns that even with assurances about the knowns being beneficial and positive, we don't know enough about the unknowns to speculate on potential problems.
We should wait for real user/owner reviews, tuning and usage tips, and reports of any kinks.
That's why there is so much rampant speculation about RTX: everyone is trying to work out the new realities of the product and its features by comparing it to what we do know. It's a process, one that is healthy and good for gaining the perspective we need to use RTX, or to choose not to use RTX and stay with "last gen" hardware. But this will only get "real" when owners have hands-on experiences to report.
I also always say to buy what you need now when you need it, and don't put it off until some unknown time in the future "when things might be better".
If you are playing the "waiting game", you aren't gaming.
Everybody has known for 20 years that performance only increases by 20-30% or so per generation and that the top models are not priced for perf/dollar. Yet every single time, people are disappointed that performance hasn't literally doubled, as that's the unrealistic bar they always set.
I bought a 980 Ti at release and I knew then that I'd be skipping the next generation (10 series) unless some massive new tech came out that I really needed. It just so happens that the 20 series is both the culmination of two generations of 20-30% increases in perf/watt AND the introduction of game-changing (pun intended) new tech at the same time.
As an example: I honestly feel bad for people when it comes to The Witcher 3, because it's one of the greatest games of our time. Either you had an awesome graphics card and got a great experience out of it, or you were struggling along on a mid-range card at 35fps. It's not a game you can replay all that often (if only due to the length and amount of content), so you really want to make sure you do it "properly" the first time. If you wait too long you'll end up with the Deus Ex 1 problem: incredible game, but if you play it now (or even 10 years ago, when it was still 7 years old) it's just janky and ugly, which really detracts from the experience.
From my perspective, if you keep waiting for the next gen, you run the risk of either:
- Always playing new games as they come out on medium settings, thus never really getting the most out of your gaming experiences.
- Having to wait 1-2 years to fully appreciate the games at high detail, which is hard to do with big titles and especially single-player ones where you also have to dodge spoilers and the like.
yrekabakery Notebook Virtuoso
You get the odd outlier (usually due to big die-shrink jumps like Maxwell -> Pascal), but generally speaking that's the way it goes between generations. Keep in mind that previously we would also have generation revisions which did very little in the way of perf/watt.
e.g. the 680 -> 780 has no perf/watt gain; they just released a bigger chip as the x80 card, which used more power.
"Absolutely agree with what you said, except I do not expect the performance to be that large for the 2080 over the 1080 Ti, or the 2070 over the 1080. Since the name shuffle happened, it is appropriate to compare based on what the names would have been otherwise, in my opinion. Price isn't the issue, the question of timing of trying to force raytracing now through proprietary means as a way to further kick AMD when down (they have nothing coming and even Vega was "eh" when it came out, usually between 1070 and 1080, depending on game optimizations, when released while having way more power draw). Don't get me wrong, I am impressed with the tech Nvidia developed, without a doubt, but now, when it cannot yet hit 60 frames in 1080p with 7nm just around the corner which should pack 35% performance on the new process node and allowing packing more transistors into a single package, I think they really should have waited until the next generation or the one after to release the tech to consumers. I just think it was bad timing on their part.
For tensor cores, my reaction is exactly the opposite. They should have given those to consumers sooner (a volta card). The reason I say this is they are doing DLSS super sampling with tensor cores. Since raytracing frame rates are so low still, early adopters will just turn those off when gaming other than maybe playing through the campaign. But, the tensor cores doing super sampling seems to extend the performance of the cards significantly over the generational 50% (so 30-50% claimed from 1080 to 2080). ( http://www.legitreviews.com/wp-content/uploads/2018/08/rtx2080-performance.jpg ). The problem with this is that it is still just when the games implement DLSS support, which means that this use is limited to newer titles and those still offering this type of support by the game developer of those titles.
That brings me to another point: their implementation of limited asynchronous compute for floating and integer processing in parallel. This is awesome and something they really failed to bring to the table in the past (one of the reasons for not wanting to support DX12, which Nvidia wasn't great at and that MS, in part, based on mantle/vulkan/open source APIs). But, to utilize it, it needs to be done in programming the games, which means even though now present, we are not going to see that used in any game that has not already implemented this type of async compute. So, even this, like DLSS, although it should be applauded, will not help with current title performance, most likely.
And that brings me back to the post that I did last video over comparing shader counts, memory bandwidths, etc., to the prior generation based on price. When stacking up the 1080 Ti to the 2080, the 1080 to the 2070, and the Titan Xp to the 2080 Ti, the only one with a clear performance boost seems to be the 2080 Ti. Now, as friends have pointed out to me, and other videos, Nvidia did rework their shader units. This rework could increase performance and so having a lower number of cuda cores does not necessarily mean that it will have worse performance. For that, we have to wait for reviews. That is a fair point and so, insofar as that goes, it is a wait and see game.
We also talked about the implementation of the NVLink. Nvidia touted the fact it was 50x faster than SLI. That is great, but it is also 1/3 the speed of the full speed cables in enterprise (50up/50down instead of 150up/150down). Now this was likely done on pricing to make the cables more affordable and more mass producible for the gaming community. That, overall, is fine. And the speed increase will help in games where there are bandwidth limitations with current technologies. That being said, this is something that will need to be tested in at least two scenarios: 1) where on a mainstream build in 8x/8x config, the games are tested, and 2) on an HEDT rig with 16x/16x for the cards is tested (and possibly other configs on an HEDT rig). A good comparison might include the standard SLI, the LED and/or HB bridges, and then NVLink.
Because of the raw stats comparing the 1080 Ti to the 2080 and 1080 to the 2070, my friends and I have thought that going from those to the price equivalent RTX cards are likely a side grade, unless using NVLink in a dual card setup. Any additional performance is not likely worth it to current gen Pascal owners. Whether the price is justified for people with similar setups, or just a 1080 Ti to 2080 Ti, it would be good to remind them what they are really doing is going from the prior gen to the current gen and up one step on the product stack. Whether they would have previously bought the Titan class is the real question (in other words, ignore the naming scheme and focus on performance per price).
In summation, Nvidia should have instead left off the raytracing cores this gen, increased die size to a lesser degree, and given just more shaders and tensor cores while waiting for 7nm for raytracing introduction. Maybe introduce raytracing units on a special sku professional product this generation, much like Volta did with tensor cores, then bring it to consumers the following generation (maybe keeping a tensor core line and a raytracing line for professionals). I agree with changing the naming scheme, but doing it this way, you would get no complaints, would have given consumers exactly what they expected, and push off hearing the grumblings. Instead, they wanted to take the downtime for the sidegrade in performance to likely get their proprietary gameworks raytracing adopted to make it the standard so that if AMD came back with a card supporting raytracing at 7nm or with large increases in compute, they would have less competition with competing raytracing standards. This arguably is also why they didn't use this moment and their clout to force NUMA memory support for GPUs with independent software vendors, which would allow AMD to use multi-die GPUs to get back in the game, but that is a different discussion entirely. Hope this helps move the conversation along."
Here is a proper comparison with what these cards are really replacing:
"They just took out the Titan Xp line and made that the Ti, not just on pricing but on specs. You have a 13% increase in CUDA cores (FPU or shader units; 4352 vs 3840) and a 12.5% increase in memory bandwidth versus the Titan Xp (616 vs 548 GB/s). The pricing is also in line with the Titan Xp. This means the Titan Turing should be around $3,000, like the Titan V was. That means performance should be compared between the Titan Xp and the 2080 Ti, between the 1080 Ti and the 2080, and between the 1080 and the 2070.
If that comparison is adopted, due to pricing, etc., and a likely $3K Titan Turing, then the comparison is the 2080 with 2944 shader units versus the 1080 Ti with 3584 shader units (18% fewer shader units), and the 2080's 448 GB/s of memory bandwidth versus the 1080 Ti's 484 GB/s (7.5% less memory bandwidth). For the 2070, there are 2304 shader units and 448 GB/s of memory bandwidth, compared to the 1080's 2560 shader units and 320 GB/s (a 10% decrease in shader units, but a 40% increase in memory bandwidth)."
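As a sanity check of the deltas quoted above, here is a small Python snippet using the commonly published shader counts and memory bandwidths (GB/s); the Titan Xp bandwidth of ~548 GB/s is an assumption on my part for that comparison, and the numbers should be treated as approximate.

```python
# Verify the percentage deltas quoted in the post above.
specs = {                       # (shader units, memory bandwidth in GB/s)
    "RTX 2080 Ti": (4352, 616),
    "Titan Xp":    (3840, 548),
    "RTX 2080":    (2944, 448),
    "GTX 1080 Ti": (3584, 484),
    "RTX 2070":    (2304, 448),
    "GTX 1080":    (2560, 320),
}

def delta(new, old):
    (n_cores, n_bw), (o_cores, o_bw) = specs[new], specs[old]
    return n_cores / o_cores - 1, n_bw / o_bw - 1

for new, old in [("RTX 2080 Ti", "Titan Xp"),
                 ("RTX 2080", "GTX 1080 Ti"),
                 ("RTX 2070", "GTX 1080")]:
    cores, bw = delta(new, old)
    print(f"{new} vs {old}: {cores:+.1%} shaders, {bw:+.1%} bandwidth")

# RTX 2080 Ti vs Titan Xp:  +13.3% shaders, +12.4% bandwidth
# RTX 2080 vs GTX 1080 Ti:  -17.9% shaders,  -7.4% bandwidth
# RTX 2070 vs GTX 1080:     -10.0% shaders, +40.0% bandwidth
```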
So I really am not seeing why you're pushing the idea that Nvidia did something good this gen. IT MAKES NO SENSE! This is the comparison chart, but as UFD pointed out, they do not give a reference point.
Real-time ray tracing is implemented at the API level by DirectX and Vulkan (or the OptiX renderer). In March when it was announced, you needed 4x GV100 cores to run DX Ray Tracing in real time (24fps). You can run DXR on Pascal if you want; it's just really slow at it.
This was announced back in March and AMD announced their own way to accelerate it at the same time. You can see it right here: https://github.com/GPUOpen-LibrariesAndSDKs/RadeonRays_SDK
The difference is that Nvidia has gone and built specific hardware to accelerate ray tracing a LOT, which has sped up the entire timeline.
Who knows how it affects AMD at this point; it'll depend on how far down the line they are committed to their in-development architecture. That being said, generally speaking AMD has been very good at concurrent workloads and compute tasks, so AMD would probably benefit greatly from this when they release their own new cards with some form of RT acceleration. Both DirectX and Vulkan have specifically mentioned implementing ray tracing as a compute shader function, as that's exactly what it is: a compute workload.
Think about it: the Quadro RTX and GeForce RTX cards have only been in dev hands for maybe a few weeks now. Even devs who were approached in March to implement RT code wouldn't have been able to test their code in a real scenario until a week ago. The fact that the game demos didn't crash spectacularly is basically a miracle. This applies at both the engine level AND the game-design level.
As far as 7nm goes, I expect it'll make it into Tesla and Quadro cards first, with a significant wait until consumer cards are built on it. The same pattern was followed for the last few generations.
I'd wager that even if we get a 7nm core by Q4 next year, it'll be for a new V100 successor and not a Turing successor. A 7nm GeForce card may not arrive until 2020, and I daresay in 6 months people will get sick of waiting.
Most people also forget that games that don't want to implement RT for lighting/shadows can use it for other non-visual purposes like sound and AI.
Nvidia certainly has the clout to force the issue, and if that's what it takes to get new tech into games, then so be it. Devs will probably comply gratefully, because a full RT engine would mean significantly reduced workloads for artists. No more specular maps, no light-map compile times, no fake lights in scenes: what a dream!
yrekabakery Notebook Virtuoso
The 680 to 780 had no power-efficiency gain because they're the same Kepler architecture. The 700 series was a Kepler refresh aside from the 750/750 Ti.
You keep promoting these cards using phrases like "probably get away with it" and "unless there's a big new feature" and "new games that release with RT will simply be unplayable (with RT on) on older hardware. If you want the shiny new feature, you literally need the new cards to do it." That is all persuasive language meant to entice or to make another feel inferior. As written, "new games that release with RT will simply be unplayable" forgets that to many gamers 30 frames per second IS unplayable. I'll address that point shortly.
But the point is, you are literally trying to sell the new implementation and cards without hard performance numbers, then trying to say that anyone using logical analysis of publicly available information to estimate performance is wrong to do so "because it might be different." Give me a break. No games will have RT at launch, and DLSS requires specific per-game packages from Nvidia built with supercomputer AI training, which will likely balloon driver sizes and relies on Nvidia being willing to keep devoting supercomputer time to support it. That means results will not just vary with how well the AI can optimize its algorithm for filling in half-rendered frames; performance could also decrease in the future if Nvidia stops caring (or it can be a way to sandbag the 2000-series cards to make you want to buy the shiny new 3000-series cards, at which point your comments apply again at 7nm).
You are promoting it while trying to give yourself weasel room by saying you hedged. Don't play games; say what you mean, while directly acknowledging it might not happen the way described, and be ready to eat your words if it doesn't. That is what I try to do.
But there are other aspects of RT that do not smell right either. Yes, what it can do in its version of hybrid RT is impressive, but this isn't full ray tracing. It is part ray-traced, part rasterized overlay, and part deep-learning fill-in of the scene through the same tech that does denoising. It gives probably the closest possible representation of ray tracing without actually being ray tracing. But it IS NOT RAYTRACING. With so many moving parts, you have to rely on the driver and the per-game AI supercomputer code, trust the implementation of GameWorks, etc.
Then there is the part about Nvidia not being good at DX12 and async compute, but all of a sudden pushing DX12.5 with DXR in it. They literally used their clout for the past generation or two (sometime around Maxwell and all of Pascal) to keep support on DX11 and not move forward. Hence, they do not act to drive things forward, only to drive their profits. What they have done with these new cards is increase their ability to do async, as seen through floating-point calculations performed at the same time as integer operations, and added a new feature to the library, ray tracing, while promoting the use of their proprietary GameWorks library over the use of Vulkan, etc. Maybe you haven't seen the history of Nvidia, so here are a couple of in-depth videos to fill you in:
Now, we can see what he got right and wrong in his analyses of the future (the then-future). But the points on how Nvidia operates are absolutes. Take that together with the information on pushing GameWorks ray tracing and using hype to push sales without performance data, and do with it what you will.
http://www.pcgameshardware.de/Grafi...ormance-in-Shadow-of-the-Tomb-Raider-1263244/
https://www.pcgamesn.com/nvidia-rtx-2080-ti-hands-on
It is called sampling. You can sample products under NDA. Then you change the conversation from these cards, drivers, and implementation to professional commercial products on 7nm, trying to make the wait sound sooooo looooong so that people should just buy these cards, which do not show any real significant performance gains, especially after my explanation of the shifting of names in the stack. That explanation shows that the real comparison is by price point: you don't compare a 1080 to a 2080, you compare the 1080 Ti to the 2080, and when you do that, you get mighty underwhelmed really ****ing quick. But you didn't address my analysis on that AT ALL, instead pointing over here and over there. You ignore the meat of my argument while picking around the edges. Well, maybe you would prefer listening to someone with a bit more clout, like Jay, tell you RT performance isn't getting any better.
Watch this video.
You took part of his argument (drivers, games, and hardware), but that doesn't mean it will be better later. See, these games are not designed from the ground up to support it. That can affect performance, but there is just as high a likelihood that future ground-up implementations cause a heavier load because they are designed to fully use the tech instead of going halfway. That is easily as likely a scenario. Overall, unless you are doing SLI on a 2080 Ti, like I mentioned, I'm betting the game is unplayable with ray tracing. It will not give the frames needed, which makes it a gimmick. Even Jay said that 30FPS at 1080p is impressive, and that regular compute is probably phenomenal, but that does not mean you are going to get some huge jump later on. That is called a pipe dream. Respect it for what it is and what it does, and **** the hype! That means looking at how gamers will use it, and gamers won't be using ray tracing.
Then you move on to "let's use the card in ways not marketed to shine that turd." A turd is a turd is a turd. It may be impressive and shiny at what it does, but that doesn't make it any less of what it is.
They did use their clout; refer back to my point about GameWorks. Also, because developers have to support both paths, since ray tracing produces unplayable frame rates at the moment, it doubled developers' workloads rather than lightening them: they still must build games without it because the majority of the market won't have products that can utilize it. Talk about PR ******** spewing all over this response.
Also, notice how I quoted your entire argument and addressed it piece by piece. This is so that people can see I'm addressing your full argument and not misrepresenting your statements. You should try it sometime.
yrekabakery Notebook Virtuoso
yrekabakery Notebook Virtuoso
Oh and also, way to contradict yourself.
Then, if you want to talk about the risk of doing multiple gens on the same node, how many gens were on the long-in-the-tooth 28nm node? The only risk is that they develop a ****tier architecture than the last one. But what they did this time, instead of doing one generation on DUV 7nm followed by one on EUV 7nm (which is supposed to be about a 15% improvement due to process changes and not needing quad patterning, etc.), is give us this turd on 12nm and then go to EUV 7nm, starting with the highest-margin products to recoup design costs, because that is business. But that doesn't mean we should accept their shuffling of names in the stack, the pricing per name, and the claim that what they are giving us is worth what they say it is. So, if they are so scared they can't design a GPU on the same node anymore, maybe they are in the wrong business.
What they're doing is pushing people to use DXR and Vulkan ray tracing because they know AMD cannot compete there. But unlike other GameWorks effects, that doesn't prevent AMD's own implementation from working. Hell, AMD might build an even faster implementation for all we know.
However, it didn't run anywhere near real time on a single card except for very simple scenes. Even the best Quadro of the time (GV100) was not enough as a single unit. That's what the RT cores are designed to speed up. What that means is that any optimisations you could make to your DXR implementation in relation to the RT specs were all theoretical. It would be like writing code for an ASIC which won't arrive for 6 months, when all you have is a spec sheet and no actual ASIC to test on. You're never going to really know how things run until you get the hardware.
The common suggestion is, let's defer RT technology until 7nm so you can build in more performance in 12-24 months. At what point is the performance "acceptable" enough to begin putting in RT?
Let's assume RT is as horrible as you think it is and we get 1080p@30fps in SoTR. Let's assume they double the RT performance to 1080p@60fps on 7nm in 12-24 months. People will STILL complain about that because it's "only 1080p". So at 4K you're looking at 20-30fps, assuming it scales linearly with resolution (20 raw, 30 with DLSS when you render at 66% of 4K).
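For what it's worth, here is a rough Python back-of-the-envelope of that scaling argument, under the (big) assumption that ray-tracing cost scales linearly with rendered pixel count and reading "66% of 4K" as a per-axis render scale. It lands in roughly the same range as the 20-30fps estimate above, a bit lower for native 4K.

```python
# Back-of-the-envelope resolution scaling, assuming RT cost is linear in pixels.
fps_1080p = 60                                    # hypothetical doubled RT performance
px_1080p  = 1920 * 1080
px_4k     = 3840 * 2160
px_dlss   = int(3840 * 0.66) * int(2160 * 0.66)   # ~66% per-axis render scale

print(f"native 4K:   {fps_1080p * px_1080p / px_4k:.0f} fps")    # ~15 fps
print(f"DLSS-ish 4K: {fps_1080p * px_1080p / px_dlss:.0f} fps")  # ~34 fps
```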
That 100% increase in performance only applies to 12-24 month old games, not NEW games for that time. Exactly what kind of magic rabbit do you want Nvidia to pull out of their hats here?
Look. I get it....
Everyone wants the 2080 Ti to go twice as fast as the 1080 Ti and would be happy if it cost $700. Lots of people don't give a toss about ray tracing; they think it's a waste of time. Some people would just like their games that run at 4K@40fps now to run at 4K@80fps instead.
But at some point the ride is going to stop on increasing performance and sometimes you just have to change the way things are done. Ray-tracing does that and it's an admirable goal.
The price increases suck, and maybe people think it's a dud generation (maybe it will be). But things have to change some time.
As one person put it to me:
"If Nvidia renamed the 2080ti to a Titan people would be more accepting of the price. A Titan XP sells for £1,150 vs £1,100 for the 2080ti.
The cuda difference from 2080 to 2080ti is 47%.
GTX 1000 series it was 40%.
GTX 900 series it was 37%.
GTX 700 series it was 25%.
(The difference vs titan was 50%, 50% and 25% respectively)
This is the biggest performance difference, in raw CUDA, that has ever existed between a Ti model and a non-Ti - pointing to it being a Titan. Do you have any thoughts on why they didn't just name this a Titan? It seems like they've shot themselves in the foot - advertising a gaming card, that is actually a prosumer card, with a prosumer price."
Would you like to try again?
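For anyone who wants to check those percentages, here is a quick Python snippet using the widely published desktop CUDA-core counts for each pair (treat the figures as assumptions to the extent any of them are misremembered).

```python
# Check the quoted CUDA-core gaps between each generation's x80 card and its Ti.
pairs = {
    "RTX 2080 Ti vs RTX 2080": (4352, 2944),
    "GTX 1080 Ti vs GTX 1080": (3584, 2560),
    "GTX 980 Ti vs GTX 980":   (2816, 2048),
    "GTX 780 Ti vs GTX 780":   (2880, 2304),
}
for name, (ti, non_ti) in pairs.items():
    print(f"{name}: +{ti / non_ti - 1:.1%}")

# RTX 2080 Ti vs RTX 2080: +47.8%
# GTX 1080 Ti vs GTX 1080: +40.0%
# GTX 980 Ti vs GTX 980:   +37.5%
# GTX 780 Ti vs GTX 780:   +25.0%
```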
Gotta remember though, the problem w/ ignorance is that it picks up confidence as it goes along, and when people start playing Captain Murica w/ a shield of ignorance, it just gets messy.
You can lead a horse to water, but there's no reason to pull their head up if they refuse to come up for air.
Anyways, BS aside, no one's looking for twice as fast (yes, it'd be great); given the current legroom, even 30% would be great. I'm hoping it turns out to be somewhat driver-related, but it's turning out to be a pretty sad year since the tech is all in the middle of swapping gears.
If you need a laptop now, then now is the time to buy. If you need a laptop in 6 months, then you should have a launch and at least a slightly mature product by which to gauge whether it's right for you or not. It's an impossible question to answer if you just need to know whether you would buy a 1080 now and then regret it later, but you know what's available now, how the cards run in current games, what their pricing and availability is, and so on.
Meaker@Sager Company Representative
Without even having the desktop cards fully benchmarked, and without inside information, you can't even guess at the moment. Perhaps save the final judgement for the benchmarks.
For price brackets now, you're correct, because everything is shifted "up" a model right from the start. Previously the 1080 released at 980 Ti pricing, and then by the time the 1080 Ti released everything was shuffled back.
Nvidia's pattern has always been to release "the fastest single GPU". It's possible the immediate release of the 2080 Ti is because the Titan V already holds that crown and the regular 2080 can't beat it reliably.
Leaving room to release a fully unlocked chip also means they can keep releasing the "fastest" GPU a couple of times.
It wouldn't do anything for your average gamer, but it would still reinforce their mind-share by simply having the "fastest" GPU, regardless of cost.
The same point holds for the 2070 compared to the 1080. By the time we are done, this gen looks like a turd with a gimmick attached that cannot be used for gaming unless you are satisfied with sub-60 frames at 1080p. Its ability to even reach that IS impressive, but that doesn't make it ready for prime time or something consumers should buy AT ALL. I praise what they accomplished, then I ridicule what they are doing on price and how the cards will actually be used by consumers, because that is where it matters for consumers. Casual gamers may care less, but it can be argued that if you are casual, you shouldn't necessarily be spending your money on enthusiast products, and that you would have an equally pleasurable experience on something with lower power, thereby freeing up cash for other things you may want.
Now, if Nvidia gave guidance that they will put out a card in between the 2080 and the Ti, cutting down the Ti but giving a boost in performance that sits between the two cards, and only slightly more expensive than the 2080 (or shifting down the cost of the 2080 and sliding that card in at $50 or $100 over the 2080's intro price), then consumers would gripe about the price but would chill out. But with the name shifting, I think people doubt that and think this is just to get more margin on the Ti while trying to force that huge premium. That is a legitimate concern for consumers.
But either way, if the 1080 Ti costs what a 2080 does, or is in the same ballpark, people will make their decision on the basis of which of those products gives the best bang for the buck. If DLSS has the same issues as hardware acceleration in Adobe, consumers will reject it, or at least some will. And as one UFD video showed, the performance of the 1080 Ti in the games selected, with settings at high or max, already hits 4K@60. So the benefit of buying new over old is lessened in a significant way.
One person mentioned that Nvidia timed its exit from the crypto-mining bubble wrong, that an AIB partner returned 300,000 chips to Nvidia, and that Nvidia could be sitting on a stock of Pascal right now. Doing anything more than a sidegrade would cause further depreciation, and they already predicted a large write-down for Q3 and wrote some down in Q2. So, instead of pushing forward with a new card with just Tensor cores and a higher CUDA count (floating-point units), doing a sidegrade and introducing a new tech, while pushing for adoption of their proprietary ray-tracing library in GameWorks, while having zero competition from AMD for the foreseeable future, and while pricing it high enough to still clear the old inventory, makes a lot of business sense to me.
Also, you are GREATLY misconstruing the dies used. The Titan Xp being a 102, not a 100, and the 1080 Ti being a 102, not a 100, shows that making the Titan V a cut-down 100 is an extreme departure. They left no room, and a Titan Turing on a cut-down TU100 chip would be a $3K card. What people liked about the fuller GP102 being the Titan Xp is that the slightly cut-down x80 Ti would sandwich between that fuller 102 chip and the 104-based x80 chip, while being optimized for gaming performance and delivering roughly 30% over the x80 series. It is what they got used to.
Now, I understand that because the Titan is now the cut-down 100 chip, which is a much larger die and thereby has higher costs, as well as being the full Turing and the later Quadro flagship, you would want to charge $3K, about a third of the Quadro flagship's price. It makes for a larger-cost halo product, with better margins and fewer wasted 100 dies in production. That is fine. But you are messing with consumer expectations for the 102 line by doing so. With the 2080 Ti now sitting in the stack where the Titan used to for 102 dies, while Nvidia waits for defective dies to accumulate before it can release a further cut-down part, people question whether such a cut-down product will even exist. And that has the market nervous that they are cutting that product out, getting rid of the defective 102s that used to make the Ti series (or, if yields were good enough, purposely gimped dies, which is fine), while taking the margins on the cut-down 100 dies instead and pushing the costs of the stack up for everyone.
Also, consumers are not so stupid as to throw money at something just because it is labeled the "fastest." (Some are, but with wages going down, trade wars going on, etc., it doesn't make sense for the mass of consumers to waste money in that manner, especially since most would wait the 4-9 months for the cut-down 102-die Ti products rather than buy the Titan series; and when they see, by and large, that the 2080 Ti is in the Titan Xp's spot, you will see lower sales.)
But continue trying to obfuscate; people can look up what I've said for themselves. Also, as you said, it wouldn't do anything for gamers. So you already see and know what is going on here. You sound like either a dyed-in-the-wool fanboy or a PR rep for Nvidia.
Further, you can compare the CUDA counts and memory bandwidth. Those should properly be given caveats, like the claimed shader revision. All of that is reasonable. What this does is set the stage for how reviewers will test the hardware, so that they can meet their consumers' expectations in their reviews, not just Nvidia's expectations of their coverage. They are content creators, and anyone who consumes their review content is who they must keep coming back. Nvidia might threaten to not give out review units; we've seen that before. Between their and AMD's actions, that is largely why Gamers Nexus no longer does those types of reviews with staged embargoes acting as adverts for products.
So, obviously, the final judgment rests on the numbers, but that does not mean these discussions are not worthwhile, including shaping and managing consumer expectations.