So Nvidia says Ampere has 3rd-gen tensor cores while Turing only has 2nd-gen, which raises a few questions:
1) What do "faster tensor cores" actually amount to? Suppose GPU A upscales with DLSS using its tensor cores, and GPU B has the same rasterised performance but "better" tensor cores. What difference would you expect between them: more FPS with DLSS on, or more accurate upscaling from lower input resolutions? (See the matrix-multiply sketch after these questions for what the tensor cores are actually doing.)
2) DLSS reconstructs a low-res image rather than just scaling it, but isn't the result similar to, say, resolution scaling with TAA plus smart image sharpening (AMD RIS, Nvidia's Freestyle sharpen filter, etc.)? Technically, both approaches approximate pixels and "guess" the missing information. Please correct me if I'm wrong here. (See the sharpening sketch below for what the simpler path boils down to.)
3) The early DLSS implementations weren't great, and Nvidia said at least one of them (the version shipped in Control) wasn't using tensor cores at all; everything ran on regular shader cores. They promised DLSS 2.0 and beyond would run on tensor cores and be superior, and it is, but technically doesn't that show DLSS doesn't strictly need specialised hardware? Or am I missing something here? (See the plain shader-core sketch below.)
4) AMD has an upcoming open-source, DirectML-based upscaling feature that allegedly doesn't need specialised hardware. So either they worked out a way for the machine-learning algorithm to run without dedicated tensor instructions, or they have just proved that regular FP32/INT32 cores can handle tensor workloads?
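To make question 1 concrete, here is a minimal CUDA sketch of the primitive that tensor cores accelerate: a warp-wide mixed-precision matrix multiply-accumulate through the WMMA API. This is not DLSS code; the kernel name, tile size and launch shape are just illustrative.

```
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes a 16x16x16 tile D = A*B + C on the tensor cores.
// A and B are 16x16 row-major half matrices, the accumulator is FP32.
// Requires a tensor-core-capable GPU (Volta/Turing/Ampere), e.g. compile with -arch=sm_75.
__global__ void wmma_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // C = 0
    wmma::load_matrix_sync(a_frag, a, 16);           // load A tile (leading dimension 16)
    wmma::load_matrix_sync(b_frag, b, 16);           // load B tile
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C in a few warp-wide ops
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

// Launched with exactly one warp for this single tile: wmma_tile<<<1, 32>>>(dA, dB, dC);
```

As far as I understand it, "faster" or newer-generation tensor cores mostly means higher throughput on these MMA operations plus support for more data formats (Ampere adds TF32 and fine-grained structured sparsity, for example). Since the DLSS network itself is the same on both architectures, you would expect the difference to show up as lower per-frame DLSS cost, i.e. more FPS, rather than different image quality.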
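For question 2, this is roughly what the "upscale + TAA + sharpen" path boils down to after the upscale: a fixed spatial filter that amplifies high frequencies already present in the frame, with no temporal history and no learned prior. A toy unsharp-mask kernel, purely for illustration (not RIS or Freestyle code):

```
// Naive unsharp mask on a single-channel image: out = in + amount * (in - blur(in)).
// Border pixels are skipped for simplicity.
__global__ void unsharp_mask(const float *in, float *out, int w, int h, float amount) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;

    // 3x3 box blur as a cheap low-pass estimate.
    float blur = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            blur += in[(y + dy) * w + (x + dx)];
    blur /= 9.0f;

    float centre = in[y * w + x];
    out[y * w + x] = centre + amount * (centre - blur);   // amplify high frequencies
}
```

The difference, going by Nvidia's own description of DLSS 2.x, is that DLSS feeds the current low-resolution frame plus motion vectors and previous frames into a trained network, so it reconstructs detail from temporal samples rather than only amplifying what a single frame already contains. Both "guess", but one guess is a fixed filter and the other is a learned, history-aware reconstruction.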
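On questions 3 and 4: the same matrix math runs fine on ordinary FP32 shader/CUDA cores, it just takes many more scalar instructions. A like-for-like sketch of the same 16x16x16 tile as in the WMMA sketch, on plain cores (again illustrative, not anyone's shipping code):

```
// The same 16x16x16 tile, D = A*B, computed on ordinary FP32 cores:
// one thread per output element, 16 multiply-adds each (~4096 scalar FMAs in total),
// instead of a handful of warp-wide tensor-core MMA instructions.
__global__ void tile_fp32(const float *a, const float *b, float *c) {
    int row = threadIdx.y;   // launch as a single 16x16 thread block
    int col = threadIdx.x;
    float acc = 0.0f;
    for (int k = 0; k < 16; ++k)
        acc += a[row * 16 + k] * b[k * 16 + col];
    c[row * 16 + col] = acc;
}

// Launch: tile_fp32<<<1, dim3(16, 16)>>>(dA, dB, dC);
```

So dedicated hardware isn't a hard requirement for this kind of math, it's a throughput question; presumably that is how a DirectML-based approach can target GPUs without tensor cores, and why the shader-core DLSS variant in Control could exist at all, just at a bigger per-frame cost.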
This is becoming a pattern with Nvidia: first they released proprietary hardware for variable refresh (G-Sync), AMD's FreeSync achieved the same thing through the display controller and the VESA Adaptive-Sync standard, and Nvidia eventually went on to adopt that standard.
Then they were all about RTX, until CryEngine showed ray tracing running without needing RTX cards.
Now are we seeing the same thing with DLSS?