I've been comparing nVidia and ATI cards lately for my laptop. Right now ATI holds more raw computing power and supports DirectX 11 (more future-proof gaming cards, IMO), while nVidia has a lot of games supporting its PhysX technology, but their newest cards only support DirectX 10.1 and have fewer shaders.
So:
- I'm wondering: once DirectX 11 is in general use in games, will more developers hop over to DirectCompute and gradually leave nVidia's PhysX technology behind?
- CUDA vs Stream. I've seen more CUDA capability than ATI/AMD's Stream, like how it speeds up rotation/scaling in Photoshop CS4. Is CUDA really a better GPGPU library than Stream, or does Stream simply not get as much exposure as CUDA?
Opinions/comments?
I think this is the correct section, but if a mod feels this thread doesn't belong here, feel free to move it.
-
nVidia's shaders operate differently than ATI's, so you can't directly compare them. The old conversion approximation was 5 ATI shaders to 1 nVidia shader (so a part with 800 ATI stream processors would count, very roughly, as about 160 nVidia shaders), but you'd really have to look at in-game comparisons to differentiate between close models.
-
Since we're talking laptops, you may also want to take power into account... an ATI card uses MUCH less power for the performance than an nVidia card.
-
That's because nVidia slacked off and kept using larger manufacturing processes without changing their design. nVidia's newer 40nm parts are much more power efficient than the previous generations.
-
I've heard that, given the amount of raw power it takes to actually run DX11 fully and still be able to turn up the settings, DX9 and DX10 will be here for a while, at least as long as the Xbox 360 sticks around, which it will for a couple more years, since all of its rendering is DX9. DX11 has minimal benefits over DX10 right now, IMO; it mainly introduces better liquid physics and generally better physics for things that are hard for GPUs to simulate well, like flags, water, and trees swaying.
-
So I guess for now DirectX 11 features won't be that useful, given that mobile graphics cards have less power than desktop ones.
How about other, non-game applications? I don't really see much hype around ATI's GPGPU capability, but I see a lot of CUDA material (at least on nVidia's site). -
-
Oh, NVIDIA and their re-badging!
-
Nvidia is in the same predicament with their GPUs as AMD is with their CPUs.
-
That's cool.
What's the research about, if I may ask? -
-
Nvidia's GT200, while it doesn't improve much over the G98 core in terms of regular gaming features, does include improvements to its CUDA-capable compute units. Nvidia probably saw the potential of the money they can make from CUDA and neglected the gaming features a little bit.
OpenCL is supported on top of both CUDA and Stream too. In terms of the programming model, CUDA is simpler than Stream, at least according to CUDA people (with a strong possibility of bias). Either way, it's a fact that CUDA is ahead of ATI/AMD's Stream right now: more software takes advantage of CUDA than of Stream. The capability is there for Stream, but it doesn't seem to be marketed properly (see the OpenCL sketch at the end of this post). -
If you guys have any other GPGPU questions, fire away; I may be able to answer them, or my supervising professor will.
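Here's a minimal sketch of what the vendor-neutral OpenCL route looks like: a vector add written from memory against the OpenCL 1.0 C API. The kernel name (vec_add), the array size and the missing error handling are just for illustration, so treat it as a sketch rather than production code. The key point is that the kernel source gets compiled at runtime by whichever OpenCL driver is installed, NVIDIA's (distributed alongside CUDA) or ATI/AMD's (alongside Stream), so the same program should run on either brand of GPU.

/* Minimal OpenCL vector add -- illustrative sketch only.
 * Build with something like: gcc vecadd.c -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

static const char *kernel_src =
    "__kernel void vec_add(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *c)\n"
    "{\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Take the first platform and the first GPU it exposes,
     * whoever the vendor happens to be. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Compile the kernel source at runtime for whatever GPU we found. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vec_add", NULL);

    /* Copy the inputs over, run one work-item per element, read back. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %f (expected 30.0)\n", c[10]);
    return 0;
}

The host-side setup is noticeably longer than CUDA's runtime API, which is part of why people call CUDA "simpler", but nothing in it is tied to one vendor. -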
Even if CUDA came out earlier, it locks you into Nvidia. OpenCL is vendor-agnostic and should be much more forward-compatible with hardware: you can put whatever the fastest hardware is at the time into your system and it'll run. With CUDA, you're stuck with whatever the fastest Nvidia hardware is.
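To make the "agnostic" part concrete, here is a rough sketch (assuming some OpenCL driver is installed; error handling omitted and array sizes arbitrary) that just asks the runtime what is there and lists every GPU it finds, NVIDIA or ATI; the same binary keeps working when you swap the card for whatever is fastest that year.

/* List every OpenCL-capable GPU from every installed vendor driver.
 * Build with something like: gcc cldevices.c -lOpenCL
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);   /* NVIDIA, AMD/ATI, ... */

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char vendor[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);

        /* Print whatever GPUs this vendor's driver exposes. */
        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("%s: %s\n", vendor, name);
        }
    }
    return 0;
}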
-
Technological advances in software typically lag technological advances in hardware. I believe GPGPU will be limited to specialized applications until the mainstream user adopts more powerful GPUs. Even on a mainstream dedicated *desktop* graphics card, GPGPU performance isn't too impressive, particularly if your application requires double precision. Not denying that GPGPU is becoming more common. The point is just that your average Joe's graphics hardware will remain so weak (for the next 2, 3, or 4 years) that GPGPU will not be critical to his computing experience.
The spreading of platform-independent interfaces such as OpenCL (or DirectCompute if we restrict ourselves to Windows) is inevitable. In a few years, everyone will have GPUs that are capable of OpenCL or DirectCompute. At that point, I see little reason for most developers to target vendor-specific APIs. -
Also, I believe that if Apple is as serious as I think they are about implementing GPGPU via OpenCL in Snow Leopard and beyond, it could be a very good thing for OpenCL. All it really needs is for Apple to show the public a good way to apply OpenCL in the mainstream, for video encoding/decoding or for accelerating various functions of Apple's own code, and it's in the bag for them. I think that if they get a successful app that actually uses OpenCL and people see how much faster it really is at certain tasks... somebody's bottom line is going to look very rosy. And everyone knows, like it or not, that a LOT of companies copy Apple's every move.
-
We need more facts about ION 2 before we know whether it's possible or not, but if it works, it would deal with the problem of being locked into buying the fastest Nvidia hardware. -
CUDA was stable before OpenCL, so Nvidia has a head start there, but software for OpenCL is starting to appear too. See for example LuxRender:
http://www.luxrender.net/forum/viewtopic.php?f=13&t=3439
From what I've learned, the GPU performance difference is due mainly to the number of shaders, and it will be one of the most important specs to check in future purchases.
Say you want to render your AVCHD holiday movie; it will need some sort of rendering acceleration. For example, the simple $39 video-correction tool http://www.vreveal.com/ uses CUDA. PowerDirector already uses CUDA, and on AMD it needs Avivo. The next Adobe Premiere will have acceleration, apparently CUDA-only at first, but OpenCL support will come too.