This thread is turning out to be more interesting than the "halp halp my laptop's on fire" and "why ssd > hdd :drool:" threads that populate this subforum.
FLOPS don't translate directly into gaming prowess. From what I've been reading over the last couple of days, it seems like ATI's been adding more muscle to its cards and mostly ignoring the GPGPU game, while nVidia's been working on making better tools and a more flexible architecture for CUDA. Few, if any, people seem to want to work with Brook/Stream. From the forums I've skimmed, CUDA discussions tend to be about optimization and enabling new capabilities, while Brook/Stream discussions tend to be requests for help debugging lower-level code. Now, I don't think this is because the ATI cards are any less capable. It just looks like nVidia got its foot in the door first and built a very developer-friendly architecture that people wanted to improve, while ATI focused on making its cards faster and never saw the boat leaving.
Some people love OpenCL because it's an open standard and will (mostly) work across platforms. On the functions that do work properly in OpenCL, performance seems to be neck and neck. Problem is, there isn't much OpenCL does right just yet, and the documentation isn't very good. Since I like dumbing things down, I'll say it's something along the lines of Windows 7 vs. Linux 10 years ago. OpenCL clients are definitely very capable, but one well-known project tends to favor CUDA for ease of programming. The BOINC project list here should give you an idea of the nVidia/CUDA tilt.
OpenCL ultimately has plenty of potential, but the majority of GPU-optimized distributed computing projects I've seen leave OpenCL to the crazies (like dnetc).
-
598 GFLOPS = 0.598 TFLOPS > 0.1 TFLOPS, by the way.
OpenCL and DirectCompute are rather new developments, and that's the main reason there isn't much usage of them yet - CUDA has already been around for three years, while OpenCL SDKs only arrived around the end of last year. In the future, though, I think they will become more common, simply because it makes good sense to develop for both Nvidia and ATI cards if possible.
I'm hesitant to go into much more depth on CUDA vs. OpenCL vs. DirectCompute, but generally speaking, CUDA gets active support from nVidia, while MS doesn't give a damn about DirectCompute and ATI keeps killing Stream. OpenCL works on both nVidia and ATI, but any programmer with half a brain goes with CUDA when he can choose to. Building more speed onto a well-built lower-level architecture is trivial compared to forcing high performance out of a rickety, essentially-still-beta language. OpenCL may have tons of potential, but at the moment OpenCL code is simply fugly, whereas CUDA looks a whole lot like C code that just about any competent programmer can recognize and write. To get to where nVidia is now with CUDA, ATI would have to go back a few years, redo Stream, and actually support it, so instead it just pumps out stronger hardware.
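To show what I mean, here's a toy vector-add sketch of my own (a made-up example, not from any of the projects mentioned): the CUDA version reads like plain C and compiles ahead of time, while the OpenCL version gets shipped as a raw string and compiled at runtime with clBuildProgram(), on top of host-side boilerplate I'm not even showing.

#include <cuda_runtime.h>

// CUDA: compiled ahead of time by nvcc, reads like ordinary C.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// OpenCL: the same kernel, carried around as a string literal and
// handed to clBuildProgram() at runtime.
const char *vecAddCL =
    "__kernel void vecAdd(__global const float *a,"
    "                     __global const float *b,"
    "                     __global float *c, int n) {"
    "    int i = get_global_id(0);"
    "    if (i < n) c[i] = a[i] + b[i];"
    "}";

// Launching the CUDA version is a single line:
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

To be fair, the kernels themselves end up looking pretty similar; the real gap is all the context/queue/program setup OpenCL makes you write before you can even call the thing.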
I hate to say this, but I don't think you're ready to ask which card is best for your far-in-the-future needs. FLOPS numbers mean absolutely jack. Gaming-wise, they're the same. Engineering-wise, the question is Mobility FirePro vs. Quadro NVS, not GeForce vs. Radeon. "Power user" is a cop-out term. Based on the questions you're asking and some of the conclusions you keep leaping to, I'm going to guess it will take a few years to reach the level of computer engineering and programming know-how where the differences will truly matter. By then, 1) the available options will have changed radically, 2) you'll have learned enough to already know which you need, and 3) you'll have access to better, more specialized resources than NotebookReview forums.
Indeed, raw floating-point throughput is an extremely poor indicator of real-world performance when comparing across different architectures.
As for CUDA vs. OpenCL, I don't know what it's like to program in either one, but it seems to me that expanding one's available user base from just Nvidia card owners to both Nvidia and ATI card owners would be the biggest factor overall.
OpenCL also interfaces with CPUs. Its goal is to create a homogeneous environment for a program to run on, regardless of whether someone has a single-core Atom with integrated Intel graphics or a Tesla machine with several hundred shaders. There is a bit of a performance cost going from CUDA to OpenCL, however.
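For the curious, here's roughly what that homogeneous view looks like from the host side - a minimal C sketch against the standard OpenCL API, with error checking omitted and the array sizes picked arbitrarily. One loop picks up CPUs, GPUs, and whatever else the installed platforms expose.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[4];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(4, platforms, &num_platforms);

    // num_platforms reports how many exist, which may exceed our array.
    for (cl_uint p = 0; p < num_platforms && p < 4; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        // CL_DEVICE_TYPE_ALL matches CPUs and GPUs alike.
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                       devices, &num_devices);

        for (cl_uint d = 0; d < num_devices && d < 8; ++d) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof(type), &type, NULL);
            printf("%s [%s]\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                   (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
        }
    }
    return 0;
}

The same kernel source can then be built for whichever of those devices you pick, which is the whole selling point.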
-
This isn't /., but let's have a [crappy] car analogy:
Graphics cards are not completely interchangeable, because they don't support exactly the same instruction sets. If your computation is a road trip, the graphics card is both a car and the road on which it must drive.
Let one card be a Honda Accord, with a straight road from start to finish.
The other is a Ferrari, but the road is a mountain path with switchbacks the whole way there.
The Ferrari is a better car in every way, but given the path it has to take (the specific calculations it must make to compute what you want), it will come in second place. Every time. For this task, you want the Honda.
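The "road" here isn't hand-waving, either; the differences between cards are real, queryable properties. A minimal sketch using the standard CUDA runtime calls - compute capability is what tells you which instructions a card can actually run (double precision, for instance, needs 1.3 or better):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // major.minor is the compute capability; 1.3+ adds double precision.
        printf("%s: compute capability %d.%d, %d multiprocessors\n",
               prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}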
I didn't state as fact that CUDA is better in all ways. GENERALLY speaking, CUDA is more mature than OpenCL, and most programmers prefer to code in a language that favors clarity. On the other hand, there's still the matter of hardware compatibility and OpenCL's possible access to a broader set of hardware.
Yes, nVidia supports both, and is actively trying to make OpenCL more compatible with its cards. OpenCL code for the most part runs perfectly well on nVidia cards, albeit more slowly. Either in this thread or another (I forget which), I provided links to projects where either ATI or nVidia had a huge leg up; those gaps were mostly down to the choice of CUDA vs. OpenCL. OpenCL can blow the pants off CUDA, and CUDA can do the same to OpenCL, but getting to that point is much easier with CUDA. nVidia has supported OpenCL for about as long as ATI has, but it still throws more weight behind CUDA. If OpenCL gets to the point where the man-hours required to achieve the same result are similar to CUDA's, I'm sure plenty of labs will switch over.
-
I realize now that it was entirely unclear who my response was meant for, and I should have made it clearer. It made sense to me at the time (because I mentioned your line and quoted dtd00d), but that was rather careless and probably only added to the confusion.
That said, snarkiness helps nobody; I'm just bad at taking the "just stop responding" approach.
[Also, huh. I was able to reply and preview this several times, but on the third preview, the forum suddenly logged me out. To repeat a question of my own... has anyone in this thread had similar issues?]
Hey dudes, I'm back. Say hello to your little friend.
Also, I had no logout issues during my reply. And oh, nice - well, it all depends on the project, and yes, I saw the link; most projects were for Nvidia CUDA. It's more widely implemented, but really it depends on the programmer and the updates that come with it. I mean, take an ATI GPU and a similarly specced Nvidia GPU - a good programmer can do an equal amount of work using OpenCL and CUDA, correct? So I believe your points. From what I can see, labs will go to whatever is more up to date, faster, and cheaper. But for the sake of my OP, I believe that CUDA or OpenCL won't help at all unless the GPUs are super-scalar or such, or unless the programs support CUDA or OpenCL implementations.
So, am I forgetting something?
Man, I still don't have a lappy.