The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums would be preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    A question on Mobo, Graphic Cards and processors?

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by True_Sultan, May 29, 2010.

  1. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    This thread is turning out to be more interesting than the "halp halp my laptops on fire" and "why ssd > hdd :drool:" threads that populate this subforum.

    FLOPS don't translate directly into gaming prowess. From what I've been reading the last couple of days, it seems like ATI's been adding more muscle to its cards and mostly ignoring the GPGPU game, while nVidia's been working on making better tools and a more flexible architecture for CUDA. Few, if any, people seem to want to work with Brook/Stream. From the forums I've skimmed, it looks like CUDA discussions relate to optimization and enabling new capabilities, while Brook/Stream discussions tend to be requests for help debugging lower-level code. Now, I don't think this is because the ATI cards are any less capable. It just looks like nVidia got its foot in the door first and built a very developer-friendly architecture that people wanted to improve, while ATI focused on making its cards faster and never saw the boat leaving.

    Some people love OpenCL because it's an open standard and will (mostly) work freely across platforms. Performance on the functions that do work properly in OpenCL seems to be neck and neck. The problem is, there isn't much OpenCL does right just yet, and the documentation isn't very good. Since I like dumbing things down, I'm going to say it's something along the lines of Windows 7 vs. Linux 10 years ago. OpenCL clients are definitely very capable, but one well-known project tends to favor CUDA for ease of programming. The BOINC project list here should give you an idea of the nVidia/CUDA tilt.

    OpenCL ultimately has plenty of potential, but the majority of GPU-optimized distributed computing projects I've seen leave OpenCL for the crazies (like dnetc)
     
  2. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Ah, interesting. I see CUDA having more projects. But don't the ATI cards have more power? So why do so many people choose Nvidia cards? I mean, CUDA is fine, and OpenCL has a lot of potential; I've seen some amazing stuff with it on YouTube. Plus, isn't there DirectX Compute as well? So why is ATI behind in GPGPU or any GPU supercomputing? I haven't seen any Nvidia GPU (commercial) for desktop or laptop exceeding at least 0.1 teraFLOPS. Even for mobile processors, like the GTX 480M, which is apparently the best Nvidia mobile GPU, it has only 598 gigaFLOPS (link: http://www.nvidia.com/object/product-geforce-gtx-480m-us.html). The Mobility HD 4850 has like 1.12 teraFLOPS. ATI has the power in its bag; Nvidia doesn't. So which gfx card would you guys buy then? (Assuming you are a power user or engineering student who needs power... also loves gaming) :rolleyes:
     
  3. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    598 GFLOPS = 0.598 TFLOPS > 0.1 TFLOPS, by the way.

    OpenCL and DirectCompute are rather new developments, and that's the main reason there isn't much usage of them yet - CUDA has already been around for three years, while OpenCL SDKs only arrived around the end of last year. However, in the future I think they will become more common, simply because it makes good sense to develop for both Nvidia and ATI cards if possible.
     
  4. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    Your obsession with pure numbers and theorycrafting is leading you straight down the wrong path. Tech demos on YouTube don't mean squat and should mostly be ignored. Duke Nukem Forever had tech demos back in 2001, and look how well those predicted the future. Current gaming performance and quality on the GeForce and Radeon cards is roughly equivalent. Both of them ultimately arrive at the same place regardless of the means to get there.

    I'm hesitant to go into much more depth on CUDA vs. OpenCL vs. DirectCompute, but generally speaking, CUDA gets active support from nVidia, while MS doesn't give a damn about DirectCompute and ATI keeps killing Stream. OpenCL works on both nVidia and ATI, but any programmer with half a brain goes with CUDA when he can choose to. Adding clockspeed to a well-built lower-level architecture is trivial compared to forcing a high clockspeed to work well on a rickety, essentially still-beta language. OpenCL may have tons of potential, but at the moment OpenCL code is simply fugly, whereas CUDA looks a whole lot like C code that just about any competent programmer can recognize and code for. To get to where nVidia is now with CUDA, ATI would have to go back a few years, redo Stream, and actually support it, so instead it just pumps out stronger hardware.
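    [Editor's note, not from the original thread: as a rough sketch of that C-like flavor, here is what a minimal CUDA kernel for scaled vector addition (SAXPY) looks like. The kernel and variable names are illustrative only; it needs nVidia's nvcc toolchain to build.]

```cuda
// Minimal SAXPY kernel: y = a*x + y. Apart from the __global__
// qualifier and the blockIdx/blockDim/threadIdx builtins, the body
// reads like plain C.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Host-side launch (d_x and d_y are device pointers allocated with
// cudaMalloc): 256 threads per block, enough blocks to cover n.
//   saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);
```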

    I hate to say this, but I don't think you're ready to ask about which card is best for your far-in-the-future needs. FLOPS numbers mean absolute jack. Gaming-wise, they're the same. Engineering-wise, the question is Mobility FirePro vs. Quadro NVS, not GeForce vs. Radeon. "Power user" is a cop-out term. Based on the questions you're asking and some of the conclusions you keep leaping to, I'm going to guess it's going to take a few years to get to the level of computer engineering and programming know-how where the differences will truly matter. By then, 1) the available options will have changed radically, 2) you'll have learned enough to already know which you need, and 3) you'll have access to better, more specialized resources than the NotebookReview forums.
     
  5. lackofcheese

    lackofcheese Notebook Virtuoso

    Reputations:
    464
    Messages:
    2,897
    Likes Received:
    0
    Trophy Points:
    55
    Indeed, raw floating-point performance is an extremely poor indicator of performance when comparing across different architectures.

    As for CUDA vs OpenCL, I don't know what it's like to program in either one, but it seems to me that expanding one's available user-base from just Nvidia card owners to Nvidia and ATI card owners would definitely be the biggest factor overall.
     
  6. Duct Tape Dude

    Duct Tape Dude Duct Tape Dude

    Reputations:
    568
    Messages:
    1,822
    Likes Received:
    9
    Trophy Points:
    56
    OpenCL also interfaces with CPUs. Its goal is to create a homogeneous environment for a program to run on, regardless of whether someone has a single-core Atom with integrated Intel graphics or a Tesla machine with several hundred shaders. There is a bit of a performance cost going from CUDA to OpenCL, however.
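    [Editor's note, not from the original thread: a hypothetical host-code sketch of that point - the same OpenCL enumeration call finds CPU and GPU devices alike. It assumes an OpenCL SDK is installed (link with -lOpenCL); device counts and names depend on the machine.]

```c
/* Sketch: enumerate every OpenCL device on the first platform.
 * CL_DEVICE_TYPE_ALL matches CPUs, integrated GPUs, and discrete
 * GPUs through the same code path. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev = 0;
    char name[256];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;  /* no OpenCL runtime installed */

    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &ndev);
    for (cl_uint i = 0; i < ndev; i++) {
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("device %u: %s\n", i, name);
    }
    return 0;
}
```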
     
  7. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Oh I see, so in the near future I will know what to use. I mean, I also asked myself why they used old processors, but look, they still had the potential and capacity to do what even a future-grade CPU can (run time aside), plus they fit any environment. Oh, so since ATI messed up its Stream, it just decided to muscle up its cards to compensate? I also got the notion (which I kinda agree with) that even if the FLOPS differ on each card, they both perform the same? Maybe the Nvidia GPU opts for a lower FLOPS count because of its architecture (like at a lower FLOPS count the Nvidia architecture is made to perform the same as an ATI GPU at a higher one)?

    hmm, well lol I always thought facts were everything...I guess I'm wrong LMAO

    Wait so OpenCL also uses the CPU to compute, while CUDA uses the GPU to fully compute? :cool:
     
  8. lostbuyer

    lostbuyer Notebook Consultant

    Reputations:
    7
    Messages:
    101
    Likes Received:
    0
    Trophy Points:
    30
    Yeah, I'm gonna disagree with woofer00's statement that "any programmer with half a brain goes with CUDA" - any project that has a working CUDA implementation and half a brain will stick with it (barring a sudden need for cross-platform capability), but for new projects there are real potential advantages to OpenCL. And there are cases in which the latest OpenCL implementations can beat CUDA, so the performance-cost point, while largely true, is getting into "depends what you're doing" territory. As OpenCL grows and nVidia puts more effort into supporting it, even nVidia-focused labs may start switching over. [I believe at least one of the research groups where I am has been doing mostly OpenCL for a bit now despite being exclusively nVidia-driven.]


    *massive, long suffering sigh* The problem is that the facts need to be relevant. Larger numbers in one test or even many tests do not mean better performance in all specific situations. If a card is able to do things in a more efficient way than its technically faster competitor, it doesn't matter that it does those computations somewhat more slowly - it will get to the end result faster, and that's the only thing that matters.


    This isn't /., but let's have a [crappy] car analogy:
    Graphics cards are not completely interchangeable, because they don't support exactly the same instruction sets. If your computation is a road trip, the graphics card is both a car and the road on which it must drive.

    Let one card be a Honda Accord, with a straight road from start to finish.
    The other is a Ferrari, but the road is a mountain path with switchbacks the whole way there.

    The Ferrari is a better car in every way, but given the path it has to take (the specific calculations it must make to compute what you want), it will come in second place. Every time. For this task, you want the Honda.
     
  9. woofer00

    woofer00 Wanderer

    Reputations:
    726
    Messages:
    1,086
    Likes Received:
    0
    Trophy Points:
    55
    You can pick and choose specific words from my post if that makes your argument work, but at least keep them in context. That phrase was in the middle of a paragraph about the fugliness of code and ease of development.
    I didn't state as fact that CUDA is better in all ways. GENERALLY speaking, CUDA is more mature than OpenCL, and most programmers prefer to code in a language that favors clarity. On the other hand, there's still the matter of hardware compatibility and possible access to a broader set of hardware.

    Yes, nVidia supports both platforms, and is actively trying to make OpenCL more compatible with its cards. OpenCL code for the most part runs perfectly well on nVidia cards, albeit more slowly. Either in this thread or another, I forget, I provided links to projects where either ATI or nVidia had a huge leg up. Those gaps were mostly due to the choice of CUDA vs. OpenCL. OpenCL can blow the pants off CUDA, and CUDA can do the same to OpenCL, but getting to that point is much easier with CUDA. nVidia has supported OpenCL for about as long as ATI has, but it still throws more weight behind CUDA. If OpenCL gets to the point where the number of man-hours required to achieve the same result becomes similar to CUDA's, I'm sure plenty of labs will switch over.

    *unnecessarily prolonged pained groan* He was asking a legit question and seeking more information. Why the ego?
     
  10. lostbuyer

    lostbuyer Notebook Consultant

    Reputations:
    7
    Messages:
    101
    Likes Received:
    0
    Trophy Points:
    30
    Sorry, I'm not personally a huge fan of CUDA, although I do see your point in general. I didn't mean to mischaracterize your post; I assumed a reader would have read it in full, and I merely intended to disagree with that particular comment (and specifically not the rest of it). The remainder of what I said was in response to dtd00d's statement that "There is a bit of a performance cost going from CUDA to OpenCL, however."

    I realize now that which remark was meant as a response to whom was entirely unclear, and I should have made it clearer. It made sense to me at the time (because I mentioned your line and quoted dtd00d), but that was rather sloppy and probably added to the confusion, if anything. :eek:

    I've been responding to his posts in various places around NBR for a while, and there has been a fairly large amount of missing the point / ignoring advice and then re-requesting it, as well as repeated, slightly varied questions on the exact same point (graphics cards in particular, from day 1). I interpreted his remark in that context; if I misread, T_S, I apologize.
    That said, snarkiness helps nobody; I'm just bad at taking the "just stop responding" approach. :eek:


    [Also, huh. I was able to reply and preview this several times, but on the third preview, the forum suddenly logged me out. To repeat a question of my own . . . has anyone in this thread had similar issues?]
     
  11. True_Sultan

    True_Sultan Notebook Evangelist

    Reputations:
    12
    Messages:
    443
    Likes Received:
    0
    Trophy Points:
    30
    Hey dudes, I'm back. Say hello to your little friend :)

    Ahh, so it pretty much depends on the way the card operates. So even though on a spec sheet the ATI cards are faster, in a real-world situation the Nvidia GPU architecture can process at equal or greater speed. :)

    Oh, lostbuyer, I'm sorry if I missed a point, like, really :swoon:

    Also, I had no logout issues while replying. And oh, nice - well, it all depends on the project, and yes, I saw the link; most projects were for Nvidia CUDA. It is more widely implemented, but really it depends on the user who programs it and the updates that go with it. I mean, take an ATI GPU and a similarly specced Nvidia GPU; a good programmer can do an equal amount of work using OpenCL and CUDA, correct? So I believe your points. I see that labs will go to whatever is more up to date, faster, and costs less. But for the sake of my OP, I believe that CUDA or OpenCL won't help at all unless the GPUs are super-scalar or such, or unless the programs support CUDA or OpenCL implementations :)

    So Am I forgetting something? :twitchy:

    Man I still don't have a lappy :(
     