The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    graphics card battle for VIDEO EDITING

    Discussion in 'Hardware Components and Aftermarket Upgrades' started by teun3sixty, Oct 21, 2012.

  1. teun3sixty

    teun3sixty Newbie

    Reputations:
    0
    Messages:
    9
    Likes Received:
    0
    Trophy Points:
    5
    So the specs of my new laptop are pretty much set in stone, except for the GPU. I have done tons of research on CUDA vs OpenCL, but I can't draw my own conclusion about what's best suited for my editing style.
    The 3610QM, 16GB RAM, and SSD+HDD are fairly self-explanatory. I use Cinema 4D for 3D modeling and destruction effects with realistic particle simulations, After Effects for compositions with a lot of effects (Trapcode Form, Particular, and the Video Copilot plugins come to mind), and 3ds Max for professional modeling (big environments).

    Now, viewport performance in After Effects is very important to me; that's the main game. I mostly make compositions that consist of lots of stock footage and multiple heavy effects. After Effects uses OpenGL for the viewport renderer, but CS6 doesn't support ATI cards in its ray-tracing engine. I am sure OpenCL will be included in the near future, but what I want right now is a viewport performance boost in After Effects. Will a dedicated GPU even do this?

    I was looking at the GTX 675M; is a Quadro that much better? I would go for the 7970M if there is a big difference in viewport performance, but the thing is, I don't know which cards are good for this.
     
  2. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    ..since the 650m (or 660m/640m) cards are officially supported now (thanks to the MacBook Retina), that's probably the most compatible, fastest and most painless card to use for the time being. Especially if you rely on After Effects with multiple layers, the ray-tracing engine, etc. Because as you say, Adobe doesn't officially have all-round support for OpenCL yet, for all the functions. Some of them are supported, and can typically run faster than the CUDA versions. And technically, there's no reason that routines based on CUDA shouldn't be able to reach at least the same speed on OpenCL.

    But.. the question is what sort of structural changes have to be made to actually complete that transition, and whether they have any incentive to do so, other than as an experiment. It's also a question of how stable it will be - which is immediately interesting if you actually are using this for a living.

    But think about the ridiculous OpenCL performance on the APU devices so far, even when they run at extremely low clock speeds - it's a very interesting proposal. If that OpenCL performance were actually used, you could realistically be looking at real-time editing at higher performance than what laptops up towards the 680m level currently manage -- on a .. er.. "not an ultrabook" chassis, running happily on two copper wires in a lemon for a day. But again -- is there any reason for Adobe to aim for that? And if there were, when would it actually be done? This year? Next year?

    So practically, if you're going for something heavier than photo editing and some film encoding, it's not a good idea to go for anything other than a CUDA card. Yet. Performance on a 680m... as well as on a 650m, by a slim margin, I think.. is higher than on a Quadro 4000M. And the Quadro cards shouldn't really be interesting now that the drivers for the newer consumer cards are so well supported. Also, Kepler with dynamic power-saving is absolutely awesome.

    It's difficult to find a reasonably priced and well-built laptop with a 680m, though.. But you really should go for something like that if you want the best performance possible. If that's not such a huge issue, and you think the GK107 chip should be enough (650m, 660m, etc.), then - for that use - you have a lot of different laptops to choose from that will serve you well, I think.
     
  3. teun3sixty

    teun3sixty Newbie

    Reputations:
    0
    Messages:
    9
    Likes Received:
    0
    Trophy Points:
    5
    Here's a video example of mine: Naruto: special FX scenes - YouTube. The scenes are really CPU-heavy because of the particle systems and the amount of visual effects. The viewport performance was horrible when I made this video (AMD quad-core 940 @ 3.2 GHz). The new i7 CPUs in laptops are wonderful, and since After Effects uses OpenGL, what would the difference in the viewport (instant result after any setting change) be between an Intel HD 4000 and, let's say, a GT 650M? Or between the GT 650M and the GTX 675M? The GTX 675M has the same number of CUDA cores, but its memory interface width is 256-bit compared to the GT 650M's 128-bit, and Fermi's individual CUDA cores are faster than the individual CUDA cores in the Kepler cards.

    But apart from the ray-traced effects in CS6, OpenGL is the main renderer it's using. Isn't the 7000 series from ATI amazing at OpenGL and GPGPU? Premiere now supports OpenCL completely too, so CS6's CUDA-only support for just the ray-tracing engine doesn't sound so important to me, as I will also do modeling in 3ds Max and Cinema 4D. I am very stuck when it comes to selecting an ATI or Nvidia card.
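    On the memory-interface point: peak bandwidth is just the effective transfer rate times the bus width divided by eight. A quick sketch with the commonly listed reference memory clocks (assumed here; actual shipping variants differ, and the GT 650M also comes in a much slower DDR3 version):

        #include <stdio.h>

        /* GB/s = effective transfer rate (GT/s) * bus width (bits) / 8 */
        static double bandwidth_gbs(double transfer_rate_gts, int bus_width_bits)
        {
            return transfer_rate_gts * bus_width_bits / 8.0;
        }

        int main(void)
        {
            printf("GTX 675M, 256-bit @ ~3.0 GT/s GDDR5: %.1f GB/s\n", bandwidth_gbs(3.0, 256)); /* ~96   */
            printf("GT 650M,  128-bit @ ~4.0 GT/s GDDR5: %.1f GB/s\n", bandwidth_gbs(4.0, 128)); /* ~64   */
            printf("GT 650M,  128-bit @ ~1.8 GT/s DDR3:  %.1f GB/s\n", bandwidth_gbs(1.8, 128)); /* ~28.8 */
            return 0;
        }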
     
  4. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    Nice. Those two explosions on the beach were really good :D

    ..anyway. I definitely see your problem. So.. if Premiere does fully support OpenCL for the functions you will be using now - then choosing AMD or Nvidia shouldn't be a problem, it seems to me. But I don't have the latest versions or the two top cards of either brand, and don't really know how any of the run-time effects scale, or what your requirements really are - so I can't tell for sure :)

    I guess the question is what sort of performance you actually need to run the effects you want in real time. It's already a long way up to desktop performance, after all. And whether you will get the flow you expect when editing with GPU acceleration enabled.

    A few random things - the 650m is 20-30% below the 675m. You can expect lower minimum fps at full load - the question is how much lower. An Intel HD 4000 has about 20% of the performance of the 675m. For all I know, either Nvidia card or a similar ATI card could be a good choice. APU systems are still up in the air, though - that was the thing - because of the ridiculous OpenCL performance. Just the 3D performance on a reasonably set up A10 quad is on the level of a Quadro 2000M, for example, while the OpenCL performance is much higher. It /might/ actually be good enough for specific tasks if the program is set up well and you don't put on too many layers. In the same way, you could still croak the 675m - it's a long way up to a desktop system.

    The bandwidth and so on isn't really such a big concern now. It would be important if you wanted a relatively quiet and cool card with the best constant throughput (at lower clock frequencies). But since either of the other cards is so much faster now, and will probably maintain minimum throughput as well -- it's really a different question. I.e., you might get better performance, and if the cooling is good enough - why not a Fermi card? On the other hand, a 650m/Kepler is relatively modest, and will clock down when not fully used, so it should be more comfortable. The biggest concern here is driver support and the lack of issues and glitches - both of which, along with heat peaks, would of course be less of a problem on a Quadro 4000M.

    ..So. I wonder if minimum fps actually isn't a problem on the 675m. And if so, it very likely won't be on the 650m either. Would be interesting to hear if someone has actually tried it lately, though :) And can tell what number of layers and effects is too much, etc. Which I haven't. So I shouldn't be yapping so much, really.
     
  5. ViciousXUSMC

    ViciousXUSMC Master Viking NBR Reviewer

    Reputations:
    11,461
    Messages:
    16,824
    Likes Received:
    76
    Trophy Points:
    466
    CPU and RAM are always going to be more important; the GPU has only one main function, and that is to display what you're working on. The only benefit you can get from a GPU is some hardware acceleration, but that depends on whether the software you're using supports it, and if it does, the supported features are usually limited and more fluff than of use. Even when encoding, the codec you're encoding to needs to support it if you want to make use of it.

    Any of those features can easily be taken over by the CPU, and the other 95% of things the GPU can't help with will still be relying on your CPU, so keep that in mind.

    Also, if you're working with large formats, you may need to make sure your storage medium is up to the task in both capacity and speed.
     
  6. maverick1989

    maverick1989 Notebook Deity

    Reputations:
    332
    Messages:
    1,562
    Likes Received:
    22
    Trophy Points:
    56
    Um, I'm not so sure about this. The compute power of GPUs is being increasingly utilized in video and image processing software. CS6 makes heavy use of OpenCL. Depending on the software, the GPU can be anywhere from barely useful to highly necessary.
     
  7. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
    This is the classic case of the horsepower of a gasoline engine vs the torque of a diesel. If you're going to game, you need the jackrabbit quickness of a gaming card. However, if you're going to edit, i.e. render, etc., then you need the continuous grunt and pulling power of a diesel. That is, the cards with the "K" designation.

    Sure, you can edit with a gaming card (and you can game with an editing card), but for serious users who need to render continuously, you're better off in the long run getting the best card for your needs. Still, these cards are the "money maker" variety, so unless you have deep pockets and/or do lots of work, it may be more prudent to go with the GTX variety instead.

    If you use Adobe or Autodesk software, get Nvidia CUDA. For Mac hardware and/or Sony Vegas, look to OpenCL.
     
  8. maverick1989

    maverick1989 Notebook Deity

    Reputations:
    332
    Messages:
    1,562
    Likes Received:
    22
    Trophy Points:
    56
    Again, Adobe CS uses OpenCL.
     
  9. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
    I know. But before that they gave exclusive rights to Nvidia, and only on a select number of pro cards--which are still their primary optimized graphics cards.

    It just opens the playing field to other manufacturers and markets. If I'm not mistaken, this was only added in CS6.
     
  10. trvelbug

    trvelbug Notebook Prophet

    Reputations:
    929
    Messages:
    4,007
    Likes Received:
    40
    Trophy Points:
    116
    Agree.
    And although Vicious is partly right when he says it's the CPU and media storage that do most of the rendering work, it's the GPU and its associated acceleration functions that programs such as PPro and AE use to render previews.
    If you've ever worked in AE/PPro with HD footage plus particle effects, multiple layers, changing angles and perspectives, nesting AE comps in PPro, etc., even a 680m can be brought to its knees when rendering previews.
    My suggestion to the OP: if you value real-time edits (or something as close to it as possible), get the best GPU you can afford. If all you care about is rendering speed, stick with the best CPU you can afford. In both scenarios, you will need fast media storage (preferably two separate HDDs/SSDs, one for source and one for edit) and lots of RAM (16GB or more).
    Good luck.
    Sent from my GT-I9300 using Tapatalk 2
     
  11. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
    The CPU plays the video (AVCHD is particularly CPU-intensive), but the GPU handles effects, color correction, rendering, etc. For all intents and purposes you won't gain a tenth as much from upgrading your CPU as you would from upgrading your GPU. For modern editors, the GPU is where the money needs to go.
     
  12. trvelbug

    trvelbug Notebook Prophet

    Reputations:
    929
    Messages:
    4,007
    Likes Received:
    40
    Trophy Points:
    116
    The CPU still does almost all production rendering. GPUs are notorious for subpar rendering, especially CUDA. In fact, the only worthwhile GPU rendering engine is Quick Sync - which is still CPU-based (the integrated GPU).
    GPUs are important for accelerated preview renders. IMHO, just as important, since no one wants to preview his video work as a slide show.


    Sent from my GT-I9300 using Tapatalk 2
     
  13. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
    Not quite: the CPU is essentially only assigned rendering tasks when your GPU is below specification (at least for video editing), or if you turn GPU acceleration off. Otherwise, your software will detect the qualifying graphics device and use the most efficient method to get the job done.

    As for subpar rendering, assuming there's no malfunction in the pipeline, that is more likely the result of poor performance and settings configuration than of any inherent inferiority of the graphics card.
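    As a rough sketch of what that detection step looks like (a generic example using the CUDA runtime API, not Adobe's actual code - CS6 reportedly also checks the card name against a whitelist text file), an application can simply enumerate the installed devices and fall back to the CPU path if nothing qualifies:

        #include <stdio.h>
        #include <cuda_runtime.h>

        int main(void)
        {
            int count = 0;
            if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
                printf("No CUDA-capable GPU found - using the CPU render path.\n");
                return 0;
            }
            for (int i = 0; i < count; ++i) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                /* A hypothetical "qualifying" rule: enough VRAM and a recent enough chip. */
                int qualifies = prop.totalGlobalMem >= (size_t)1024 * 1024 * 1024 &&
                                prop.major >= 2;
                printf("GPU %d: %s, %zu MB VRAM, compute capability %d.%d -> %s\n",
                       i, prop.name, prop.totalGlobalMem / (1024 * 1024),
                       prop.major, prop.minor,
                       qualifies ? "GPU acceleration" : "CPU fallback");
            }
            return 0;
        }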
     
  14. niffcreature

    niffcreature ex computer dyke

    Reputations:
    1,748
    Messages:
    4,094
    Likes Received:
    28
    Trophy Points:
    116
    You obviously need to buy a Quadro K5000M from me, it's the only way to solve all your problems. :D
     
  15. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
    Since you mentioned it, I've been searching. The results so far, including those from Adobe, favor GPU rendering 3 to 1. Still, that one bothers me. I shall continue to investigate until I find out precisely in what instance or under what circumstances the "one" is the case. If there's a particular condition I'm missing, I need to know about it.
     
  16. ViciousXUSMC

    ViciousXUSMC Master Viking NBR Reviewer

    Reputations:
    11,461
    Messages:
    16,824
    Likes Received:
    76
    Trophy Points:
    466
    People prefer GPU rendering 3 to 1 because "smart" tech users are outnumbered 20 to 1. I can't tell you how many "professionals" I have run into in the field who don't know jack about rendering or editing; they lose so much quality in the chain they use, and the stuff I talk about, like B-frames, codecs, bitrate, etc., is completely foreign to them.

    I've never had any issues with previews and other things in full 1080p footage using a quad-core CPU and 16GB of RAM. I have only seen my GPU used for a few features of CS6 Photoshop & Premiere, not the majority.

    Just being real: most people are laymen and only know the basics or the things taught to them, which in most cases came from somebody else who was just a layman user.
     
  17. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
    So people keep claiming. But remember, most renders are a compromise between quality and the presentation medium.
    That's because you most likely haven't pushed it hard enough; and for your purposes, you may never get to that point. After all, how many people actually need to drive their cars anywhere near 80, 90, or 100% of their capability?

    Nevertheless, I guarantee you that if you get to the point where you're stacking multiple effects, videos and/or color correction, or 3D ray tracing, the difference will become dramatic.

    Not to say you can't do all that with your CPU. Only that one is significantly slower than the other. From what I can glean, any reduction of quality in a GPU render is more a product of settings than of the render engine - which can often be a combination of the two anyway.

    On the other hand, there are some equations that one is better at. In that case, it would depend on what is being rendered and for what end purpose.
     
  18. leginag

    leginag Notebook Enthusiast

    Reputations:
    0
    Messages:
    18
    Likes Received:
    0
    Trophy Points:
    5
    I am in a similar situation.

    I am looking for a graphics card for Photoshop and After Effects...as well as gaming.

    I have read that Kepler has lower performance than Fermi for OpenGL and such. But some people seem to say that getting the GTX 680M would be fine.
    Is it better to get an older Fermi chip, such as the 675M? Or should I go for the Kepler 680M?
     
  19. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    I think the 680m has 1344 CUDA cores, or thereabouts. The 675m has 384. And while both cards need a large power supply to run at full speed, the Kepler cards will scale down dynamically. So it's going to be more comfortable to work with.

    It's the 650/660m cards that are a bit below the 670m in raw performance. But then they also have half the power-draw (and the dynamic underclocking), so.. weigh preferences and requirements against each other.
     
  20. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    Actually, the performance of the 680m in CUDA and OpenGL is lower than the 675m's.

    If you are going to use OpenGL and not CUDA, get a 7970m; it's going to give you dramatically better performance if you compare the two.
     
  21. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    ...no, karamazov. No situation where that happens.
     
  22. Karamazovmm

    Karamazovmm Overthinking? Always!

    Reputations:
    2,365
    Messages:
    9,422
    Likes Received:
    200
    Trophy Points:
    231
    get benchies of the 560ti and the 670 and compare.
     
  23. leginag

    leginag Notebook Enthusiast

    Reputations:
    0
    Messages:
    18
    Likes Received:
    0
    Trophy Points:
    5
     
  24. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
     
  25. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    No, transformation and surface processing on shaders and so on is pretty much the same nowadays. OpenGL is faster than DirectX in general, but usually only because it doesn't get a ton of includes along with every object you create. OpenGL in that sense is the surface brushes and object creation phase: a rasterizer with shader logic. CUDA is an attempt to formalize GPGPU applications, i.e. running common "complex CPU" operations in parallel that normal shader code isn't able to run. OpenCL is a better attempt at the same thing. But in the meantime, there are some implementations that only use CUDA.

    So you get this situation where CUDA on a discrete card is easier to work with in some programs, because that's what it was implemented for - not because it's more efficient, or even because the graphics cards were actually more powerful. Eventually - and hopefully sooner rather than later - this isn't going to be an issue. But it still is with CS and so on, to some extent.
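    To make that concrete, here is roughly what "same routine, two APIs" looks like in practice. This is just a generic vector-add sketch, not anything from Adobe's code, but it shows why a CUDA routine can usually be ported to OpenCL: the kernel body is nearly identical, and the work is mostly in restructuring the host-side plumbing.

        // CUDA version: compiled with nvcc, launched with the <<<grid, block>>> syntax.
        __global__ void vec_add(const float *a, const float *b, float *c, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                c[i] = a[i] + b[i];
        }
        // launch: vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

        // OpenCL version: the same kernel, built at run time with clBuildProgram and
        // enqueued with clEnqueueNDRangeKernel, so it can run on Nvidia, AMD or an
        // APU's integrated GPU alike.
        __kernel void vec_add(__global const float *a, __global const float *b,
                              __global float *c, const int n)
        {
            int i = get_global_id(0);
            if (i < n)
                c[i] = a[i] + b[i];
        }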
    Yeah, pretty much. Half the clock speed, twice the number of stream processors, something like that.
    No, not really. Compatibility and power-scaling tilt in the 680m's favor; price and raw power tilt in the 7970m's favor, at least occasionally.

    What I mean by the 680m being more comfortable to work with is that when you use the discrete card on AMD's setups, it powers up fully and keeps drawing full power for as long as it's enabled - from the moment the program starts until you close it, even if you're not doing anything. The Kepler cards scale down dynamically to fit the actual usage. So you could end up with it running at 10%, perfectly cool, and still have the grunt when you're running something heavy in bursts. That's extremely valuable when editing away from a power socket, as well as when you don't need an extra heater in the room, etc.

    There's also the point that there's still some way up to a desktop card. I mean, you're still buying a laptop here. So it's arguable how useful a scaled down desktop card really is in a laptop, when you don't get monstrously superior performance. Imo. I mean, either of those cards running at full tilt is going to be really warm in a laptop cabinet.

    Outside of that, the AMD/Radeon software looks like some 14-year-old went ballistic with neon stickers on a black plate. It's about as functional as that, too... I'm of course not biased at all, obviously. :p

    The question, always, is what you're really using the laptop for. Like I said, if you can get away with an AMD APU setup and the OpenCL acceleration it offers, we're talking less than half the price, almost no heat, and a lot of options with different chassis. Also, massively higher battery life and on-the-move editing. If you can't make do with that, then you have to make some sort of compromise.
     
  26. Krane

    Krane Notebook Prophet

    Reputations:
    706
    Messages:
    4,653
    Likes Received:
    108
    Trophy Points:
    131
    And what of optimization? Aren't some programs written specifically with specific cards in mind? I believe that was the original intent of the Nvidia and Adobe collaboration - not merely Nvidia, but specifically their top Quadro professional line, only later downsized to the masses. How does all of this integrate with OpenCL? Complementary or ancillary?
     
  27. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,686
    Likes Received:
    131
    Trophy Points:
    81
    ..great, the board ate my post. But short version: there's nothing you can do in CUDA that can't be done in OpenCL. It's not a paradigm shift between OpenCL and CUDA in that sense. But in practice it's a backport effort for some programs, like CS, because you would very likely need to restructure some of the routines, and you might not want to do that when a catalogue of officially supported devices wouldn't benefit from it.

    But it's kind of in the cards that OpenCL is going to be a more common choice now, since it can scale over different devices and numbers of stream processors, etc.
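    That device-agnostic part is the practical appeal: one OpenCL host program can enumerate whatever is in the machine - an Nvidia card, a Radeon, an APU's integrated GPU, even the CPU itself - and build the same kernel source for whichever it finds. A minimal enumeration sketch using standard OpenCL 1.1 calls, nothing vendor-specific:

        #include <stdio.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platforms[8];
            cl_uint num_platforms = 0;
            clGetPlatformIDs(8, platforms, &num_platforms);

            for (cl_uint p = 0; p < num_platforms; ++p) {
                cl_device_id devices[8];
                cl_uint num_devices = 0;
                if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                                   devices, &num_devices) != CL_SUCCESS)
                    continue;
                for (cl_uint d = 0; d < num_devices; ++d) {
                    char name[256];
                    cl_uint units = 0;
                    clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
                    clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                                    sizeof(units), &units, NULL);
                    /* "Compute units" are SMs on Nvidia and CUs on AMD - the same code
                       scales across however many the device reports. */
                    printf("Platform %u, device %u: %s (%u compute units)\n",
                           p, d, name, units);
                }
            }
            return 0;
        }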
     
  28. teun3sixty

    teun3sixty Newbie

    Reputations:
    0
    Messages:
    9
    Likes Received:
    0
    Trophy Points:
    5
    Thank you for all the replies, dear sirs; I'm glad my thread has gotten some attention.

    In terms of OpenCL performance, the 7970m really outperforms even the higher-end Quadro cards. OpenCL is the future, but I've kind of put AMD aside for now because of the preferred CUDA solution in 3ds Max and After Effects. Premiere and Photoshop use OpenCL, but real-time viewport results in After Effects are what's most important to me.

    Now, I am not well informed, and couldn't find a good thread, on whether OpenCL is what matters for real-time viewport performance, or OpenGL, or something else. I at first chose the 675m for its raw power in 3D rendering (when scenes get high-poly) and for its CUDA cores. You see, the 675m has a 256-bit memory interface, compared to the GT 650m's 128-bit interface; that increases performance about as much as the extra CUDA cores make a difference. That's why the 675m is just as fast as the 680m in CUDA performance (I thought one Kepler CUDA core was roughly 1/3 of a Fermi core). CyberPowerPC and iBuyPower have a 4GB 675m version, very good for big textured scenes (2GB occasionally caps out).

    You are absolutely right that many editors don't really know the general facts simply because they don't run into performance problems themselves. 3D particle explosions with stock footage, multiple color corrections, 3D layers containing special effects, the blur effects in the video I posted, matte paintings of huge map sizes, all sorts of animations - all in one composition - and the CPU seriously can't handle it. I edited that one nicely on a GTX 570 desktop but was hindered by an AMD Phenom II and 6GB of DDR2 RAM. Now that that desktop got stolen, and my workflow will include moving from place to place to edit my work, I am in need of a laptop. I currently have a 2630QM, GT 540M, and 6GB RAM. My laptop is capped at 8GB RAM, but the 2630QM's limit is easily reached with these heavy effect scenes (RAM bottleneck kept in mind, though).

    It's not really for gaming, even though I would welcome that, but the Quadro cards are just way too expensive - at that price I would rather buy a workstation desktop. I am planning to earn serious money, but right now I can only invest in a laptop at $1500 max.

    If live viewport performance would increase with a better OpenCL card, then without question I would go for the 7970m, as it is on the level of the higher Quadro mobile series. Nvidia splits their graphics solutions two ways, with Quadro and GeForce, while surprisingly Radeon is clearly leading in raw OpenCL compute performance.

    Thank you for your time.
     
  29. leginag

    leginag Notebook Enthusiast

    Reputations:
    0
    Messages:
    18
    Likes Received:
    0
    Trophy Points:
    5
    I asked a lot of questions on forums as I also do a lot of After Effects and Photoshop work.
    The overall consensus comparing the 680M and the 7970M is mixed; however, the 680M has a premium price attached to it. Performance-wise the two cards are essentially similar, with the 7970M standing out in OpenGL and such. Nvidia's decision to split mainstream and enterprise graphics between GeForce and Quadro is a shame. The 680M has better power management, and the 7970M has some Enduro problems, which look like they are getting fixed going by the latest beta driver update. But I was looking at a $300 price difference between the cards! So I went with the 7970M and used the price difference to upgrade the processor and SSD of my system.

    I'm not sure if your post was asking about device recommendations, but I ended up ordering the following Clevo machine with the 7970M.
    i7-3720QM, 16GB RAM, 7970M 2GB, 256GB SSD, 750GB HDD.

    16GB RAM is a minimum requirement, as 8GB disappears so fast on the system I am using now. I am considering upgrading to 32GB RAM down the track. The 3720QM has a ~5-20% performance increase over the 3630QM, and an SSD is a must nowadays. 256GB seemed a good choice given the great advice I received about over-provisioning an SSD.
     
  30. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    5,398
    Messages:
    12,692
    Likes Received:
    2,717
    Trophy Points:
    631
    teun3sixty,

    I think you're basically asking about what I wrote here:

    See:
    http://forum.notebookreview.com/har...des/695944-i7-3630-i7-3720-a.html#post8936594


    Hope it is applicable and that it helps.


    Executive overview: if you don't get a very specific Nvidia GPU for After Effects, you are basically wasting your money (especially if you also use the other programs in the Adobe CS6 suite, which make much better use of AMD GPUs).

    Good luck.