After spending much time reading through people's experiences in these forums, I've pretty much decided to go with an EVOC P870TM-1 with, among other things, dual GTX 1080s.
I'm wondering if anyone has experience using these for GPU-based 3D rendering with renderers such as Octane or Redshift. My work requires me to be mobile often, which is why I'm investing in a laptop, but I'm also seduced by the prospect of rendering on dual GPUs in a mobile workstation. Any recommendations or experiences in this area would be appreciated!
I also use CPU-based apps like Cinema4D, which I think would benefit more from the 9900K. The CPU has only 16 PCIe lanes, so each GPU would get x8, I believe, meaning they would not be running at their full x16 capability. I would also want to disable SLI, as I don't intend to run games that support it, and I've heard it can have a negative effect on the renderers. Is that possible?
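For what it's worth, here's how I'd plan to verify the lane split once the machine arrives: a small Python sketch that just shells out to nvidia-smi (which ships with the NVIDIA driver), so treat it as a rough check rather than anything official.

    import subprocess

    # Ask the driver for each card's negotiated vs. maximum PCIe link width;
    # on a 16-lane CPU feeding two GPUs this should report 8 current / 16 max.
    fields = "index,name,pcie.link.width.current,pcie.link.width.max"
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=" + fields, "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)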
-
You could configure the system with Quadro GPUs if your applications benefit from them; either way, this is the most powerful mobile system on the market.
Good luck.
-
I was also looking for the best mobile workstation and will probably go for the HIDevolution P870TM; I think it is a good choice.
Right now I have a P775DM3 and I am rather satisfied. My only complaint is that I needed manual fan control; for full-day workstation use, the stock fans were way too loud.
I am running Linux and have adapted some half-baked fan control code from GitHub (https://github.com/davidrohr/clevo-indicator), but I assume you'll run Windows. I would recommend checking the fan control software offered by Obsidian PC; it looked pretty good to me: http://forum.notebookreview.com/thr...-by-obsidian-pc.801464/page-367#post-10863873.
Finally, I am not sure if it is possible, but you could go for a 2080 instead of a 1080. The 2080 will not support SLI, but for rendering that doesn't matter. There was some discussion here: http://forum.notebookreview.com/threads/clevo-2019.826781/page-30
-
Meaker@Sager Company Representative
You'd have to speak with them, as further heatsink work would be required to fit two.
-
If I were you, I would take one 1060 or 1070 for the main display/C4D viewport and one 1080 for rendering only. But you need to ask the guys from HIDevolution whether the BIOS supports such a combination. For Redshift, SLI is not a problem, and for Octane SLI can be disabled in the display settings. As far as I know, the SLI bridge cable cannot be installed at all in that case, but it is better to clarify how the BIOS behaves with such a setup.
The days when a Quadro was useful for CG are long gone.
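If you split the roles like that, the generic CUDA-level way to keep a renderer off the display card is to hide it with CUDA_VISIBLE_DEVICES before launching. A minimal Python sketch; render_job.py is just a placeholder for whatever actually starts the render, and both Octane and Redshift also let you tick devices on or off in their own preferences:

    import os
    import subprocess

    # Expose only the second GPU to the CUDA runtime so the renderer never
    # touches the display card. "render_job.py" is a placeholder.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")
    subprocess.run(["python", "render_job.py"], env=env, check=True)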
-
Meaker@Sager Company Representative
-
Thank you Dr. AMK. I did look into Quadro GPUs, and apparently they don't offer enough of an advantage for this kind of graphics work to justify their price compared to GeForce cards. But just having the option is pretty amazing, as is being able to upgrade in the future.
-
Thanks for the suggestion, but as far as I know Cinema4D's native renderer is CPU-only, so the GPUs would be for the external renderers/plugins. Some of them scale well across multiple cards, which can significantly improve render times. I'll have to ask about the SLI bridge; I would likely leave SLI disabled.
-
Thanks for the link. Wow, dual 2080s would be nuts. I'm not sure the renderers take advantage of the new RTX features at this point, but I wonder if that could be an upgrade option in the future.
-
Yes, C4D's native renderer is CPU-based. When I said display/viewport, I meant that the first GPU will be busy drawing the scene in the viewport, and some of its memory will already be in use. That inevitably reduces the benefit of that card and also leads to interface lag while IPR is running. It is therefore better to exclude the first video card when working with IPR, which is why it makes sense to buy a cheaper GPU for that slot: there will not be much benefit from high performance there, and doing the final many-hour render on both video cards in a laptop is not a good idea anyway. A cloud render farm will work much better for the final render with a large sample count, but for look-dev on the go this is a great option.
But if money is not a problem, then you can put in two 1080s, why not.
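You can watch how much VRAM the viewport is already holding with a quick per-card query (another small sketch that assumes nvidia-smi is on the PATH):

    import subprocess

    # Per-card VRAM usage; run it with a scene open to see how much memory
    # the viewport alone is holding on the display GPU.
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)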
-
Ahhhh... I get what you mean, thank you for explaining.
That makes a lot of sense. Do you think there's an advantage to putting a 2080 in for the second card (if it's even an option) or is that just overkill in this situation?
-
It will be faster, yes. More CUDA cores + faster memory.
But it's hugely overpriced. If the 1180 (or whatever they name it) gets a mobile version, it will be the better option, since GPU renderers only use CUDA cores for now. Before Redshift releases even a beta of 3.0 that benefits from RT cores (personally, I don't think that implementation will ever happen), NVIDIA will have launched a brand-new generation.
-
Meaker@Sager Company Representative
I think NVIDIA won't come out with a larger non-RTX chip just yet, and even if they do, they likely won't take it beyond 2070 speeds.
-
Donald@Paladin44 Retired
We can do that with a Special Request on the EVOC High Performance Systems P870TM-R. Just reach out to me to make those arrangements. Please feel free to email me at [email protected] or call me Toll Free at 1-888-666-3418 Extension 44 between 9:00 AM and 6:00 PM Pacific Time Monday through Friday to discuss any other questions you might have.
-
Meaker@Sager Company Representative
You would not really target both for one task; it would be for executing a pair of renders, each with its own card as the target.
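In practice that would look something like launching two jobs pinned to different cards via CUDA_VISIBLE_DEVICES. A rough sketch; the script name and scene files are placeholders, not a real renderer command line:

    import os
    import subprocess

    # Run two independent render jobs side by side, each pinned to its own GPU.
    jobs = []
    for gpu, scene in [("0", "scene_a.c4d"), ("1", "scene_b.c4d")]:
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu)
        jobs.append(subprocess.Popen(["python", "render_job.py", scene], env=env))

    for job in jobs:
        job.wait()  # wait for both renders to finish
-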
OK, I see. I wasn't sure if scaling GPUs (x2 in this case) would speed up rendering times with GPU-based renderers.
-
That's great, thank you Donald.
-
Meaker@Sager Company Representative
Not a single running one AFAIK. You could reach out to the software's community.