The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    HIDevolution EVOC for 3D rendering

    Discussion in 'Sager and Clevo' started by robochuck, Feb 28, 2019.

  1. robochuck

    robochuck Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    3
    Trophy Points:
    6
    After spending much time reading through people's experiences in these forums, I've pretty much decided to go with an EVOC P870TM-1 with, among other things, dual GTX 1080's.

    I'm wondering if anyone has had experience using these for GPU-based 3D rendering, such as Octane or Redshift renderers. My work requires me to be mobile often, which is why I'm investing in a laptop, but I'm also seduced by the prospect of taking advantage of rendering using dual GPU's in a mobile workstation. Any recommendations or experiences in this area would be appreciated :)

    I also use CPU-based apps like Cinema4D, which I think would benefit more from the 9900K. The CPU has only 16 PCIe lanes, so each GPU would get x8, I believe, meaning they would not be using their full x16 capability. I would also want to disable SLI, as I don't intend to run games that support it and have heard it can have a negative effect on the renderers. Is that possible?
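    As a back-of-the-envelope check on the x8/x8 concern (a sketch, assuming PCIe 3.0 as on that platform; the 0.985 GB/s-per-lane figure comes from the 8 GT/s rate and 128b/130b encoding):

```python
# Approximate one-direction PCIe 3.0 payload bandwidth per lane:
# 8 GT/s with 128b/130b encoding ~= 8 * (128/130) / 8 ~= 0.985 GB/s.
PCIE3_GBS_PER_LANE = 8 * (128 / 130) / 8

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Approximate one-direction PCIe 3.0 bandwidth in GB/s for a link width."""
    return lanes * PCIE3_GBS_PER_LANE

print(f"x16: {pcie3_bandwidth_gbs(16):.2f} GB/s")
print(f"x8:  {pcie3_bandwidth_gbs(8):.2f} GB/s")
```

    Since GPU renderers typically upload the scene once and then compute on-card, the halved link bandwidth at x8 rarely shows up in render times.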
     
    Vernoux and Dr. AMK like this.
  2. Dr. AMK

    Dr. AMK Living with Hope

    Reputations:
    3,961
    Messages:
    2,182
    Likes Received:
    4,654
    Trophy Points:
    281
    You can configure the system with Quadro GPUs if your applications benefit from them; either way, this is the most powerful mobile system on the market.
    Good luck.
     
    robochuck likes this.
  3. qon

    qon Notebook Enthusiast

    Reputations:
    27
    Messages:
    28
    Likes Received:
    20
    Trophy Points:
    6
    I was also looking for the best mobile workstation and will probably go for the HIDEvolution P870TM, I think it is a good choice.

    Right now I have a P775DM3 and I am rather satisfied. The only issue is that I needed manual fan control; for full-day workstation use, the stock fans were way too loud.
    I am running Linux and have adapted some half-finished fan control code from GitHub ( https://github.com/davidrohr/clevo-indicator), but I assume you'll run Windows. I would recommend checking the fan control software offered by Obsidian-PC; it looked pretty good to me: http://forum.notebookreview.com/thr...-by-obsidian-pc.801464/page-367#post-10863873.
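    For anyone curious what tools like clevo-indicator read under the hood: on Linux, temperatures are exposed through the hwmon sysfs tree. A minimal sketch (sensor names and paths vary by machine and kernel; writing fan duty cycles is vendor-specific and not shown):

```python
from pathlib import Path

def read_hwmon_temps():
    """Collect (sensor_name, degrees_C) pairs from /sys/class/hwmon, if present."""
    temps = []
    for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
        name_file = hwmon / "name"
        name = name_file.read_text().strip() if name_file.exists() else hwmon.name
        for temp_file in sorted(hwmon.glob("temp*_input")):
            # hwmon reports values in millidegrees Celsius.
            temps.append((name, int(temp_file.read_text()) / 1000.0))
    return temps

for sensor, celsius in read_hwmon_temps():
    print(f"{sensor}: {celsius:.1f} C")
```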

    Finally, I am not sure if it is possible, but you could go for the 2080 instead of the 1080. The 2080 will not support SLI, but for rendering that doesn't matter. There was some discussion here: http://forum.notebookreview.com/threads/clevo-2019.826781/page-30
     
    robochuck likes this.
  4. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,909
    Trophy Points:
    931
    You'd have to speak with them as further heatsink work would be required to fit two.
     
  5. Vernoux

    Vernoux Notebook Enthusiast

    Reputations:
    2
    Messages:
    16
    Likes Received:
    12
    Trophy Points:
    6
    If I were you, I would take one 1060 or 1070 for the main display/C4D viewport and one 1080 for rendering only. But you need to ask the guys at HIDevolution whether the BIOS supports such a combination. With Redshift, SLI is not a problem, and for Octane, SLI can be disabled in the display settings. As far as I know, the SLI bridge cable cannot be installed at all, but it is better to clarify how the BIOS will behave with such a setup.

    The days when a Quadro was useful for CG are long gone :)
     
    robochuck and Dr. AMK like this.
  6. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,909
    Trophy Points:
    931
    It still depends on the app; some don't benefit.
     
    robochuck likes this.
  7. robochuck

    robochuck Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    3
    Trophy Points:
    6
    Thank you Dr. AMK, I did look into Quadro GPUs, and apparently they don't offer enough of an advantage for this kind of graphics work to justify their price compared to GeForce cards. But just having the option is pretty amazing, as is being able to upgrade in the future.
     
    Dr. AMK likes this.
  8. robochuck

    robochuck Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    3
    Trophy Points:
    6
    Thanks for the suggestion, but as far as I know Cinema4D's native renderer is 100% CPU-based, so the GPUs would be for the external renderers/plugins. Some of them scale well across multiple cards, which can significantly improve render times. I'll have to ask about the SLI bridge; I would likely leave SLI disabled.
     
  9. robochuck

    robochuck Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    3
    Trophy Points:
    6
    Thanks for the link. Wow, dual 2080's would be nuts. I'm not sure the renderers take advantage of the new RTX features at this point, but I wonder if that could be an upgrade option in the future.
     
  10. Vernoux

    Vernoux Notebook Enthusiast

    Reputations:
    2
    Messages:
    16
    Likes Received:
    12
    Trophy Points:
    6
    Yes, the C4D native renderer is CPU based. When I said display/viewport, I meant that the first GPU will be busy drawing the scene in the viewport, and some of its memory will already be in use, which inevitably reduces the rendering benefit of that card and also leads to interface lag while the IPR is running. Therefore, it is better to exclude the first video card when working with the IPR, which is why it makes sense to take a cheaper GPU for that slot, since there will not be much benefit from high performance there. And doing the final many-hour render on both video cards in a laptop is not a good idea; a cloud render farm will work much better for the final rendering with a large number of samples, but for look-dev on the go this is a great option.
    But if money is not a problem, then you can put in two 1080s, why not ;)
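    Excluding the display card from a CUDA-based renderer is usually done with the standard CUDA_VISIBLE_DEVICES environment variable rather than in the BIOS. A sketch (the renderer command line below is hypothetical; substitute the actual tool):

```python
import os

# CUDA_VISIBLE_DEVICES filters which GPUs CUDA applications can see.
# "1" hides device 0 (the card driving the display/viewport) from the renderer.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")

# Hypothetical launch; replace "render_cli" with the renderer's real CLI:
# import subprocess
# subprocess.run(["render_cli", "scene.c4d"], env=env, check=True)

print(env["CUDA_VISIBLE_DEVICES"])
```

    Octane and Redshift also let you tick individual devices on or off in their own device settings, which achieves the same thing per application.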
     
    robochuck likes this.
  11. robochuck

    robochuck Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    3
    Trophy Points:
    6
    Ahhhh... I get what you mean, thank you for explaining :vbthumbsup: That makes a lot of sense. Do you think there's an advantage to putting a 2080 in for the second card (if it's even an option) or is that just overkill in this situation?
     
  12. Vernoux

    Vernoux Notebook Enthusiast

    Reputations:
    2
    Messages:
    16
    Likes Received:
    12
    Trophy Points:
    6
    It will be faster, yes. More CUDA cores + faster memory.
    But it's hugely overpriced. If the 1180 (or whatever they name it) gets a mobile version, it will be the better option, since GPU renderers only use CUDA cores for now. Before Redshift releases even a beta of 3.0 that benefits from RT cores (personally, I don't think that implementation will ever happen), NVIDIA will have launched a brand new generation :)
     
    robochuck likes this.
  13. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,909
    Trophy Points:
    931
    I think NVIDIA won't come out with a larger non-RTX chip just yet, and even if they do, they likely won't take it beyond 2070 speeds.
     
    Vernoux likes this.
  14. Donald@Paladin44

    Donald@Paladin44 Retired

    Reputations:
    13,989
    Messages:
    9,257
    Likes Received:
    5,843
    Trophy Points:
    681
    We can do that with a Special Request on the EVOC High Performance Systems P870TM-R. Just reach out to me to make those arrangements. Please feel free to email me at [email protected] or call me Toll Free at 1-888-666-3418 Extension 44 between 9:00 AM and 6:00 PM Pacific Time Monday through Friday to discuss any other questions you might have.
     
    robochuck likes this.
  15. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,909
    Trophy Points:
    931
    You would not really target both for one task; it would be for executing a pair of renders, with each card as a separate target.
     
    robochuck likes this.
  16. robochuck

    robochuck Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    3
    Trophy Points:
    6
    OK, I see. I wasn't sure whether doubling up on GPUs would speed up render times with GPU-based renderers.
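    A rough model of how multi-GPU speedup behaves when a renderer does distribute one frame across cards is Amdahl's law (the 95% parallel fraction below is an assumed figure for illustration; real numbers depend on the renderer and scene):

```python
def amdahl_speedup(gpus: int, parallel_fraction: float) -> float:
    """Amdahl's-law speedup when parallel_fraction of the work scales across GPUs."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / gpus)

# Two GPUs with 95% of the frame time parallelizable (scene upload,
# compositing, etc. stay serial) gives close to, but not quite, 2x.
print(f"{amdahl_speedup(2, 0.95):.2f}x")
```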
     
  17. robochuck

    robochuck Newbie

    Reputations:
    0
    Messages:
    7
    Likes Received:
    3
    Trophy Points:
    6
    That's great, thank you Donald
     
  18. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,436
    Messages:
    58,194
    Likes Received:
    17,909
    Trophy Points:
    931
    Not a single running one, AFAIK. You could reach out to the software's community.