The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.

    the relation between FPS and the bus used by your GPU

    Discussion in 'Gaming (Software and Graphics Cards)' started by miro_gt, Jan 26, 2013.

  1. miro_gt

    miro_gt Notebook Deity

    Reputations:
    433
    Messages:
    1,748
    Likes Received:
    4
    Trophy Points:
    56
    alright so here's the deal:

    I recently connected another GPU to the nVidia-equipped T61 laptop in my sig: a desktop Radeon HD 7750 video card, hooked up through the Lenovo Advanced Dock on an x1 PCIe link. The 3DMark06 scores are great (over 11k), showing about 4 to 5 times better performance compared to my 140M.

    However, the FPS in the two games I tried (Blacklight: Retribution and Team Fortress 2) is lower than what I get with my internal nVidia 140M at equal video settings, and by quite a margin: about 50%.

    Now, the internal nVidia GPU is using an x16 link and the external 7750 an x1 link, as my GPU-Z screenshot shows:

    http://oi50.tinypic.com/e6t9at.jpg

    The 7750 here is severely limited by the link it uses.

    - Would that mean that, say, a Lenovo T420s with its nVidia NVS 4200M on an x16 link would give about the same FPS as a Lenovo T430s with its NVS 5200M on an x8 link?

    Obviously the 4200M is about half as fast as the 5200M, but it is connected over a link that is twice as wide.


    ... I think in my case the x1 link is absolutely useless for video purposes, unless the video card uses drivers that can compress the data sent over the link, such as the nVidia Optimus drivers.
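
    For reference, here is a minimal Python sketch of the nominal one-way bandwidth of the links involved (theoretical per-lane figures only, and it assumes the T61's internal x16 slot is a 1.x-generation link; real-world throughput is lower):

        # Nominal per-lane PCIe bandwidth in MB/s (one direction), after encoding overhead:
        # Gen 1.x: 2.5 GT/s with 8b/10b -> 250 MB/s; Gen 2.0: 5 GT/s with 8b/10b -> 500 MB/s
        PER_LANE_MBPS = {"1.1": 250, "2.0": 500}

        def link_bandwidth(gen, lanes):
            """Theoretical one-way bandwidth of a PCIe link in MB/s."""
            return PER_LANE_MBPS[gen] * lanes

        print("HD 7750 via dock, PCIe 1.1 x1:", link_bandwidth("1.1", 1), "MB/s")    # ~250 MB/s
        print("Internal GPU, PCIe 1.1 x16:   ", link_bandwidth("1.1", 16), "MB/s")   # ~4000 MB/s
        print("NVS 5200M, PCIe 2.0 x8:       ", link_bandwidth("2.0", 8), "MB/s")    # ~4000 MB/s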
     
  2. tijo

    tijo Sacred Blame

    Reputations:
    7,588
    Messages:
    10,023
    Likes Received:
    1,077
    Trophy Points:
    581
    I don't think it will make that much of a difference at those link rates. I remember AnandTech did a comparison between PCI-E 3.0 interfaces at various link rates, and they needed a rather beefy GPU to see a difference. I don't think the NVS 5200 will be much crippled by its x8 link.

    EDIT: Here's said test in games: http://www.anandtech.com/show/5458/the-radeon-hd-7970-reprise-pcie-bandwidth-overclocking-and-msaa. Do note that this was on PCI-E 3.0, meaning that even a 3.0 x2 link is equivalent to a 2.0 x4 link. You can see a slight decrease, but if a single desktop 7970 isn't hampered too much at 3.0 x4, an NVS 5200 at 2.0 x8 should be alright.
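
    A quick sanity check of that equivalence, as a minimal sketch (nominal figures; PCI-E 3.0 roughly doubles per-lane throughput over 2.0 thanks to the higher signalling rate and 128b/130b encoding):

        # Per-lane PCIe throughput in MB/s (one direction), including encoding overhead
        gen2_lane = 5.0e9 * (8 / 10) / 8 / 1e6      # 2.0: 5 GT/s, 8b/10b   -> 500 MB/s
        gen3_lane = 8.0e9 * (128 / 130) / 8 / 1e6   # 3.0: 8 GT/s, 128b/130b -> ~985 MB/s

        print("PCIe 3.0 x2:", round(gen3_lane * 2), "MB/s")  # ~1970 MB/s
        print("PCIe 2.0 x4:", round(gen2_lane * 4), "MB/s")  # 2000 MB/s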
     
  3. bignaz

    bignaz Notebook Consultant

    Reputations:
    56
    Messages:
    155
    Likes Received:
    0
    Trophy Points:
    30
    With those cards it won't make any difference at all on PCIe 2.0. Remember, 2.0 has 2x the bandwidth of 1.0, and even on an x8 1.0 slot those cards just don't have enough power to make it a bottleneck.
     
  4. hankaaron57

    hankaaron57 Go BIG or go HOME

    Reputations:
    534
    Messages:
    1,642
    Likes Received:
    2
    Trophy Points:
    56
    Holy crap my GPU-z is outdated lol.

    You really noticed a 50% drop in TF2 frames? Do you have a specific figure?
     
  5. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    305
    Trophy Points:
    101
    No, not at all. The PCI-E bandwidth for both of those cards is far more than sufficient. After a certain point, more PCI-E bandwidth doesn't do anything. Both of those are fairly low-end video cards and don't require much. The NVS 4200 has 32 times the PCI-E bandwidth of your external card, and the NVS 5200 has 16 times. The bandwidth for both of these cards is well taken care of, regardless of one having more than the other.

    The problem is that your external card is using an x1 1.1 link. That is incredibly slow and will bottleneck ANY video card. Even when PCI-E came out, that was too slow for all but the most bottom-end video cards, and we are talking 8 years later now.
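
    As a quick check of those ratios (a minimal sketch, assuming the NVS 4200M sits on a PCI-E 2.0 x16 link and the NVS 5200M on a 2.0 x8 link, as discussed above):

        # Nominal one-way bandwidth in MB/s: PCIe 1.1 lane = 250, PCIe 2.0 lane = 500
        external_7750 = 250 * 1    # PCIe 1.1 x1 through the dock
        nvs_4200m     = 500 * 16   # PCIe 2.0 x16
        nvs_5200m     = 500 * 8    # PCIe 2.0 x8

        print(nvs_4200m / external_7750)   # 32.0 -> "32 times the bandwidth"
        print(nvs_5200m / external_7750)   # 16.0 -> "16 times the bandwidth"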
     
  6. Zymphad

    Zymphad Zymphad

    Reputations:
    2,321
    Messages:
    4,165
    Likes Received:
    355
    Trophy Points:
    151
    I remember a 5870M wasn't able to exceed the bandwidth of PCIe 2.1 @ x8.
     
  7. miro_gt

    miro_gt Notebook Deity

    Reputations:
    433
    Messages:
    1,748
    Likes Received:
    4
    Trophy Points:
    56
    With the 7750 at x1 -> FPS is about 20 on average, dropping to the low 10s but hardly going over 25.
    With the 140M at x16 -> FPS is about 30 on average, dropping to 25 at the lowest and going to 35 or so at the highest, but it is more consistent at ~30 compared to the 7750 at ~20 FPS, i.e. smoother animation with the 140M.
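
    To illustrate why the steadier card feels smoother even when the averages don't look that far apart, here is a minimal sketch with made-up frame-time traces (illustrative numbers only, not measurements from these games):

        # Hypothetical frame times in milliseconds (illustrative, not measured)
        steady_trace = [33, 34, 33, 32, 33, 34, 33, 33]     # consistent ~30 FPS
        spiky_trace  = [40, 42, 41, 100, 43, 40, 95, 42]    # occasional spikes down to ~10 FPS

        def avg_fps(frame_times_ms):
            return 1000 * len(frame_times_ms) / sum(frame_times_ms)

        def worst_fps(frame_times_ms):
            return 1000 / max(frame_times_ms)

        for name, trace in [("140M @ x16", steady_trace), ("7750 @ x1", spiky_trace)]:
            print(name, "avg:", round(avg_fps(trace), 1), "FPS, worst:", round(worst_fps(trace), 1), "FPS")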

    What I can't explain is how the 7750 at x1 scored over 11k in 3DMark06 despite the obvious problem. The FPS during the test was well into the 40s.

    I noticed the NVS 5400M is using an x16 link in the Lenovo T430. If PCIe 2.0 x8 is plenty, then why did nVidia put an x16 link on that card (I thought Lenovo wired up x16 as well because of nVidia's specs)? .... Marketing?
     
  8. hackness

    hackness Notebook Virtuoso

    Reputations:
    1,237
    Messages:
    2,367
    Likes Received:
    427
    Trophy Points:
    101
    Actually, NVIDIA has nothing to do with this in the laptop market; NVIDIA only sells chips to the OEMs. The OEMs who buy the chips then make the daughter boards and mount the chips on them. The daughter board is usually an MXM design (Asus makes weird daughter boards), and the type of PCIe link depends on the MXM slot that the OEM chose to implement.
     
  9. Qing Dao

    Qing Dao Notebook Deity

    Reputations:
    1,600
    Messages:
    1,771
    Likes Received:
    305
    Trophy Points:
    101
    Try running something a bit newer, like 3DMark11, and see if the discrepancy still exists. Also, it is hard to predict the behavior when the PCI-E bus is the bottleneck; in some things it is worse than others. The AMD card completely destroys the two Nvidia cards from a performance perspective. 3DMark06 is hardly a graphics test anymore, and it is not a game; it is usually bottlenecked by the rest of the system. And you can't be sure that the PCI-E bus will bottleneck the external card in the same way and by the same amount in every benchmark.

    It isn't marketing. Implementing a faster PCI-E bus is "free", so to speak. Everything is already in place; it just depends on whether the engineers designing the motherboard want to wire it all up or not, for whatever reason. And in this case it really doesn't matter.
     
  10. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    Optimus has to send the frame over the PCI-E bus for the integrated chip to actually output to the display. This does not happen in non-switchable setups; the card just puts it out on one of its own display outputs.
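
    As a rough back-of-the-envelope sketch of what that copy-back costs (the resolution and frame rate below are assumed for illustration, not taken from this thread):

        # Rough cost of copying finished frames back over the bus for display,
        # assuming uncompressed 32-bit colour (illustrative numbers only).
        width, height, bytes_per_pixel = 1920, 1080, 4
        fps = 60

        frame_mb = width * height * bytes_per_pixel / 1e6   # ~8.3 MB per frame
        copyback_mb_s = frame_mb * fps                       # ~498 MB/s at 60 FPS
        print(round(copyback_mb_s), "MB/s of frame copy-back traffic")

        # Compare: PCIe 1.1 x1 is ~250 MB/s, PCIe 2.0 x16 is ~8000 MB/s.
        # On an x1 1.1 link the copy-back alone would exceed the whole link,
        # which is why compression (as Optimus uses) matters there.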

    Attaching only an x8 Gen 2 bus to weaker cards is quite smart: it simplifies routing, makes the PCB cheaper, and has zero impact on performance.

    I think the 4200M has maybe around 5% of the power you would need to properly start pushing an x16 Gen 2 link, and the NVS 5200 is at about 9-10% of what you would need. That is for a single card; once you add SLI/Crossfire into the mix, the bandwidth requirements go up.
     
  11. miro_gt

    miro_gt Notebook Deity

    Reputations:
    433
    Messages:
    1,748
    Likes Received:
    4
    Trophy Points:
    56
    I don't think that is quite correct. The chip comes on a chip carrier board that is also made by the manufacturer, and that is what OEMs buy and put on their motherboards one way or another. Very few laptops come with an MXM board nowadays, and that is an additional board that the chip carrier board (with the actual chip on top) sits on.

    Here's a picture: what OEMs buy is the chip sitting on its carrier (the green board), and the OEM then puts all of that onto the MXM board (blue):
    http://www.3dnews.ru/_imgdata/img/2012/01/12/622780/s700g7a-inside-4.jpg

    Now, what bus the chip uses is already hard-wired into the chip / chip carrier, so from there on, what the OEM does is a separate matter. Here's a GPU-Z screenshot of a GPU that supports an x16 link but that the OEM chose to connect at x8 speed:
    http://farm9.staticflickr.com/8491/8300354497_01d1005c50.jpg


    Oh, trust me, I'm sure that's the bottleneck. Apparently I'm not the only one who figured that out:
    http://forum.notebookreview.com/e-g...851-diy-egpu-experiences-134.html#post6582485

    I can't run 3DMark11; I'm on XP 32-bit atm.

    So they could safely do an x2 link for the 4200M with the same result, or x4 for the 5200M, yet both are connected over much wider links. Not such a smart implementation IMO, or is there something else ..

    But the Optimus compression works quite well.
     
  12. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,431
    Messages:
    58,194
    Likes Received:
    17,902
    Trophy Points:
    931
    The simplification you gain from a smaller bus drops off each time you cut lanes, and you don't want everyone going crazy about how few lanes they have connected.

    The MSI GX60 with the 7970M has x8 Gen 2 lanes connected.