The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved to NotebookTalk.net after the shutdown.

    Performance difference between PCIe x16 2.0 versus PCIe x16 1.0?

    Discussion in 'Gaming (Software and Graphics Cards)' started by noxxle99, Jan 29, 2008.

  1. noxxle99

    noxxle99 Notebook Deity

    Reputations:
    34
    Messages:
    922
    Likes Received:
    1
    Trophy Points:
    31
  2. noxxle99

    noxxle99 Notebook Deity

    Reputations:
    34
    Messages:
    922
    Likes Received:
    1
    Trophy Points:
    31
    bump...............
     
  3. drakoniac

    drakoniac Notebook Consultant

    Reputations:
    45
    Messages:
    183
    Likes Received:
    0
    Trophy Points:
    30
    I doubt there would be any gigantic gains (maybe a few FPS, at most); PCI-E 1.0 should be enough bandwidth for that card. Once you get to something like a 3870 or an 8800 GT/GTS, PCI-E 2.0 would be beneficial, but not entirely necessary.
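    A rough sketch of the bandwidth headroom involved (assuming the nominal one-way rates of roughly 250 MB/s per lane for PCIe 1.x and 500 MB/s per lane for PCIe 2.0; real-world throughput is lower):

        # Rough PCIe x16 bandwidth comparison using nominal per-lane figures (not measured throughput)
        PER_LANE_MB_S = {"PCIe 1.x": 250, "PCIe 2.0": 500}  # approximate MB/s per lane, one way

        def slot_bandwidth_mb_s(gen: str, lanes: int = 16) -> int:
            """Approximate one-way slot bandwidth in MB/s for a generation and lane count."""
            return PER_LANE_MB_S[gen] * lanes

        for gen in PER_LANE_MB_S:
            print(f"{gen} x16: ~{slot_bandwidth_mb_s(gen) / 1000:.1f} GB/s one way")
        # PCIe 1.x x16: ~4.0 GB/s, PCIe 2.0 x16: ~8.0 GB/s -- both far beyond what a
        # 3850-class card streams over the bus once its working set is resident.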
     
  4. StormEffect

    StormEffect Lazer. *pew pew*

    Reputations:
    613
    Messages:
    2,278
    Likes Received:
    0
    Trophy Points:
    55
    I do not think PCI-E 2.0 will provide much better performance when comparing two current cards (such as a 3870) of the same type but with different interfaces (i.e., one PCI-E 1.0 and one PCI-E 2.0).

    The standards introduced with 2.0 will pay off with cards that are designed around them. Current cards don't really exploit the new possibilities yet.

    Even the 3870 X2 utilizes PCI-E 1.1, due to its interface/bridge chip. It doesn't seem like ATI would have gone with 1.1 if it had created any significant performance decrease.
     
  5. TheGreatGrapeApe

    TheGreatGrapeApe Notebook Evangelist

    Reputations:
    322
    Messages:
    668
    Likes Received:
    0
    Trophy Points:
    30
    Actually, the 3870 is PCIe 2.0; the internal bridge chip is PCIe 1.1 because they didn't bother with a PCIe 2.0 bridge given the short wire lengths and the lack of any chipset hops involved.

    Anywhoo, for such a card (HD3850) there's very limited use for PCIe 2.0, and the primary benefit would be improved HyperMemory speed; but overall, as mentioned before, the difference would amount to small percentage differences that may or may not translate into a whole frame or two. The smaller the memory size and the greater the need to swap textures, the more the card will likely benefit from higher speeds, but overall the difference would be very minor.

    Here's a pretty good look by the guys at Tom's Hardware:
    http://www.tomshardware.com/2008/01/23/crossfire_meets_pci_express/page5.html

    The fps difference is noticeable there because the fps numbers are huge, so 2% looks like a big deal; but 2% of 30 fps is only 0.6 frames per second, so not even a whole frame, and at 60 fps it would be just over 1 fps of improvement.
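    To put that percentage into absolute frames (a quick sketch of the same arithmetic):

        # A 2% scaling difference expressed in absolute frames per second
        def fps_gain(base_fps: float, pct: float = 0.02) -> float:
            """Absolute fps added by a given fractional improvement."""
            return base_fps * pct

        for base in (30, 60, 120):
            print(f"{base} fps + 2% = {base + fps_gain(base):.1f} fps (a gain of {fps_gain(base):.1f} fps)")
        # 30 fps gains 0.6 fps, 60 fps gains 1.2 fps, 120 fps gains 2.4 fps --
        # only at very high frame rates does the gap even reach a couple of frames.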

    It would've been nice if they had also looked at minimum fps, which is likely where you would see chokepoints show up the most.
     
  6. unknown555525

    unknown555525 rawr

    Reputations:
    451
    Messages:
    1,630
    Likes Received:
    0
    Trophy Points:
    55
    I don't think people quite understand why current-gen cards will NOT benefit at ALL from PCIe 2.0. Once the data is loaded onto the video card, it stays there waiting to be rendered for as long as the 3D app is running.

    During actual gameplay, most of what the game needs is already loaded onto the video card, including the huge files such as texture data. That means system RAM is only used when streaming new data to the video card, and with a large amount of onboard memory, such as 512MB on the video card, data needs to be transferred to it even less often. The only thing left is CPU-to-GPU traffic, which is actually shrinking as new technologies offload more work directly to the GPU and cut out the CPU middleman, meaning less bandwidth is required since the CPU doesn't have to send as much data to the GPU as it previously did.

    As long as the video card has enough onboard space and can store the data it needs ahead of time, the PCIe link's bandwidth matters less and less. Technically, even a high-end video card such as the 9800GX2 could run great on a PCIe x1 link, as long as it puts the 2GB onboard the card to good use and loads everything onto the card ahead of time.
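    As a rough illustration of that point (nominal link rates assumed, and a hypothetical 512 MB texture set picked purely for the sake of the example), the one-time upload cost looks like this:

        # Time to stream a texture set once, at nominal (best-case) PCIe rates in MB/s
        LINKS_MB_S = {
            "PCIe 1.x x1": 250,
            "PCIe 1.x x16": 4000,
            "PCIe 2.0 x16": 8000,
        }

        def upload_seconds(megabytes: float, link: str) -> float:
            """Seconds to push a payload over the given link at its nominal rate."""
            return megabytes / LINKS_MB_S[link]

        for link in LINKS_MB_S:
            print(f"512 MB over {link}: {upload_seconds(512, link):.2f} s")
        # Even the slowest case is roughly two seconds of one-time loading; once the
        # data is resident in video memory, the link speed barely matters per frame.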
     
  7. ziggo0

    ziggo0 Notebook Consultant

    Reputations:
    60
    Messages:
    291
    Likes Received:
    0
    Trophy Points:
    30
    Look at it this way: if you're going to drop the green on an 8800, you might as well go 2.0 rather than 1.0?
     
  8. unknown555525

    unknown555525 rawr

    Reputations:
    451
    Messages:
    1,630
    Likes Received:
    0
    Trophy Points:
    55
    There won't be any difference at this point. If you were to pay more for the PCIe 2.0 version of the 8800, then you've just fallen victim to a marketing scheme. Since current PCIe cards are still nowhere near using up an existing PCIe x16 1.1 slot's bandwidth, there should be zero difference.
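    A last rough sketch of that headroom (again assuming the nominal one-way x16 rate of about 4 GB/s for PCIe 1.1, with illustrative frame rates):

        # Per-frame transfer budget of a PCIe 1.1 x16 link at various frame rates
        PCIE_11_X16_MB_S = 4000  # nominal one-way rate in MB/s (assumed, not measured)

        def per_frame_budget_mb(fps: int) -> float:
            """MB that could cross the bus every frame at the nominal link rate."""
            return PCIE_11_X16_MB_S / fps

        for fps in (30, 60, 120):
            print(f"{fps} fps: ~{per_frame_budget_mb(fps):.0f} MB of bus budget per frame")
        # Tens of megabytes of possible transfer per frame is far more than a card with
        # resident textures actually streams, so doubling the link rate changes very little.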