The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums was preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Diff between DDR n GDDR

    Discussion in 'Gaming (Software and Graphics Cards)' started by just_geek, Jan 11, 2010.

  1. just_geek

    just_geek Notebook Guru

    Reputations:
    0
    Messages:
    72
    Likes Received:
    0
    Trophy Points:
    15
    Hi all, I want to know: what's the difference between DDR3, GDDR3, and GDDR5 on a GPU? Does the G in front of DDR make a big difference in the performance of the card?
    thanks all
     
  2. stefanp67

    stefanp67 Notebook Consultant

    Reputations:
    238
    Messages:
    264
    Likes Received:
    0
    Trophy Points:
    30
    If I understand it correctly, DDR3 and GDDR3 are completely different, both electrically and physically. GDDR5 is twice as fast as GDDR3, but how else it differs from GDDR3 I have no idea. The bandwidth formula for all three interfaces is:

    (bus width in bits / 8) x (effective clock in MHz) = DDR3/GDDR3/GDDR5 bandwidth in MB/s

    Typical real clocks/effective clocks of DDR3, GDDR3, GDDR5 are:

    DDR3 = 667/1333 MHz real clock/effective clock (HD4650M DDR3)
    GDDR3 = 950/1900 MHz real clock/effective clock (Nvidia GTX260M)
    GDDR5 = 1000/4000 MHz real clock/effective clock (Radeon HD5870M)

    The interface bandwidths for a 128-bit wide bus with typical effective clocks would be:

    DDR3 = (128/8)*1333 = 21.3 GB/s
    GDDR3 = (128/8)*1900 = 30.4 GB/s
    GDDR5 = (128/8)*4000 = 64.0 GB/s
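
    The formula above can be sketched in Python (the function name is mine, purely for illustration):

```python
# bandwidth (MB/s) = (bus width in bits / 8) * effective clock in MHz
def bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth in GB/s (using 1 GB = 1000 MB)."""
    return (bus_width_bits / 8) * effective_clock_mhz / 1000

# Typical effective clocks from the table above, on a 128-bit bus:
print(bandwidth_gbs(128, 1333))  # DDR3  -> ~21.3 GB/s
print(bandwidth_gbs(128, 1900))  # GDDR3 -> 30.4 GB/s
print(bandwidth_gbs(128, 4000))  # GDDR5 -> 64.0 GB/s
```

    This is the theoretical peak; real-world throughput is lower due to refresh, command overhead, and access patterns.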

    So the difference between DDR3 and GDDR3 seems to be that GDDR3 generally runs at higher clocks and on a wider bus (256-bit vs 128-bit). The difference between GDDR3 and GDDR5 is that GDDR5 transfers twice as much data per real clock cycle.

    Please correct me if I have misunderstood anything.
     
  3. superj

    superj Notebook Geek

    Reputations:
    6
    Messages:
    82
    Likes Received:
    1
    Trophy Points:
    16
    GDDR5 is twice as fast over the same bus width, but it seems like the bus width is often reduced on ATI's GDDR5 cards. The high-end ATI cards run a 256-bit wide bus while Nvidia runs 512-bit GDDR3. The advantage of GDDR5 is lower heat and voltages, plus the ability to save some cost by going with a narrower bus to achieve similar throughput.
    I bet once Nvidia releases their DX11 parts and the competition kicks up a notch there will be a 512-bit GDDR5 ATI card, as the current ATI options are a bit starved for memory bandwidth.
     
  4. osomphane

    osomphane Notebook Evangelist

    Reputations:
    81
    Messages:
    426
    Likes Received:
    0
    Trophy Points:
    30
    Wikipedia ( http://en.wikipedia.org/wiki/GDDR3):
    Graphics Double Data Rate 3 is a graphics card-specific memory technology, designed by ATI Technologies[1] with the collaboration of JEDEC.
    It has much the same technological base as DDR2, but the power and heat dispersal requirements have been reduced somewhat, allowing for higher performance memory modules, and simplified cooling systems. Unlike the DDR2 used on graphics cards, GDDR3 is unrelated to the JEDEC DDR3 specification. This memory uses internal terminators, enabling it to better handle certain graphics demands. To improve bandwidth, GDDR3 memory transfers 4 bits of data per pin in 2 clock cycles.
    The GDDR3 interface transfers two 32 bit wide data words per clock cycle from the I/O pins. Corresponding to the 4n prefetch, a single write or read access consists of a 128 bit wide, one-clock-cycle data transfer at the internal memory core and four corresponding 32 bit wide, one-half-clock-cycle data transfers at the I/O pins. Single-ended unidirectional Read and Write Data strobes are transmitted simultaneously with Read and Write data respectively in order to capture data properly at the receivers of both the Graphics SDRAM and the controller. Data strobes are organized per byte of the 32 bit wide interface.

    http://en.wikipedia.org/wiki/GDDR5
    GDDR5 (Graphics Double Data Rate, version 5) is a type of graphics card memory. It conforms to the standards which were set out in the GDDR5 specification by the JEDEC. GDDR5 is the successor to GDDR4. Unlike its predecessors, it has two parallel DQ links which provide doubled I/O throughput when compared to GDDR4. GDDR5 SGRAM is a high performance dynamic random-access memory designed for applications requiring high bandwidth. GDDR5 SGRAM uses an 8n prefetch architecture and a DDR interface to achieve high performance operation and can be configured to operate in x32 mode or x16 (clamshell) mode, which is detected during device initialization. The GDDR5 interface transfers two 32 bit wide data words per WCK clock cycle to/from the I/O pins. Corresponding to the 8n prefetch, a single write or read access consists of a 256 bit wide two CK clock cycle data transfer at the internal memory core and eight corresponding 32 bit wide one-half WCK clock cycle data transfers at the I/O pins.
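
    The quotes above explain why the "effective" clock differs from the real clock: DDR3 and GDDR3 transfer data on both edges of the bus clock (2 transfers per cycle), while GDDR5 runs DDR on a WCK clock at twice the CK rate (4 transfers per CK cycle). A minimal sketch of that relationship (the function and table names are mine):

```python
# Data transfers per pin per real clock cycle, per memory type.
# DDR3/GDDR3: double data rate on the bus clock -> 2 transfers/cycle.
# GDDR5: DDR on a WCK clock running at twice CK  -> 4 transfers per CK cycle.
TRANSFERS_PER_CLOCK = {"DDR3": 2, "GDDR3": 2, "GDDR5": 4}

def effective_clock_mhz(mem_type, real_clock_mhz):
    """The 'effective' (marketing) clock derived from the real clock."""
    return real_clock_mhz * TRANSFERS_PER_CLOCK[mem_type]

print(effective_clock_mhz("DDR3", 667))    # -> 1334 (quoted as 1333 earlier, a rounding of 666.7 MHz)
print(effective_clock_mhz("GDDR3", 950))   # -> 1900
print(effective_clock_mhz("GDDR5", 1000))  # -> 4000
```

    This matches the real/effective clock pairs listed earlier in the thread.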
     
  5. Alexrose1uk

    Alexrose1uk Music, Media, Game

    Reputations:
    616
    Messages:
    2,324
    Likes Received:
    13
    Trophy Points:
    56
    Nvidia isn't exactly known for common use of a 512-bit memory bus; only Fermi and the GTX 285/280 have used that level. More commonly Nvidia uses 384-bit or less.

    The 8800GTX was 384-bit IIRC, and most cards recently have been less, including the GT200 and G92 based chips.
    The reason ATI used a 128-bit bus width is that 1000 MHz GDDR5 is equivalent to 1000 MHz GDDR3 on a 256-bit bus: it's cheaper, the card is simpler to build, and there is no performance loss. Some of ATI's desktop cards with GDDR5 and a 256-bit bus have far higher memory bandwidth than most recent Nvidia cards.
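
    The equivalence claimed above is easy to verify: GDDR5 moves 4 bits per pin per real clock cycle versus GDDR3's 2, so halving the bus width at the same real clock leaves peak bandwidth unchanged. A quick check (function name is mine, just for illustration):

```python
# Peak bandwidth in GB/s from bus width, real clock, and transfers per clock.
def peak_gbs(bus_bits, real_clock_mhz, transfers_per_clock):
    return (bus_bits / 8) * real_clock_mhz * transfers_per_clock / 1000

gddr5_128 = peak_gbs(128, 1000, 4)   # 128-bit GDDR5 @ 1000 MHz real clock
gddr3_256 = peak_gbs(256, 1000, 2)   # 256-bit GDDR3 @ 1000 MHz real clock
print(gddr5_128, gddr3_256)  # both 64.0 GB/s
```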

    A 256-bit bus with GDDR5 is completely adequate right now; GDDR5 on a 512-bit bus would deliver a huge amount more bandwidth. The last ATI card which used a 512-bit bus was the 2900XT IIRC. With the higher memory speeds these days they no longer need to, and the narrower bus results in a much thinner, cheaper-to-produce PCB (reduced layering requirements).

    Nvidia are now starting to use GDDR5 for the same reason. They couldn't use it before because the memory controller had to be redesigned, whereas ATI's 4 series natively supported the specification, allowing them to field the 4850 with GDDR3 and the 4870 with the new GDDR5. It's unfortunate they chose to go 128-bit for the 57 series, but at the end of the day it was to reduce cost and complexity; given a card that performs within 10% of the last gen while sitting in a lower product family, they were onto a win! The desktop 58 series still uses a 256-bit bus and GDDR5, which gives it among the highest memory bandwidth of any consumer card on the market right now (Fermi doesn't count as it's not out yet). The 6 series from ATI will be a new architecture, so who knows what they'll use there, as they're not out till late 2010 (the ATI 2, 3, 4 and 5 series are all based around the same underlying architecture).
     
  6. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    A lot of cheaper and mainstream GPUs use the same memory chips as your system does - DDR2 and DDR3. These differ from higher-end cards that use GDDR3 and GDDR5.