The Notebook Review forums were hosted by TechTarget, who shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information that had been posted on the forums would be preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Mobility X700. A Geforce 4 Ti 4600 on crack?

    Discussion in 'Gaming (Software and Graphics Cards)' started by el_superhombre, Dec 10, 2006.

  1. el_superhombre

    el_superhombre Notebook Consultant

    Reputations:
    75
    Messages:
    117
    Likes Received:
    0
    Trophy Points:
    30
    Just out of interest, what do you think the desktop equivalent of the Mobility X700 is?

    I have come to my own conclusion that it sits squarely between the performance of the desktop Radeon 9700 (non-pro) and the old Geforce 4 Ti 4600.

    But it is more like a Geforce 4 Ti 4600 on crack.

    What do you think?

    Here is a great site to compare the cards:

    http://www.gpureview.com/show_cards.php?card1=100&card2=135
     
  2. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,441
    Messages:
    58,200
    Likes Received:
    17,913
    Trophy Points:
    931
    With its improved shaders over the 9700 and its DX9 functionality (which the Ti 4x00 series lacked), it's more on par with a 9800np.
     
  3. Charles P. Jefferies

    Charles P. Jefferies Lead Moderator Super Moderator

    Reputations:
    22,339
    Messages:
    36,639
    Likes Received:
    5,090
    Trophy Points:
    931
    My overclocked MR X700 256MB has nearly identical 3DMark05 scores when compared to my desktop's Radeon 9800XT 256MB. It is also about as fast as a desktop X700PRO - at least when comparing 3DMarks.

    The 9800 is stronger in older games (Far Cry and games similar to that), and is more of a beast overall thanks to its 256-bit memory bus and relatively astronomical memory bandwidth. However, in newer games (FEAR . . .) the X700 is stronger because it is better at handling complex lighting.
     
  4. ltcommander_data

    ltcommander_data Notebook Deity

    Reputations:
    408
    Messages:
    1,398
    Likes Received:
    0
    Trophy Points:
    55
    I think a standard Mobility X700 would be similar to a desktop X700 non-Pro, or a desktop X1300 Pro in the current generation. It varies with clock speeds, though, since unlike desktop cards, laptop specs can vary from model to model.
     
  5. el_superhombre

    el_superhombre Notebook Consultant

    Reputations:
    75
    Messages:
    117
    Likes Received:
    0
    Trophy Points:
    30
    A 9800, wow, there goes my Ti 4600 theory... I just thought that the 9800's memory bandwidth and 256-bit interface would put it a mile ahead of the X700.
     
  6. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,441
    Messages:
    58,200
    Likes Received:
    17,913
    Trophy Points:
    931
    ATI made the x700 quite efficient with the bandwidth it had, and the cards at that time were not too memory limited anyway.

    Oh, and you shouldn't say memory bandwidth AND 256-bit bus as if they were two separate advantages; a 500 MHz 128-bit setup will technically (if timings and everything else are the same) perform the same as a 250 MHz 256-bit one.
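
    A quick back-of-the-envelope sketch in Python (a hypothetical helper, assuming a simple peak bandwidth = clock x bus width model and ignoring DDR doubling since it applies equally to both setups) shows why those two configurations come out identical:

        # Peak memory bandwidth = memory clock * bus width in bytes.
        # Timings, DDR multipliers, etc. are assumed identical, as stated above.
        def bandwidth_gb_s(clock_mhz: float, bus_width_bits: int) -> float:
            """Peak memory bandwidth in GB/s for a given clock and bus width."""
            bytes_per_transfer = bus_width_bits / 8      # bits -> bytes
            return clock_mhz * 1e6 * bytes_per_transfer / 1e9

        narrow_fast = bandwidth_gb_s(500, 128)   # 500 MHz on a 128-bit bus
        wide_slow   = bandwidth_gb_s(250, 256)   # 250 MHz on a 256-bit bus
        print(narrow_fast, wide_slow)            # both come out to 8.0 GB/s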

    The x700 will outperform the x1300 by quite a bit in most games even though it lacks SM3.
     
  7. TwilightVampire

    TwilightVampire Notebook Deity

    Reputations:
    362
    Messages:
    1,376
    Likes Received:
    0
    Trophy Points:
    55
    I've been known to break 3DMark scores of 9800s with my x700. I can also match its performance in older games (unless you bring AA into the mix) and of course beat it in newer games.

    The built-in test in ATI Tray Tools places my x700 higher than an x700 pro when I overclock.

    I had a GeForce 4400 overclocked to 4600 standards. Believe me, going back to that card and comparing it to the x700 in my lappy makes me cry at the performance difference. It's unimaginable.
     
  8. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,441
    Messages:
    58,200
    Likes Received:
    17,913
    Trophy Points:
    931
  9. el_superhombre

    el_superhombre Notebook Consultant

    Reputations:
    75
    Messages:
    117
    Likes Received:
    0
    Trophy Points:
    30
    Yeah, I owned a Ti4600 like four years ago; the card would bottom out in GTA Vice City with AA turned on, something the X700 just simply does not do. LOL, I still remember all the trash I talked when my friend got a 9700 Pro and I thought my 4600 could still beat it...

    Boy did all those 3Dmarks of his make me eat my words... :(
     
  10. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,441
    Messages:
    58,200
    Likes Received:
    17,913
    Trophy Points:
    931
    Well, the 9700 was a true generation leap if there ever was one. I just found it funny that the 8500 was nipping at the heels of the 4200 as ATI kept working on the drivers.
     
  11. el_superhombre

    el_superhombre Notebook Consultant

    Reputations:
    75
    Messages:
    117
    Likes Received:
    0
    Trophy Points:
    30
    Yes :). I suppose the new 9700 is the Geforce 8800; being DirectX 10 capable, it should serve whoever buys it well for like the next four to five years, just like the 9700 did for DirectX 9.
     
  12. ltcommander_data

    ltcommander_data Notebook Deity

    Reputations:
    408
    Messages:
    1,398
    Likes Received:
    0
    Trophy Points:
    55
    Drivers definitely improved the 8500's performance, but I think the other part may be the fact that the 8500 supported SM1.4 while the 4200 only supported SM1.3. There are effects that SM1.4 could do in one pass that SM1.3 needs two or more to do, which improves speed and saves memory bandwidth by reducing writes and readbacks. I'm not sure how many games actually opted to create an SM1.4 render path, since SM1.1 is the lowest common denominator for DX8, but the Source engine certainly does, to good effect.
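
    As a rough, hypothetical illustration of that single-pass versus multi-pass point (a simple Python sketch, not tied to any particular game or renderer): each extra pass re-reads and re-writes the render target, so framebuffer traffic grows with the pass count.

        # Rough model of framebuffer traffic for a multi-pass effect: the first
        # pass writes the render target once, and every extra pass reads it back
        # and writes it again (the writes/readbacks mentioned above).
        def framebuffer_traffic_mb(width: int, height: int, passes: int,
                                   bytes_per_pixel: int = 4) -> float:
            """Approximate read + write traffic in MB for one effect."""
            pixels = width * height
            accesses = 1 + 2 * (passes - 1)   # 1 write, then read+write per extra pass
            return pixels * bytes_per_pixel * accesses / (1024 * 1024)

        # Same effect at 1024x768: one SM1.4-style pass vs. two SM1.1/1.3-style passes.
        print(framebuffer_traffic_mb(1024, 768, passes=1))   # 3.0 MB
        print(framebuffer_traffic_mb(1024, 768, passes=2))   # 9.0 MB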

    3DMark03 also allows for separate SM1.4 and SM1.1 paths, which meant an 8500 could perform to its full potential more efficiently than a GeForce4. nVidia was of course up in arms over this, accusing Futuremark of being biased and out of touch for implementing so much SM1.4 instead of going directly to SM2.0, and for using SM1.4 but not SM1.3. Not that it would have mattered anyway, since the GeForceFX series performs so badly in SM2.0 that it would have hardly improved things. SM1.3 just adds the new effects of SM1.4 to SM1.1, but it doesn't include the speed improvements, which are architectural, so adding an SM1.3 path wouldn't have improved performance for the GeForce4 anyway. In the end, nVidia famously resorted to driver cheats in 3DMark03, which completely destroyed any credibility in their complaints. Those were interesting times indeed.
     
  13. el_superhombre

    el_superhombre Notebook Consultant

    Reputations:
    75
    Messages:
    117
    Likes Received:
    0
    Trophy Points:
    30
    Very interesting times... I remember that was right around the time the 9700 Pro was kicking nVidia's ass, and boy were they desperate. With their flagship Ti4600 annihilated by the 9700 Pro, they came up with that monstrosity, the Geforce FX. Ridiculously high specs, louder than a leafblower with that outlandish cooler, and still it got slapped side to side in the benchmarks by the 9700 Pro. It was such a bungled effort from nVidia that I am surprised they were not ruined by that fiasco.


     
  14. ltcommander_data

    ltcommander_data Notebook Deity

    Reputations:
    408
    Messages:
    1,398
    Likes Received:
    0
    Trophy Points:
    55
    I guess that's the advantage of having deep pockets. The other effect of deep pockets, though, may have been to convince game developers to hold off making DX9 games as long as possible. The GeForceFX performs fine in DX8.1, so that avoids the problem, and it bought them time to get the GeForce 6xxx out, which went directly to DX9.0c and SM3.0. Pushing for a direct jump from DX8.1 to DX9.0c seems to have been exactly what happened in Splinter Cell, if I'm not mistaken, which for some very weird reason had a huge gap without SM2.0 when it was released. If I still remember the timeline correctly, the poor GeForceFX DX9 performance issue only really exploded with HL2, by which time nVidia had already moved on to the GeForce 6xxx series.
     
  15. el_superhombre

    el_superhombre Notebook Consultant

    Reputations:
    75
    Messages:
    117
    Likes Received:
    0
    Trophy Points:
    30
    nVidia are just lucky that they got it right with the 6 series; otherwise ATI might have been the only choice we would have now. But that wouldn't be such a bad thing :) lol, nah, competition is good.
     
  16. Meaker@Sager

    Meaker@Sager Company Representative

    Reputations:
    9,441
    Messages:
    58,200
    Likes Received:
    17,913
    Trophy Points:
    931
    Graphics cards are not developed that quickly. Nvidia will have designed the FX chip around certain bounds of clock speeds and started getting it ready, when all of a sudden ATI released the 9700pro. Knowing the chips as they stood could not compete too well, Nvidia put harsher speed binning in place, bolted on massive coolers and overclocked the chips as much as possible to try to get them to the same level.

    Now, however, high-end cards are mostly dual-slot cards; just look at the difference between the 9700pro heatsink and the x800 heatsink, then the x1800xt heatsink :/
     
  17. ltcommander_data

    ltcommander_data Notebook Deity

    Reputations:
    408
    Messages:
    1,398
    Likes Received:
    0
    Trophy Points:
    55
    Well, in the case of the GeForceFX, nVidia decided to launch on an untried process node along with an untried memory type, GDDR2, both of which had problems. Not only that, supposedly their design was based on an old spec for DX9, which Microsoft changed after their falling out with nVidia over the Xbox. Add to that the fact that nVidia didn't even stay true to the old spec, but went off on their own tangent to "exceed" it, and it's no wonder the GeForceFX was so out of it.
     
  18. el_superhombre

    el_superhombre Notebook Consultant

    Reputations:
    75
    Messages:
    117
    Likes Received:
    0
    Trophy Points:
    30
    Yeah, I think the FX 5800's cooler got such a scathing reception because it really was the first of its kind. No one had really ever seen something so different before on a new card (although there was that OTES system on a Ti4200). Really, the FX 5800, despite failing in almost every category, did give birth to a revolutionary cooling system.