The Notebook Review forums were hosted by TechTarget, which shut them down on January 31, 2022. This static read-only archive was pulled by NBR forum users between January 20 and January 31, 2022, in an effort to make sure that the valuable technical information posted on the forums is preserved. For current discussions, many NBR forum users moved over to NotebookTalk.net after the shutdown.
Problems? See this thread at archive.org.

    Second W880CU with GTX480M review online! - this time from Anandtech

    Discussion in 'Sager and Clevo' started by alexnvidia, Jul 8, 2010.

  1. alexnvidia

    alexnvidia Notebook Deity

    Reputations:
    434
    Messages:
    1,386
    Likes Received:
    622
    Trophy Points:
    131
    If you guys aren't happy with Tom's Hardware's review of the W880CU, check out AnandTech's review!

    Fermi Goes Mobile: AVADirect's Clevo W880CU with GTX 480M - AnandTech :: Your Source for Hardware Analysis and News

    Happy reading and enjoy! :cool:

    Quoted from AnandTech, and brace yourselves, Nvidia fans, it's not a happy ending.

    " Well, we keep saying it's the fastest mobile GPU available, and that's probably because it's the fastest mobile GPU available. How much faster? That's kind of a problem.

    While the 480M takes the lead in most of the games we tested—it downright tears past the competition in Far Cry 2 and DiRT 2—in Mass Effect 2 and Left 4 Dead 2 it was actually unable to best the Mobility Radeon HD 5870. It's only when 4xAA is applied at 1080p that the 480M is able to eke out a win against the 5870 in those titles (we only showed the 4xAA results for the 480M, but you can see the other results in our W860CU review), but the margin of victory is a small one. Of course, Mass Effect 2 doesn't need ultra high frame rates and Left 4 Dead 2 (like all Source engine games) has favored ATI hardware.

    Ultimately, that seems to be the pattern here. The wins the 480M produces are oftentimes with the 5870 nipping at its heels; even compared to the 14-in-dog-years GTX 285M it only offers a moderate improvement in gaming performance. What we essentially have are baby steps between top-end GPUs, particularly when we're running DX10 games at reasonable settings. DX11 titles may be more favorable; DiRT 2 gives the 480M a 25% lead while STALKER is a dead heat; early indications are that Metro 2033 also favors NVIDIA, though we lack 5870 hardware to run those tests. You can see that DX11 mode is punishing in Metro, regardless. Our look at the desktop GTX 480 suggests that NVIDIA has more potent tessellation hardware. Will it ultimately matter, or will game developers target a lower class of hardware to appeal to a wider installation base? We'll have to wait for more DX11 titles to come out before we can say for certain.

    NVIDIA provided additional results in their reviewers' guide, which show the 480M leading the 5870 by closer to 30% on average. However, some of those are synthetic tests and often the scores aren't high enough to qualify as playable (i.e. Unigine at High with Normal tessellation scored 23.1 FPS compared to 17.3 on the 5870). Obviously, the benefit of the GTX 480M varies by game and by settings within that game. At a minimum, we feel games need to run at 30FPS to qualify as handling a resolution/setting combination effectively, and in many such situations the 480M only represents a moderate improvement over the previous 285M and the competing 5870. Is it faster? Yes. Is it a revolution? Unless the future DX11 games change things, we'd say no."
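
    Just to put the quoted Unigine numbers (23.1 FPS vs 17.3 FPS) and AnandTech's 30 FPS playability bar into perspective, here's a quick back-of-the-envelope check in Python; nothing below goes beyond simple arithmetic on the figures quoted above.

    Code:
        # Quick arithmetic check on the Unigine figures quoted above
        # (High preset, Normal tessellation); the 30 FPS bar is AnandTech's.
        PLAYABLE_FPS = 30.0

        def relative_lead(fps_a, fps_b):
            """How much faster fps_a is than fps_b, in percent."""
            return (fps_a / fps_b - 1.0) * 100.0

        gtx_480m_fps = 23.1
        mr_5870_fps = 17.3

        print(f"480M lead over the 5870: {relative_lead(gtx_480m_fps, mr_5870_fps):.1f}%")  # ~33.5%
        print(f"480M playable? {gtx_480m_fps >= PLAYABLE_FPS}")  # False
        print(f"5870 playable? {mr_5870_fps >= PLAYABLE_FPS}")   # False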
     
  2. AndrewKW

    AndrewKW Notebook Consultant

    Reputations:
    126
    Messages:
    231
    Likes Received:
    0
    Trophy Points:
    30
    That is not a very positive review at all...
     
  3. Patrck_744

    Patrck_744 Burgers!

    Reputations:
    447
    Messages:
    1,201
    Likes Received:
    41
    Trophy Points:
    66
    I'm having second thoughts now about buying an 8850.
     
  4. electrosoft

    electrosoft Perpetualist Matrixist

    Reputations:
    2,766
    Messages:
    4,116
    Likes Received:
    3,967
    Trophy Points:
    331
    There really hasn't been a major "ohhh, ahhhh" moment since Nvidia and the 280. This quote from the review is very telling:

    "Back when we took the ATI Mobility Radeon HD 5870 and NVIDIA GeForce GTX 285M and pit them against each other, the 5870's victory was met with some disappointment because it just wasn't the Hail Mary we had hoped for. Notebook graphics performance had been stagnating for so long with no competition at the top of the heap, allowing NVIDIA to refresh the G92 an absurd number of times, and yet when ATI finally decided to come out and play, the best they could do was beat the GTX 285M by about 10% on average. ATI didn't deliver a knockout blow; they just flicked NVIDIA behind the ear over and over again. Now with a cut down Fermi chip powering the GeForce GTX 480M, NVIDIA's response is to say "quit it!" and slap at ATI's hands."

    Performance across the 285M -> 5870 -> 480M is too close for a dominant victory on the mobile front, and many times it really comes down to application-specific performance instead of a gross overall prison beating.

    At this point, it comes down to power consumption, heat, and cost, IMHO.
     
  5. nobodyshero

    nobodyshero Notebook Speculator

    Reputations:
    511
    Messages:
    879
    Likes Received:
    0
    Trophy Points:
    30
    It just confirms what the more versed and nuanced users here had already tested and explored... With how far overclocking has come in the M17x area with the 5870 x2, I doubt 480M SLI is going to make a dramatic dent in the dual-card setup area either, since it is being introduced so late (late August, last I heard).

    I've already moved past interest in the 480M; I am deeply interested in Nvidia's rumored blueprint for the next line, though, which seems very promising. 2011 should be a pretty good year with Sandy Bridge coming too.
     
  6. KipCoo

    KipCoo Notebook Evangelist

    Reputations:
    99
    Messages:
    591
    Likes Received:
    0
    Trophy Points:
    30
    It just makes no sense to buy a laptop with the GTX 480M considering it commands a $600 premium over the 5870M, which is just as fast.
     
  7. ashveratu

    ashveratu Notebook Evangelist

    Reputations:
    318
    Messages:
    470
    Likes Received:
    0
    Trophy Points:
    30
    When I look at how good a product is, my main considerations are cost vs. performance. From what I see, the 5870 still comes out on top here.
     
  8. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    I am patiently waiting for 28nm mobile GPUs.
     
  9. Daniel Hahn

    Daniel Hahn Notebook Evangelist

    Reputations:
    146
    Messages:
    664
    Likes Received:
    0
    Trophy Points:
    30
    I still think the GTX 480M would score considerably better once people overvolt it to 0.9V and OC it to something like 550/1100/1200. Temp-wise this should still work, and it should offer much better performance than the Mobility Radeon HD 5870 can, even when the latter is overclocked.
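
    Just to put very rough numbers on that: the 425 MHz stock core clock below is what I've seen quoted for the 480M and is an assumption here, and performance almost never scales perfectly with clock speed, so treat this as a best-case sketch.

    Code:
        # Best-case estimate of the suggested core overclock, assuming
        # (optimistically) that performance scales linearly with core clock.
        stock_core_mhz = 425.0   # assumed GTX 480M stock core clock
        oc_core_mhz = 550.0      # the overclock suggested above

        scaling = oc_core_mhz / stock_core_mhz
        print(f"Best-case gain from the core OC alone: ~{(scaling - 1) * 100:.0f}%")  # ~29%

        # Applied to a hypothetical game where the stock 480M leads a stock 5870 by 10%:
        stock_lead = 1.10
        print(f"Hypothetical OC'd 480M lead over a stock 5870: ~{(stock_lead * scaling - 1) * 100:.0f}%")  # ~42%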

    But then again: why not just skip the GTX 480M and wait for the next ATI card or the GF104-based Nvidia mobile solutions... In my opinion, Clevo should have skipped the GF100 mobile solutions; they had to invest a lot of money into this, basically for nothing in the end. Unless we will only see 100W TDP high-end mobile cards in the future, but that would clearly be the wrong direction.
     
  10. alexnvidia

    alexnvidia Notebook Deity

    Reputations:
    434
    Messages:
    1,386
    Likes Received:
    622
    Trophy Points:
    131
    100W TDP might be heading in the wrong direction, but if you think about it, if they (Nvidia/ATI) had designed the card properly (efficiently) to work at 100W TDP while delivering performance like a desktop ATI 5850, I certainly wouldn't mind that.

    The Alienware M17x, which currently offers 2x MR5870 in CrossFire, has a combined TDP that exceeds 100W (~75W x2), as does GTX 280M SLI (~75W x2). They both perform superbly in terms of graphics rendering power. However, given that both SLI and CF have multi-card related issues (refer to the M17x forum for more info), a single card/GPU solution is always the better option.

    We have seen that although the GTX 480M is a 100W TDP card, a properly designed heatsink and fan can easily keep the heat in check. Check out the W880CU's GPU temperature at full load: it barely hits 68C, while even the MR5870 can easily top 70C. The problem with the GTX 480M is that the GPU is basically a die-harvested chip, meaning it is the full-fledged desktop GTX 480 (3 billion transistors!!) with defective parts disabled to get the desktop 465, and then heavily downclocked (probably hand-picked to work at low voltage) to get the mobile GTX 480M. That's why the performance/watt ratio is a total joke.

    I guess my point is, 100W TDP isn't really a big issue here if the GPU can justify its performance. Now if ATI (or Nvidia) can come up with a ~100W TDP card with performance like/close to a desktop ATI 5850, isn't that a lot more sensible?
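
    To make the performance-per-watt point concrete, here's a minimal sketch; the TDPs are the ~100W and ~75W figures discussed above, while the relative-performance values are placeholders for illustration only, not benchmark results.

    Code:
        # Minimal perf/watt comparison sketch. TDPs are the ballpark figures
        # from this thread; relative_perf values are illustrative placeholders.
        cards = {
            "GTX 480M":        {"tdp_w": 100.0, "relative_perf": 1.00},  # baseline
            "MR 5870":         {"tdp_w":  75.0, "relative_perf": 0.90},  # placeholder: ~10% behind
            "MR 5870 CF (2x)": {"tdp_w": 150.0, "relative_perf": 1.50},  # placeholder: imperfect scaling
        }

        for name, card in cards.items():
            perf_per_100w = card["relative_perf"] / card["tdp_w"] * 100.0
            print(f"{name:16s} relative perf per 100W: {perf_per_100w:.2f}")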
     
  11. Charles P. Jefferies

    Charles P. Jefferies Lead Moderator Super Moderator

    Reputations:
    22,339
    Messages:
    36,639
    Likes Received:
    5,082
    Trophy Points:
    931
    I will be reviewing the NP8850 shortly; I should have it next week.
     
  12. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    In and of itself, a die shrink to 28nm won't solve Fermi's manufacturing, power, and cost shortcomings... in fact, it could potentially make things worse. Not all 28nm fabs are created equal, and the options available to Nvidia from TSMC are very different from what ATI will be using at GloFo.

    Nvidia could go for TSMC's cheaper 28nm SiON/poly process, which would reduce cost and allow for a quicker transition, but they'd get stomped on by the much better 28nm HKMG process that ATI will get out of GloFo. Or, if Nvidia decides to wait for 28nm HKMG, there are the differences between TSMC's gate-last approach and GloFo's gate-first. Gate-last does have advantages over gate-first, but it's also said to have more manufacturing restrictions and to produce a larger die, which are issues Nvidia already gets wrong all on their own (result: high cost and low yields).

    So yeah, 28nm Fermi will be tastier than 40nm Fermi, but the die shrink won't necessarily make it magically delicious against 28nm ATI. Nvidia's best hope for that happening, and for regaining their lead on ATI, is if whatever the Hecatoncheires architecture turns out to be sucks hard.

    I concur. :mask:
     
  13. chunkdside

    chunkdside Notebook Enthusiast

    Reputations:
    0
    Messages:
    31
    Likes Received:
    0
    Trophy Points:
    15

    I plan on doing exactly what you said about overvolting and overclocking!!! First I need to get my laptop back from repairs and wait for a new version of NiBiTor to overvolt this card...
     
  14. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    I completely agree, but I also think that both Nvidia and ATI will work more on their architectures' performance per watt. As a result, when the 28nm GPUs come out, they will not only be smaller but also more power efficient in terms of architecture. Even in the desktop market, a bigger-die GPU like the GF100 is more expensive to produce, harder to cool, and needs a larger PSU, which all in all makes it less attractive for consumers. If you can get the same performance with less power consumption and fewer transistors (on the same process node), I am sure both consumers and companies win, and both Nvidia and ATI know that and will try to move in that direction.
    I know the GF100 is not a very good chip now, but I am inclined to believe that the architecture will improve in time, and the GF104 is proof of that.

    Somehow I doubt AMD's Fusion architecture will be able to get to the same level of performance as Intel, but it would be awesome if it did!
     
  15. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    28nm SiON/poly is the same as 28nm HKMG only in the way a pound of flesh is the same as a pound of gold. The best tweak Nvidia can make to Fermi is using the right 28nm fab process.

    Coincidentally a good example of what High-K Metal Gates can do for you. ;)

    Llano might not take the performance crown but like most people I don't need the fastest CPU out there and AMD does offer competitive performance at the same price points as Intel. What's more is Llano's APU offers more potential for gaming than Intel's IGP ever can.

    Plus now AMD's got the "John the Baptist" of PhysX, Manju Hegde (co-founder of Ageia), to push Fusion for them.
     
  16. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    Unfortunately, I don't know enough about the actual production process to discuss the matter more in depth. I assume you are right, but I also assume that Nvidia knows very well that the production process can affect their yields and performance, and they will do what it takes to get things done right.

    I agree that AMD is rather competitive at the same price point, but they still need to improve some of their features, like their version of Turbo Boost, for instance. Also, when it comes to CPU manufacturing, Intel is doing a great job.
     
  17. Quicklite

    Quicklite Notebook Deity

    Reputations:
    158
    Messages:
    1,576
    Likes Received:
    16
    Trophy Points:
    56
    Is it just me, or does something about the 480M make no sense at all? When the 480M was announced, I almost got rocked off my seat reading the hardware spec. It was almost too good to be true.


    ►With 3bn transistors, it nearly quadrupled the count of the 285M (0.754bn).
    ►Its 352 shaders more than double the 285M's 128.
    ►2GB of GDDR5 memory is unheard of in the mobile market.

    After the announcement, speculation heated up over price, performance, etc. It's understandable that the beast might rightfully command some extra cash; I suppose, if the performance were that much better, the premium could be justified. Looking at the raw specs and architecture, you'd expect it not only to ace tessellation but also to deliver at least twice the performance in normal games, right?

    Then the reviews came, showing an 18-30% performance gain in most tests: not really groundbreaking, but a step away from their infamous rebranding dealings. More worryingly, they show the card demanding a monstrous $600 over the similarly performing MR 5870.

    I'm sure nV did some work halving the GTX 465's TDP, but the 480M nevertheless still has the full 3bn transistors on it, as alexnvidia said; had they manufactured a native 352-shader chip instead of binning, things would probably be better. Then again, maybe market presence is what they wanted, not actually selling the card. When the 8800M launched it had absolutely no direct rival for some time, combined with respectably substantial gains in the then-current (also DX10) games; that probably explains how they were able to charge so much, and how so many people actually bought it. Things are different now, with the MR 5870 readily available for considerably less while offering no major performance hit. I think nV's strategy is a hit and a miss.
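
    To put some rough numbers on that premium: the ~$600 figure and the 18-30% gain are from the reviews discussed above, while the $1800 base system price below is just a made-up figure for illustration.

    Code:
        # Rough value check on the 480M premium. The $600 premium and the
        # 18-30% gain come from the discussion above; the base price is made up.
        base_price = 1800.0   # hypothetical W880CU config with an MR 5870
        premium = 600.0       # quoted GTX 480M upgrade premium

        for gain_pct in (18, 30):
            extra_cost_pct = premium / base_price * 100.0
            print(f"{gain_pct}% more performance for {extra_cost_pct:.0f}% more money "
                  f"(${premium / gain_pct:.0f} per extra percent)")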

    The G92B variants have been around so long that their drivers have been maturing for 3 years or longer; maybe that helps explain the relatively weak performance of the 480M at the moment. On the other hand, it's probably too easy to blame the drivers, because binning aside, the hardware is quite similar to the desktop GTX 465, even the 480 for that matter, which doesn't have much of a problem here.
     
  18. nobodyshero

    nobodyshero Notebook Speculator

    Reputations:
    511
    Messages:
    879
    Likes Received:
    0
    Trophy Points:
    30
    There's an old saying about that...I think we all know how it goes.


    Also, *bows* to Phinagle as always for excellent info, repped. I hear a lot about Nvidia drivers, but I have my own little SLI laptop project going and the newest drivers really aren't great either... I think Nvidia and ATI are both missing the target with drivers as it stands.
     
  19. fzhfzh

    fzhfzh Notebook Deity

    Reputations:
    289
    Messages:
    1,588
    Likes Received:
    0
    Trophy Points:
    55
    Well, TSMC is getting its 28nm HKMG (high-k first, metal-gate-last) process ready as well. The gate-last approach is arguably superior to gate-first; the cons are limitations in chip design and a more complex process, but it offers better yields, threshold voltage stability, and better performance. Even IBM and GlobalFoundries, advocates of the gate-first approach, were considering shifting over to gate-last.

    It's not true that gate-last has lower yields; the cost is higher, yes, because of the more complex process, but the yield is definitely higher than the gate-first approach at 28nm.

    Intel has used HKMG with a gate-last approach ever since Penryn, and that's the reason why Intel has been pwning AMD these past few years.

    Edit: Oh, the gate-last approach can also withstand much higher temperatures compared to gate-first, which might help Nvidia.
     
  20. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    That's very informative. Thanx. +rep.
     
  21. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    A 28nm process that produces a larger die will cut fewer chips out of the same wafer. You can only cut so many 1" squares out of a 10"x10" piece of paper... increasing the size of each square to 1.1" will yield fewer squares.
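
    Here's a rough sketch of that effect using the standard gross-dies-per-wafer approximation; the die areas below are purely illustrative, not actual GF100 or ATI figures.

    Code:
        # Gross dies per wafer, standard approximation:
        #   dies ~= pi*(d/2)^2 / A  -  pi*d / sqrt(2*A)
        # where d = wafer diameter (mm), A = die area (mm^2); the second term
        # accounts for partial dies lost around the wafer edge.
        import math

        def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
            area_term = math.pi * (wafer_diameter_mm / 2.0) ** 2 / die_area_mm2
            edge_term = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
            return int(area_term - edge_term)

        for die_area in (300.0, 400.0, 500.0):   # mm^2, illustrative only
            print(f"{die_area:.0f} mm^2 die -> ~{gross_dies_per_wafer(300.0, die_area)} dies per 300mm wafer")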

    Gate-last definitely has its advantages, and the chances are good that it'll be the direction all the foundries head for 22nm and under, but the cons associated with it exacerbate issues that Nvidia already deals with, and that's not going to help them gain any ground on ATI.


    Yup. Which is why some people have high hopes for Llano combining high-K metal gates with silicon on insulator tech.

    Nvidia will be plenty happy to hear they can make their GPUs run even hotter. :biggrin:
     
  22. JimmyC

    JimmyC Notebook Consultant

    Reputations:
    14
    Messages:
    245
    Likes Received:
    7
    Trophy Points:
    31
    Any word on what the next flagship Mobility card from ATI will be and/or when?
     
  23. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
  24. JimmyC

    JimmyC Notebook Consultant

    Reputations:
    14
    Messages:
    245
    Likes Received:
    7
    Trophy Points:
    31
    Thank ye kindly.