
    ATi Mobility HD 6000 series Roadmap

    Discussion in 'Gaming (Software and Graphics Cards)' started by Arioch, Jun 10, 2010.

  1. ziddy123

    ziddy123 Notebook Virtuoso

    Reputations:
    954
    Messages:
    2,805
    Likes Received:
    1
    Trophy Points:
    0
    Anyway, what I find most pathetic is this: Nvidia is comparing its latest cards, which haven't even been released, to November 2009 tech. And despite numerous revisions and model name changes, they're still arguably no better.
    - All Nvidia did was provide products that can compete with ATi's offerings from November (since the mobile versions are just smaller versions of the desktop parts, architecturally unchanged).
    - All their GPUs from the GTX 480M down to the 460M, and possibly the rest, cost more to manufacture. It costs Nvidia substantially more to manufacture the 460M than it costs ATi to make the HD 5870M. That cost isn't generously absorbed by Nvidia; it's thrown back at the consumer!
    - ATi's HD 5xxx series from November 2009 is still more efficient per transistor and per watt than GF104 or GF106!

    The GF104 has about 1.95 billion transistors. Cypress (HD 5870) has 2.15 billion. Yet Cypress has twice the performance of the GF104.
    The GF104 has 200 million fewer transistors, yet it's larger than Cypress and costs 20% more to manufacture!
    For Nvidia to provide 5-10% more performance than Cypress, they need an additional BILLION transistors to compete.
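
    For what it's worth, here's that arithmetic as a quick sketch in Python (the transistor counts and the 2x / 5-10% performance figures are the ones claimed in this post, not measured data):

        # Perf per transistor, using the figures claimed above (not verified specs)
        chips = {
            # name: (transistors in billions, relative performance as claimed)
            "GF104 (GTX 460)":   (1.95, 1.0),   # baseline
            "Cypress (HD 5870)": (2.15, 2.0),   # the "twice the performance" claim
            "GF100 (GTX 480)":   (3.1,  2.1),   # "an additional billion" for +5-10%
        }
        for name, (xtors, perf) in chips.items():
            print(f"{name}: {perf / xtors:.2f} perf units per billion transistors")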


    Now I'm certainly bashing Nvidia here. But don't start yelling at me; yell at Nvidia. I just believe in competitively priced, efficient hardware. Why should customers have to pay for Nvidia's mistakes both at the store and at home (on the electric bill)?

    My guess is you are referring to DARPA? Yeah, whatever...

    Nvidia Tesla and CUDA are hardly impressive to me in light of recent events.
    - Especially when recent news showed that ATi's measly $600 HD 5970 outperforms the $10,000 Tesla...

    With AMD's Fusion tech rolling out, do you have any doubt that AMD will deliver an Accelerated Processing Unit, combining ATi Stream and AMD Opteron, that destroys anything Nvidia can come up with?

    Nvidia CANNOT legally make x86; they do not have the license. AMD and Intel would burn them in court if they tried. But AMD can provide stream processors (which in some cases annihilate Nvidia's CUDA) in combination with x86 architecture.

    BTW, the fastest supercomputer right now is built from AMD CPUs and IBM Cell processors.

    And Intel will no doubt be watching all of this. Good luck to Nvidia, but I don't see much success; lots of wishful thinking.
     
  2. anexanhume

    anexanhume Notebook Evangelist

    Reputations:
    212
    Messages:
    587
    Likes Received:
    2
    Trophy Points:
    31
    No, it isn't. But it's also an economy of scale: the more you have in common with the other chips rolling off the line, the less novel tooling you require and the lower your cost.

    I've not heard of doping varying wildly, or of too many varieties of silicon being used (other than strained silicon for high performance). While transistor designs may vary, they're still bound by the library their fab offers them. The dominant logic style may differ (domino versus strict complementary, etc.), but I'm still not aware of wide variances. Perhaps manufacturers have more choices than I give them credit for.
     
  3. Botsu

    Botsu Notebook Evangelist

    Reputations:
    105
    Messages:
    624
    Likes Received:
    0
    Trophy Points:
    30
    wat

    1/ Does this mean a 5870 should have twice the performance of a GTX 460? Because that's hardly the case.

    2/ Source? :] I guess you're now talking about GF100 vs. the 5870.

    The rest of your post makes sense, but you should develop these two points.
     
  4. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    Yeah, I was referring to DARPA, but there are other projects I've read about in the past year. I remember one was in finance, for calculating risk.

    As for ATI outperforming Tesla, I wouldn't rule that out, but you need to cite a source for such claims.

    Has this been posted?

    Alleged ATI Radeon HD 6870 3DMark Vantage Benchmark leaked by VR-Zone.com
     
  5. mobius1aic

    mobius1aic Notebook Deity NBR Reviewer

    Reputations:
    240
    Messages:
    957
    Likes Received:
    0
    Trophy Points:
    30
    I saw that on Gamespot's PC hardware forum. That bandwidth is awesome (200 GB/s! :eek: 'bout bloody time though).

    ATi has been well ahead of Nvidia in performance per transistor for quite a while now, but I'm pretty sure much of the extra logic in Nvidia GPUs is for non-graphics functionality, which means the chip suffers extra power consumption because of it. Larger chips and higher heat output also mean thermal management matters more, and Nvidia's chips are more likely to die early than ATi's, especially without adequate cooling, which further drives up cost.

    One last thing: while so many games tend to favor Nvidia products, benchmarks like 3DMark Vantage tell a different story. The Radeon 5870 beats the GTX 480 outright in DX10 graphics performance without PhysX. That's somewhat unflattering to the GTX 480, but at least it's a purely graphical benchmark. I think it's some evidence that either AMD needs to get its act together on drivers, or Nvidia just has too many hands in the developer side of things. It's still an interesting look at the performance-per-watt aspect of the cards. Yes, it's not real-world gameplay, but I think it still reflects well on ATi and their ability to design chips that scream performance from as few transistors as possible. A game like Mass Effect 2 shows very similar performance, with a slight edge to the GTX 480 over the Radeon 5870, and similar performance between the GTX 470 and the Radeon 5850.

    Most of all, I'm interested in how the mid-range products fare, as I'm pining to replace the 5570 in my most-used desktop with a 57xx or comparable 6xxx series card. I'm also wondering how the mid-range GTS/GTX 4xx cards fare; I may go with one of them instead to give myself some rudimentary PhysX capability.
     
  6. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Let's stay focused on the topic of the 6000 series.

    There are plenty of opportunities to discuss Nvidia technology elsewhere.

    Hmm, I wonder what a Mobility 6870 will look like.

    Is it safe to assume it will come from the desktop 6770? Then I'll hope for desktop 5850 levels of performance.

    Mobile GPU power is close to reaching a level I've only dreamed of seeing.
     
  7. crash

    crash NBR Assassin

    Reputations:
    2,221
    Messages:
    5,540
    Likes Received:
    13
    Trophy Points:
    206
    Yes, let's all get back on track. Attempts to drive the thread OT will result in infractions.
     
  8. Hrithan2020

    Hrithan2020 Notebook Geek

    Reputations:
    10
    Messages:
    87
    Likes Received:
    0
    Trophy Points:
    15
    The biggest problem with mobile GPUs is the restriction on power consumption (unlike desktops, where each new generation can ship more and more power-hungry cards, admittedly with very good performance increases as well).

    So I doubt desktop 5850 performance can be reached, considering the manufacturing process is still 40 nm.

    However, now that the GTX 480M, which consumes over 100 W, has been released, I'd say the odds of a newer Radeon card (at roughly 100 W) beating the current GTX 480M by a good margin are quite high. Come on ATI!!! (oops, AMD ;) )
     
  9. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
  10. ziddy123

    ziddy123 Notebook Virtuoso

    Reputations:
    954
    Messages:
    2,805
    Likes Received:
    1
    Trophy Points:
    0
    P25000 on Vantage from a single card is scary fast. What surprised me is that ATi apparently resolved their issue with tessellation. I wonder how they did it: did they redesign the tessellation engine, or did they just put two of them on the HD 6xxx instead of one? Even the Unigine score (a benchmark arguably financed by Nvidia) is higher than that of a GTX 480.
     
  11. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    ATi has had tessellation since before Nvidia, and it was just never used, so they couldn't have known it would fall short. They likely just have a tessellation engine v2 with fixes and nothing new.
     
  12. anexanhume

    anexanhume Notebook Evangelist

    Reputations:
    212
    Messages:
    587
    Likes Received:
    2
    Trophy Points:
    31
  13. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    I guess that's why they are releasing Crysis 2 in March 2011; Crysis is finally losing the battle against ATI and Nvidia :D. But it stood its ground for three years, since the first Crysis was released in 2007.

    Never played it though.
     
  14. Trottel

    Trottel Notebook Virtuoso

    Reputations:
    828
    Messages:
    2,303
    Likes Received:
    0
    Trophy Points:
    0
    Don't get your hopes up about the laptop version though, as it won't be a 6870 but a renamed mid-range card.
     
  15. anexanhume

    anexanhume Notebook Evangelist

    Reputations:
    212
    Messages:
    587
    Likes Received:
    2
    Trophy Points:
    31
    As long as CF top-end mobile cards are pretty close to top-end desktop cards as they are now, I think we'll be fine. ;)
     
  16. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    It might be that card. The mobile 48xx series used desktop 4800 cores, and at a 550 MHz clock the Mobility 4870 was just 75 MHz behind some desktop 4850s, and it was often overclocked to that point.

    Or they could release a new card to compete in the monster category of ~100 W mobile parts: a 68xx core, barely underclocked, renamed the Mobility 6970. Now imagine a Clevo Frankenstein with two such cards in CFX; it could scare even many performance desktops, and the price would end up quite scary too.
     
  17. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    All fair points.

    All I want is for the Mobility 6870 to have 1120 or 1440 shaders. Any fewer would be quite a disappointment.
     
  18. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    @Kevin

    1120-1440 shaders with no die shrink? Only if they go to a 100 W TDP.

    We are all assuming the above benches are for a 6870 with the same TDP as the current 5870. But if ATI raised the TDP to gain that extra performance, then there is little hope for a major performance gain in the mobile sector.

    Only after we see the exact specifications of the 6870, or whatever card was benched in these leaks, will we be able to judge.

    All these benches tell us right now is that Nvidia is really screwed in the desktop market.
     
  19. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    Nothing wrong with a 100W TDP if you bring the performance to go with it. :wink:
     
  20. sean473

    sean473 Notebook Prophet

    Reputations:
    613
    Messages:
    6,705
    Likes Received:
    0
    Trophy Points:
    0
  21. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
    Definitely wouldn't count them out in the desktop market. Their GTX 460 (second-generation Fermi) is certainly doing a number on AMD's HD 5830/5850, especially in SLI (even taking down the much more expensive HD 5870 CrossFire).
     
  22. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    I know it's been posted elsewhere on NBR but feel it should be added here too.

    ATI Southern Islands codenames spotted in Catalyst 10.8 by VR-Zone.com

    There's some speculation that the ",NI" suggests Southern Islands is actually Northern Islands... and that Gemini might point to a dual-GPU card for notebooks.

    Gemini might also have something to do with Hybrid Crossfire with Llano.
     
  23. nikeseven

    nikeseven Notebook Deity

    Reputations:
    259
    Messages:
    786
    Likes Received:
    0
    Trophy Points:
    30
    Any news on whether they will still be branded ATI, or will they be AMD? I know AMD announced it's dropping the ATI name, but I don't know if that will happen by the 6000 series release.
     
  24. crazycanuk

    crazycanuk Notebook Virtuoso

    Reputations:
    1,354
    Messages:
    2,705
    Likes Received:
    3
    Trophy Points:
    56
  25. ViciousXUSMC

    ViciousXUSMC Master Viking NBR Reviewer

    Reputations:
    11,461
    Messages:
    16,824
    Likes Received:
    76
    Trophy Points:
    466

    Why would you compare a 460 to a 5870? You wouldn't; the 480 is the card that compares to the 5870, so why say SLI 460s compete with CrossFire 5870s?

    I would need proof to believe that, and no, a few titles that scale better in SLI than in CF don't count; it has to be consistent to be a true statement, since there are always a few titles that work better on one brand than the other, especially the "made for Nvidia" endorsed ones.

    Also, let's not forget the cost/power/heat comparisons of the current ATI/Nvidia cards, or ATI's Eyefinity technology vs. Nvidia's 3D tech.
     
  26. ryzeki

    ryzeki Super Moderator Super Moderator

    Reputations:
    6,552
    Messages:
    6,410
    Likes Received:
    4,087
    Trophy Points:
    431
    The only proof of this I have seen is on a site with questionable results. On every other site I have seen, CF HD 5870s kick 460 SLI, no questions asked.

    Even on that site where the CF loses, their numbers don't match previous results.

    The 460 is a great card, but not that great. In fact, I have seen it battling mostly against a single 5830, and only in some instances catching up to the 5850. The 5870 clearly defeats it.
     
  27. SacredDreams

    SacredDreams Notebook Evangelist

    Reputations:
    178
    Messages:
    461
    Likes Received:
    0
    Trophy Points:
    30
    Well, I'm not sure the Alienware M15x will be able to support the new 6800 series. I personally hope it does. Hmm :p
     
  28. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
  29. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    I forget... is the issue that it maxes at 65W?
     
  30. nobodyshero

    nobodyshero Notebook Speculator

    Reputations:
    511
    Messages:
    879
    Likes Received:
    0
    Trophy Points:
    30
    I'm personally not seeing it. All the Nvidia fanboys claim the 5870 is real-world 100 W, and it works in the M15x, so the 6800 could be feasible in the M15x.
     
  31. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    Don't be like that. The Mob. 5870 is somewhere between 60 and 70W.
     
  32. nobodyshero

    nobodyshero Notebook Speculator

    Reputations:
    511
    Messages:
    879
    Likes Received:
    0
    Trophy Points:
    30
    Well, I'm just being honest. I think AW might be cagey about the max TDP in there; nobody quite knows it yet, and it handles the 5870 fine so far.
     
  33. crash

    crash NBR Assassin

    Reputations:
    2,221
    Messages:
    5,540
    Likes Received:
    13
    Trophy Points:
    206
    As promised, infractions have been handed out for off-topic flaming and posts have been deleted. Please try and stay on topic, it isn't that hard.
     
  34. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
  35. crazycanuk

    crazycanuk Notebook Virtuoso

    Reputations:
    1,354
    Messages:
    2,705
    Likes Received:
    3
    Trophy Points:
    56
  36. ggcvnjhg

    ggcvnjhg Notebook Evangelist

    Reputations:
    37
    Messages:
    616
    Likes Received:
    0
    Trophy Points:
    30
    The 460 line actually runs VERY cool and draws little power. Nothing like the rest of its brothers.
     
  37. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    The articles I linked aren't about benchmarks of the desktop HD6800 or the NI design... they're about the release date, TDP, and estimated performance (relative to the 5000 series) of the Barts XT desktop GPU (possibly the HD6770).

    Barts could potentially be the base GPU for the mobile Blackcomb.
     
  38. sgogeta4

    sgogeta4 Notebook Nobel Laureate

    Reputations:
    2,389
    Messages:
    10,552
    Likes Received:
    7
    Trophy Points:
    456
  39. JohnnyFlash

    JohnnyFlash Notebook Virtuoso

    Reputations:
    372
    Messages:
    2,489
    Likes Received:
    11
    Trophy Points:
    56
  40. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    The TDP is higher than I expected. It might mean ATI will go to a 100 W TDP when they go mobile (just speculation). I don't mind a 100 W TDP as long as you can properly underclock/undervolt the card to save power on battery.
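
    To put a rough number on why underclocking/undervolting helps so much on battery: dynamic CMOS power scales roughly with frequency times voltage squared. A minimal sketch in Python; the clocks and voltages below are made-up illustration values, not real Mobility specs:

        # Dynamic power scales roughly as P ~ f * V^2 for CMOS logic.
        def dynamic_power(p_base, f_base, v_base, f, v):
            """Scale a baseline power figure to a new clock f and voltage v."""
            return p_base * (f / f_base) * (v / v_base) ** 2

        # Hypothetical 100 W part underclocked/undervolted for battery use:
        p = dynamic_power(p_base=100.0, f_base=700.0, v_base=1.0, f=400.0, v=0.85)
        print(f"Estimated battery-mode power: {p:.0f} W")  # ~41 W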
     
  41. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    I've long speculated that ATI will take advantage of the 100 W ceiling now that NV has raised it.

    At least I hope they do. I want them to bring over one of the higher-end chips, some 1280- or 1440-shader monster.
     
  42. CarlosGFK

    CarlosGFK Notebook Evangelist

    Reputations:
    82
    Messages:
    325
    Likes Received:
    0
    Trophy Points:
    30
    Right with you on that, if possible.
     
  43. anexanhume

    anexanhume Notebook Evangelist

    Reputations:
    212
    Messages:
    587
    Likes Received:
    2
    Trophy Points:
    31
    That would be nice. I'd feel less need to get a CF/SLI setup for lasting performance.
     
  44. Kevin

    Kevin Egregious

    Reputations:
    3,289
    Messages:
    10,780
    Likes Received:
    1,782
    Trophy Points:
    581
    I guess we will soon find out whether Clevo has totally abandoned the W870CU.
    Mobile GPU power will soon (~12 months) reach the point where SLI/CF monstrosities are unnecessary.

    Heck, depending on where TDP goes, they might become infeasible for notebook manufacturers.
     
  45. Blacky

    Blacky Notebook Prophet

    Reputations:
    2,049
    Messages:
    5,356
    Likes Received:
    1,040
    Trophy Points:
    331
    I am still patiently waiting for the 28nm tech :D. From the looks of it, I might even skip the whole SB platform and dive straight into Ivy Bridge (22nm).
     
  46. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    One interesting rumor going around the Beyond3D and Semi|Accurate forums is that, for the 6000 series, AMD may have dropped from a 5-way ALU to a 4-way ALU and decreased the number of stream processors per SIMD from 80 to 64. I'll readily admit I don't fully understand the reasoning behind a change like that, but the gist of it is that since the current design often can't use all 5 VLIW units anyway, eliminating one (either the special-function t-unit or 1 of the 4 general units) would increase efficiency and reduce die size without causing a major hit to performance. The savings in power and die size could then be spent on more SIMDs, TMUs, ROPs, a wider memory bus, or other uncore tweaks, to more than make up for the loss in stream processors.

    So, for example, take the current Cypress XT/HD5870, which has 20 SIMD arrays of 80 ALUs (for a total of 1600 SPUs), 80 TMUs, and 32 ROPs. By going to a 4-way ALU, 20 SIMD arrays of 64 ALUs would give 1280 SPUs, but the TMU and ROP counts would stay the same. The resulting GPU would have a die size and power rating somewhere between Juniper (HD5770/Mob. HD5800) and Cypress.

    The speculation is that the Barts XT/HD6770 could be that 1280 SPU, 80 TMU, 32 ROP GPU, and that its performance would be around the level of a desktop HD5830 or HD5850. See the sketch below for the shader math.


    These rumors originate from a Chinese site called chiphell.com and are discussed at length on the Beyond3D Forums. The argument against this rumor is that a change like this would not be considered a "minor" tweak to Evergreen. ;)
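
    To make the shader math in that rumor explicit, here's a tiny sketch in Python (the SIMD counts and ALU widths are the rumored figures above, nothing official):

        # Shader counts for the rumored VLIW5 -> VLIW4 change.
        def total_spu(simd_arrays, alu_per_simd):
            return simd_arrays * alu_per_simd

        cypress = total_spu(simd_arrays=20, alu_per_simd=80)  # 5-wide VLIW
        barts   = total_spu(simd_arrays=20, alu_per_simd=64)  # 4-wide VLIW (rumor)

        print(f"Cypress XT (VLIW5): {cypress} SPU, 80 TMU, 32 ROP")  # 1600 SPU
        print(f"Barts XT?  (VLIW4): {barts} SPU, 80 TMU, 32 ROP")    # 1280 SPU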
     
  47. anexanhume

    anexanhume Notebook Evangelist

    Reputations:
    212
    Messages:
    587
    Likes Received:
    2
    Trophy Points:
    31
    Wouldn't the rumor also be discredited by the fact that, if the design were that inefficient, it wouldn't have shipped in the first place? I have to imagine the folks at ATI understand the workloads games place on their chips by now and would be aware of the lack of return on an overkill ALU.
     
  48. Phinagle

    Phinagle Notebook Prophet

    Reputations:
    2,521
    Messages:
    4,392
    Likes Received:
    1
    Trophy Points:
    106
    They would be aware of the problem, but that doesn't necessarily mean they would have addressed it by now... especially if the time, effort, and redesign required to cut out the partially used ALU offers less return than adding more SIMD blocks or doing a die shrink.

    But being stuck at 40nm again, and probably having already squeezed all the wasted space out of the node that they possibly could, AMD might now be looking back at old inefficiencies that were easier to ignore before.

    Again though, it's just a rumor... I don't make them up, I just spread them. :wink:
     
  49. anexanhume

    anexanhume Notebook Evangelist

    Reputations:
    212
    Messages:
    587
    Likes Received:
    2
    Trophy Points:
    31
    I understand all that. What I'm wondering is why it was there in the first place; they could have simulated its use and efficiency with a sample workload. Still, I won't be one to complain about a killer arch update on the same node.
     
  50. granyte

    granyte ATI+AMD -> DAAMIT

    Reputations:
    357
    Messages:
    2,346
    Likes Received:
    0
    Trophy Points:
    55
    The inefficiency of that design doesn't come from the design itself, but from the fact that games are optimized for Nvidia's architecture, so games won't take a hit from the change. Applications using Stream, like F@home, use all the units because they are made for ATi; try running a game and then running F@home and look at the difference in heat output.

    So those kinds of applications will take a hit, but games likely won't, and they will probably add SIMD blocks to make up the total SPU count.
     