Anyway, what I find most pathetic is this: Nvidia is comparing their latest products, which haven't even been released, to November 2009 tech. And despite numerous revisions and model name changes, it's still not arguably better.
- All Nvidia did was provide products that can compete with ATi's offerings from November (since the mobile versions are just smaller versions of the desktop parts, architecturally unchanged).
- All their GPUs from the GTX 480M down to the 460M, and possibly the rest, cost more to manufacture. It costs Nvidia substantially more to make a 460M than it does for ATi to make an HD5870M. That cost is not generously absorbed by Nvidia; it's thrown back at the consumer!
- ATi's HD5xxx series from November 2009 is still more efficient per transistor and per watt than GF104 or GF106!
The GF104 has about 1.95 billion transistors. Cypress (HD5870) has 2.15 billion. Yet Cypress has twice the performance of the GF104.
The GF104 has 200 million fewer transistors, yet it's larger than Cypress and costs 20% more to manufacture!
For Nvidia to provide 5-10% more performance than Cypress, they need an additional BILLION transistors to compete.
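For what it's worth, here's the arithmetic behind that efficiency claim, taking the quoted transistor counts and the "twice the performance" figure at face value (a sketch only; the 2x performance ratio is the claim being argued, not a measured fact):

```python
# Perf-per-transistor sketch using the figures quoted above.
# The "2x performance" input is the claim under discussion, not measured data.
gf104_transistors = 1.95e9    # GF104
cypress_transistors = 2.15e9  # Cypress (HD 5870)

gf104_perf = 1.0              # normalize GF104 performance to 1
cypress_perf = 2.0            # claimed: Cypress has twice GF104's performance

gf104_eff = gf104_perf / gf104_transistors
cypress_eff = cypress_perf / cypress_transistors

# On these assumed numbers, Cypress delivers roughly 1.8x the
# performance per transistor of GF104.
print(f"Cypress perf/transistor advantage: {cypress_eff / gf104_eff:.2f}x")
```

Change the assumed performance ratio and the advantage scales with it, so the whole argument hinges on that 2x figure.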
Now, I'm certainly bashing Nvidia here. But don't start yelling at me; yell at Nvidia. I just believe in competitively priced, efficient hardware. Why should customers have to pay for Nvidia's financial mistakes both at the store and at home (on the electric bill)?
My guess is you are referring to DARPA? Yeah, whatever...
Nvidia's Tesla and CUDA are hardly impressive to me in light of recent events.
- Especially when recent news showed that ATi's measly $600 HD5970 outperforms the $10,000 Tesla...
With AMD's Fusion tech rolling out, do you have any doubt that AMD will provide an Accelerated Processing Unit, combining ATi Stream and AMD Opteron cores, that destroys anything Nvidia can come up with?
Nvidia CANNOT legally make x86; they do not have the license. AMD and Intel would burn them in court if they tried. But AMD can provide Stream processors (which in some cases annihilate Nvidia's CUDA) in combination with x86 architecture.
BTW, the fastest supercomputer right now is built from AMD CPUs and IBM Cell processors.
And needless to say, Intel will be watching all of this. Good luck to Nvidia, but I don't see much success; lots of wishful thinking.
-
I've not heard much about dopings varying wildly or too many varieties of silicon being used (other than strained silicon for high performance). While transistor designs may vary, they're still bound by the library their fab offers them. Their dominant logic style may differ (domino versus strict complementary, etc.), but I'm still not aware of wide variances. Perhaps there are more choices for manufacturers than I give them credit for. -
1/ Does this mean a 5870 should have twice the performance of a GTX 460? Because that's hardly the case.
2/ Source? :] Now I guess you're talking about GF100 vs. the 5870.
The rest of your post makes sense, but you should develop these two points. -
As for ATI outperforming Tesla, I would not rule that out, but you need to cite a source for such claims.
Has this been posted?
Alleged ATI Radeon HD 6870 3DMark Vantage Benchmark leaked by VR-Zone.com -
mobius1aic Notebook Deity NBR Reviewer
'bout bloody time though).
ATi has been well ahead of Nvidia in performance per transistor for quite a while now, but I'm pretty sure much of the extra logic in Nvidia GPUs is for non-graphics functionality, which means the chip suffers extra power consumption because of it. Larger chips and higher heat output also make thermal management more important, and Nvidia chips are more likely to die earlier than ATi's, especially without adequate cooling, which further drives up cost.
One last thing to note: while so many games tend to favor Nvidia products, benchmarks like 3DMark Vantage tell a different story. The Radeon 5870 beats the GTX 480 outright in DX10 graphics performance without PhysX. While that somewhat shows the GTX 480 up, Vantage is at least a purely graphical benchmark. I think it's some evidence that either AMD needs to get its act together with drivers, or Nvidia just has too many hands in the developer side of things. It's still an interesting look at the performance-per-watt aspect of the cards. Yes, it's not real-world gameplay, but I think it still reflects well on ATi and their ability to design chips that scream performance from as few transistors as possible. A game like Mass Effect 2 shows very similar performance, with a small edge for the GTX 480 over the Radeon 5870, and similar performance between the GTX 470 and the Radeon 5850.
Most of all, I'm interested in how the mid-range products fare, as I'm pining to replace the 5570 in my most-used desktop with a 57xx or comparable 6xxx series card. However, I'm also wondering how the mid-range GTS/GTX 4xx cards fare, and I may go with one of them instead to give myself some rudimentary PhysX capability. -
Let's stay focused on the topic of the 6000 series.
There's plenty of opportunities to discuss Nvidia technology elsewhere.
Is it safe to assume that it will come from the desktop 6770? Then I'll hope for desktop 5850 levels of performance.
Mobile GPU power is close to reaching a level which I've only dreamed of us seeing. -
-
So I doubt that desktop 5850 performance can be reached, considering that the manufacturing process is still 40 nm.
However, now that the GTX 480M, which consumes over 100 W, has been released, I'd think the chances of a newer Radeon card (with approx. 100 W power consumption) beating the current GTX 480M by a good margin are quite high. Come on ATI!!! (oops, AMD)
-
More benchmarks for the upcoming ATI 6870, done by the same Chinese guy and commented on by Fudzilla:
Radeon HD 6800 Series performance benchmarks leaked - Graphics - Graphics - Fudzilla
Looks like Nvidia is R.I.P.
-
-
ATi has had tessellation since before Nvidia, but it was just never used, so they could not have known where it would fall short; they likely only have a tessellation engine v2 with just fixes and nothing new.
-
A single card that can run Crysis at >40 fps at 1920x1080 on Very High with 4xAA? Damn. Excuse me while I go to the bathroom.
-
But they have stood their ground for 3 years, as the first Crysis was released in 2007.
Never played it though. -
-
-
It might go that way; the mobile 48xx series used desktop 4800 cores, and at a 550 MHz clock the Mobility 4870 was just 75 MHz behind some desktop 4850s, and it was often overclocked to that point.
Or they could release a new card to compete in the monster category of mobile (~100 W): a 68xx core barely underclocked, renamed the 6970 Mobility. Now imagine a Clevo Frankenstein with two such cards in CFX; it could scare even many performance desktops, and the price would end up quite scary too. -
All I want is for the Mob. 6870 to have 1120 or 1440 shaders. Any less will be quite a disappointment. -
@Kevin
1120-1440 shaders with no die shrink? Only if they go 100 W TDP.
We are all assuming that the above benches are for a 6870 with the same TDP as the current 5870. But if ATI increased the TDP to gain that extra performance, then there is little hope for a major gain in the mobile sector.
Only after we see the exact specifications of the 6870 or whatever card was benched in these leaked posts will we be able to judge.
All that these benches tell us now is that Nvidia is really screwed in the desktop market. -
Nothing wrong with a 100W TDP if you bring the performance to go with it. :wink:
-
-
-
I know it's been posted elsewhere on NBR but feel it should be added here too.
ATI Southern Islands codenames spotted in Catalyst 10.8 by VR-Zone.com
Gemini might also have something to do with Hybrid Crossfire with Llano. -
Any news on whether they will still be branded ATI, or will they be AMD? I know AMD announced it's dropping the ATI name, but I don't know if that will happen by the 6000 series release.
-
I'm guessing AMD-branded, and if the leaked info on the 6000s is right
Radeon HD 6800 Series performance benchmarks leaked - Graphics - Graphics - Fudzilla
I don't care WHAT they call themselves -
ViciousXUSMC Master Viking NBR Reviewer
Why would you compare a 460 to a 5870? You wouldn't; the 480 is the card that compares to the 5870, so why say SLI 460s compete with CrossFired 5870s?
I would need proof to believe those words. And no, a few titles that scale better in SLI than in CF don't count; it has to be consistent to be a true statement, as there are always a few titles that work better on one card brand than the other, especially those "made for Nvidia" endorsed titles.
Also, let's not forget the cost/power-use/heat comparisons of the current ATI/Nvidia cards, or ATI's Eyefinity technology vs. Nvidia's 3D tech. -
Even on the cited site where CF loses, the numbers don't match previous results.
The 460 is a great card, but not that great. In fact, I have seen it battling mostly against a single 5830, and only in some instances catching up to the 5850. The 5870 clearly defeats it. -
Well, I'm not sure whether the Alienware M15x will be able to support the new 6800 series. I personally hope it does. Hmm.
-
Fudzilla tells us that the 6000 series will get better performance per watt.
Radeon HD 6000 is an evolutionary design - Graphics - Fudzilla
Oh yeah, add to that a 28nm die shrink in late 2011 and notebook video cards will rock. -
-
-
-
-
As promised, infractions have been handed out for off-topic flaming and posts have been deleted. Please try and stay on topic, it isn't that hard.
-
-
-
Barts could potentially be the base GPU for mobile Blackcomb. -
Rumor: 6700-series Radeons to have 256-bit memory interfaces - The Tech Report
Midrange 67xx series w/ 256-bit GDDR5? -
-
The TDP is higher than I expected. Might mean that ATI will go 100W TDP when they go mobile (just speculation). I don't mind 100W TDP as long as you can properly underclock/undervolt it to save power when on battery.
-
I've long speculated that ATI will take advantage of the 100W ceiling, now that NV has raised it.
At least I hope they do. I want them to transition one of the higher-end chips, some 1280 or 1440 shader monster. -
-
-
I guess we will soon find out whether Clevo has totally abandoned the W870CU.
Heck, depending on where TDP goes, they might become unfeasible for the notebook manufacturers. -
I am still patiently waiting for the 28nm tech
. From the looks of it, I might even skip the whole SB platform and dive straight into Ivy Bridge (22nm).
-
One interesting rumor going around the Beyond3D and Semi|Accurate forums is that, for the 6000 series, AMD may have dropped from a 5-way ALU to a 4-way ALU, decreasing the number of stream processors per SIMD from 80 to 64. I'll readily admit I don't fully understand the concept behind a change like that, but the basics as I get them are that since the current design often can't use all 5 VLIW units anyway, eliminating one (either the special-function t-unit or one of the 4 general units) would increase efficiency and reduce die size without causing a major hit in performance. The savings in power and die size could then be used to add more SIMDs, TMUs, ROPs, a wider memory bus, or whatever uncore tweaks, to more than make up for the loss in stream processors.
So, for example, take the current Cypress XT/HD5870, which has 20 SIMD arrays of 80 ALUs (for a total of 1600 SPUs), 80 TMUs, and 32 ROPs. By going to a 4-way ALU, 20 SIMD arrays of 64 ALUs would give 1280 SPUs, while the TMU and ROP counts would stay the same. The resulting GPU would have a die size and power rating somewhere between Juniper (HD5770/Mob. HD5800) and Cypress.
The speculation is that the Barts XT/HD6770 could be that 1280 SPU, 80 TMU, 32 ROP GPU, and that it's performance would be around the level of a desktop HD5830 or HD5850.
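If it helps, the shader arithmetic in that rumor works out like this (the SIMD and ALU counts below are the rumored/quoted figures from the posts above, not confirmed specs):

```python
# Back-of-the-envelope shader counts for the rumored VLIW5 -> VLIW4 change.
# All figures are the rumored/known values quoted above, not confirmed specs.

def total_spus(simd_arrays, alus_per_simd):
    """Total stream processors = SIMD arrays x ALUs per array."""
    return simd_arrays * alus_per_simd

# Current Cypress XT (HD 5870): 20 SIMD arrays of 80 ALUs (VLIW5)
cypress = total_spus(20, 80)

# Rumored VLIW4 variant: same 20 SIMD arrays, but 64 ALUs each
barts_rumor = total_spus(20, 64)

print(cypress, barts_rumor)  # 1600 1280
print(f"ALU reduction: {1 - barts_rumor / cypress:.0%}")  # 20%
```

So the rumor amounts to a 20% ALU cut that would be offset by extra SIMDs, TMUs, ROPs, or uncore tweaks elsewhere.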
These rumors originate from a Chinese site called chiphell.com and are discussed in length on the Beyond3D Forums. The argument against this rumor is that a change like this would not be considered a "minor" tweak to Evergreen. -
Wouldn't it also discredit the rumor that, if the design were that inefficient, it wouldn't have been used in the first place? I have to imagine the folks at ATI understand the workloads games place on their GPUs by now, and would be aware of the lack of return on an overkill ALU.
-
But being that they're stuck at 40nm again, and probably having already squeezed all the wasted space out of the node that they possibly could, AMD might now be looking back at old inefficiencies that were easier to ignore before.
Again though it's just a rumor....I don't make them up, I just spread them. :wink: -
-
So those kinds of applications will take a hit, but games likely won't; and they will probably add SIMD blocks to make up the total SPU count.
ATi Mobility HD 6000 series Roadmap
Discussion in 'Gaming (Software and Graphics Cards)' started by Arioch, Jun 10, 2010.